﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-ivaneeo's blog-随笔分类-云</title><link>http://www.blogjava.net/ivanwan/category/47650.html</link><description>自由的力量，自由的生活。</description><language>zh-cn</language><lastBuildDate>Fri, 05 Jun 2015 16:58:38 GMT</lastBuildDate><pubDate>Fri, 05 Jun 2015 16:58:38 GMT</pubDate><ttl>60</ttl><item><title>hadoop生态圈</title><link>http://www.blogjava.net/ivanwan/archive/2015/04/25/424664.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 25 Apr 2015 06:08:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/04/25/424664.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/424664.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/04/25/424664.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/424664.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/424664.html</trackback:ping><description><![CDATA[<div>http://www.csdn.net/article/2014-01-02/2817984-13-tools-let-hadoop-fly<br />好用的数据工具<br /><div>http://blog.itpub.net/7816530/viewspace-1119924/</div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/424664.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-04-25 14:08 <a href="http://www.blogjava.net/ivanwan/archive/2015/04/25/424664.html#Feedback" target="_blank" 
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>mesos调度框架</title><link>http://www.blogjava.net/ivanwan/archive/2015/04/15/424426.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Tue, 14 Apr 2015 20:49:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/04/15/424426.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/424426.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/04/15/424426.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/424426.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/424426.html</trackback:ping><description><![CDATA[<div style="text-align: center;"><div style="text-align: left; ">http://m.blog.csdn.net/blog/ebay/43529401</div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/424426.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-04-15 04:49 <a href="http://www.blogjava.net/ivanwan/archive/2015/04/15/424426.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>centos6.5 docker install</title><link>http://www.blogjava.net/ivanwan/archive/2015/04/02/424049.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 02 Apr 2015 04:41:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/04/02/424049.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/424049.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/04/02/424049.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/424049.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/424049.html</trackback:ping><description><![CDATA[<div><p style="padding: 0px; margin: 8px 0px; line-height: 25.2000007629395px; letter-spacing: 0.5px; color: #333333; font-family: 'Bitstream Vera Sans', 'Lucida Grande', Verdana, Lucida, sans-serif; background-color: #ffffff;">Run yum makecache to rebuild the yum metadata cache.</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">Add the EPEL repository:</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">rpm -Uvh&nbsp;http://ftp.sjtu.edu.cn/fedora/epel/6/i386/epel-release-6-8.noarch.rpm</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">Install Docker:</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;"><span style="padding: 0px; margin: 0px; font-family: Cabin, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; line-height: 20px;">You will need&nbsp;</span><a href="https://access.redhat.com/site/articles/3078#RHEL6" rel="nofollow" style="padding: 0px; margin: 0px; color: #ff8373; outline: 0px; font-size: 12px;">RHEL 6.5</a><span style="padding: 0px; margin: 0px; font-family: Cabin, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; line-height: 20px;">&nbsp;or higher, with a RHEL 6 kernel version 2.6.32-431 or higher as this has specific kernel fixes to allow Docker to work.</span></p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">CentOS 6.5 already ships the <span style="padding: 0px; margin: 0px; font-family: Cabin, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; line-height: 20px;">2.6.32-431 kernel, so it is best to install on this release.</span></p><pre style="padding: 5px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px; line-height: 18px; font-size: 9pt; font-family: 'Courier New', Arial; border-width: 1px 1px 1px 5px; border-style: solid; border-color: #dddddd #dddddd #dddddd #6ce26c; color: #333333; background: #f6f6f6;">yum -y install docker-io</pre><span style="color: #333333; font-family: Verdana, sans-serif, 宋体; font-size: 12.5px; line-height: 22.5px; background-color: #ffffff;">Upgrade:</span><pre style="padding: 5px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px; line-height: 18px; font-size: 9pt; font-family: 'Courier New', Arial; border-width: 1px 1px 1px 5px; border-style: solid; border-color: #dddddd #dddddd #dddddd #6ce26c; color: #333333; background: #f6f6f6;">yum -y update docker-io</pre><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">Manual upgrade:</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;"></p><pre style="padding: 5px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px; line-height: 18px; font-size: 9pt; font-family: 'Courier New', Arial; border-width: 1px 1px 1px 5px; border-style: solid; border-color: #dddddd #dddddd #dddddd #6ce26c; color: #333333; background: #f6f6f6;">wget https://get.docker.io/builds/Linux/x86_64/docker-latest -O docker
mv -f docker /usr/bin/docker</pre><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">The upgrade is complete.</p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;"></p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;">Start the service:</p><pre style="padding: 5px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px; line-height: 18px; font-size: 9pt; font-family: 'Courier New', Arial; border-width: 1px 1px 1px 5px; border-style: solid; border-color: #dddddd #dddddd #dddddd #6ce26c; color: #333333; background: #f6f6f6;">service docker start</pre><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;"><span style="padding: 0px; margin: 0px; line-height: 1.5;">Start on boot:</span></p><p style="padding: 0px; margin: 8px 0px; line-height: 22.5px; letter-spacing: 0.5px; font-size: 12.5px; color: #333333; font-family: Verdana, sans-serif, 宋体; background-color: #ffffff;"></p><pre style="padding: 5px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px; line-height: 18px; font-size: 9pt; font-family: 'Courier New', Arial; border-width: 1px 1px 1px 5px; border-style: solid; border-color: #dddddd #dddddd #dddddd #6ce26c; color: #333333; background: #f6f6f6;">chkconfig docker on</pre></div><img src ="http://www.blogjava.net/ivanwan/aggbug/424049.html" width = "1" height = "1" /><br><br><div align=right><a 
style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-04-02 12:41 <a href="http://www.blogjava.net/ivanwan/archive/2015/04/02/424049.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>docker run restart</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/28/423906.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 28 Mar 2015 02:31:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/28/423906.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423906.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/28/423906.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423906.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423906.html</trackback:ping><description><![CDATA[<div>http://docs.docker.com/articles/host_integration/</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423906.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-28 10:31 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/28/423906.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>mincloud install log</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/27/423895.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 27 Mar 2015 10:48:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/27/423895.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423895.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/27/423895.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423895.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423895.html</trackback:ping><description><![CDATA[<div>172.20.20.8 mysql-mm1</div><div>172.20.20.11 mysql-mm2</div><div>172.20.20.10 mysql-data1</div><div>172.20.20.9 mysql-data2</div><div>172.20.20.10 mysql-sql1</div><div>172.20.20.9 mysql-sql2<br /><br /><br /><div><div>mysql-mm1:</div><div>&nbsp; docker run -d --name="mysql_mm1" --net=host -v /opt/mysql:/usr/local/mysql mysql_mm/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; ndb_mgmd -f /usr/local/mysql/data/mysql-cluster/config.ini &amp;&amp; /usr/sbin/sshd -D'</div><div>mysql-mm2:</div><div>&nbsp; docker run -d --name="mysql_mm2" --net=host -v /opt/mysql:/usr/local/mysql mysql_mm/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; ndb_mgmd -f /usr/local/mysql/data/mysql-cluster/config.ini &amp;&amp; zabbix_agentd &amp;&amp; /usr/sbin/sshd -D'</div><div>mysql-data1:</div><div>&nbsp; docker run -d --name="mysql_data1" --net=host -v /opt/mysql:/usr/local/mysql mysql_data/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /usr/local/mysql/bin/ndbd 
&amp;&amp; zabbix_agentd &amp;&amp; /usr/sbin/sshd -D'</div><div>mysql-data2:</div><div>&nbsp; docker run -d --name="mysql_data2" --net=host -v /opt/mysql:/usr/local/mysql mysql_data/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /usr/local/mysql/bin/ndbd &amp;&amp; zabbix_agentd &amp;&amp; /usr/sbin/sshd -D'</div><div>mysql-sql1:</div><div>&nbsp; docker run -d --name="mysql_sql1" --net=host -v /opt/mysql:/usr/local/mysql mysql_sql/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /usr/local/mysql/bin/mysqld_safe --user=mysql'</div><div>mysql-sql2:</div><div>&nbsp; docker run -d --name="mysql_sql2" --net=host -v /opt/mysql:/usr/local/mysql mysql_sql/ubuntu /bin/bash -exec 'echo -e "172.20.20.7 mysql-mm1\n172.20.20.10 mysql-mm2\n172.20.20.8 mysql-data1\n172.20.20.9 mysql-data2\n172.20.20.8 mysql-sql1\n172.20.20.9 mysql-sql2\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /usr/local/mysql/bin/mysqld_safe --user=mysql'</div><div></div></div><div></div><div>haproxy &amp;&amp; nginx:&nbsp;</div><div>&nbsp; docker run -d --name="loadbalancer_master"&nbsp;-p 8888:8888&nbsp;-p 6080:6080 -p 8089:8089 -p 8774:8774 -p 9696:9696 -p 9292:9292 -p 8776:8776 -p 5000:5000 -p 8777:8777 -p 11211:11211 -p 11222:11222 -p 5672:5672 -p 35357:35357 -p 8181:2181 -p 10389:10389 -p 2222:22 -p 80:80 -p 1936:1936 -p 3306:3306 -p 10052:10052 -p 10051:10051 -p 8080:8080 -v /opt/etc/nginx/conf:/usr/local/nginx-1.0.6/conf -v /opt/etc/haproxy:/etc/haproxy loadbalancer/ubuntu /bin/bash -exec 'echo -e "127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; service haproxy start &amp;&amp; /usr/local/nginx-1.0.6/sbin/nginx &amp;&amp; zabbix_agentd &amp;&amp; 
/usr/sbin/sshd -D'</div><div></div><div>redis_master: &nbsp;</div><div>&nbsp; docker run -d --name="redis_master" -p 18:22 -p 6379:6379 -p 6380:6380 redis_master/ubuntu /bin/bash -exec '/usr/local/webserver/redis/start.sh &amp;&amp; /usr/sbin/sshd -D'</div><div></div><div>redis_slave:&nbsp;</div><div>&nbsp; docker run -d --name="redis_slave1" -p 18:22 -p 6379:6379 -p 6380:6380 redis_slave/ubuntu /bin/bash -exec 'echo -e "172.20.20.10 redis-master\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /usr/local/webserver/redis/start.sh &amp;&amp; /usr/sbin/sshd -D'&nbsp;<br /><br /><div>rabbitmq: &nbsp; &nbsp; &nbsp; &nbsp;</div><div>&nbsp; docker run -d --name="rabbitmq_master" -p 2222:22 -p 25672:25672 -p 15672:15672 -p 5672:5672 -p 4369:4369 -p 10051:10050 rabbitmq/ubuntu /bin/bash -exec 'echo -e "172.20.20.10 rabbitmq-master\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /etc/init.d/rabbitmq-server start &amp;&amp; /usr/sbin/sshd -D'<br />&nbsp;<br />mule:<br />&nbsp; docker run -d --name="mule1" -p 5005:5005 -p 2222:22 -p 9999:9999 -p 9003:9003 -p 9000:9000 -p 9001:9001 -p 9004:9004 -v /opt/mule:/opt/mule-standalone-3.5.0_cloud mule/ubuntu /bin/bash -exec 'echo -e "192.168.1.180 lb-master\n192.168.1.180 controller-node\n127.0.0.1 localhost" &gt;&gt; /etc/hosts &amp;&amp; /usr/sbin/sshd &amp;&amp; export JAVA_HOME=/opt/jdk1.7.0_51 &amp;&amp; export PATH=$JAVA_HOME/bin:$PATH &amp;&amp; /opt/mule-standalone-3.5.0_cloud/bin/mule'<br /><br /><br />zentao:<br /><p>&nbsp; docker run -d --name="zentao" -p 22222:22 -p 10008:80 -v /opt/www/html/zentaopms:/opt/zentao --privileged=true zentao/ubuntu /bin/bash -exec 'service apache2 start &amp;&amp; /usr/sbin/sshd -D'</p>websocket-tomcat:</div><div>&nbsp; docker run -d --name="websocket_tomcat1" -p 8888:8080 -p 2222:22 -v /opt/apache-tomcat-8.0.15:/opt/apache-tomcat websocket-tomcat/ubuntu /bin/bash -exec 'echo -e "192.168.1.180 lb-master\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; export JAVA_HOME=/opt/jdk1.7.0_51 
&amp;&amp; /opt/apache-tomcat/bin/startup.sh &amp;&amp; /usr/sbin/sshd -D'<br /><br />&nbsp;docker run -d --name="guacamole1" -p 8088:8088 -p 38:22 -v /opt/apache-tomcat-7.0.53:/opt/apache-tomcat guacamole/ubuntu /bin/bash -exec 'echo -e "192.168.1.150 lb-master\n127.0.0.1 localhost" &gt; /etc/hosts &amp;&amp; /etc/init.d/guacd start &amp;&amp; /opt/apache-tomcat/bin/start-tomcat.sh &amp;&amp; /usr/sbin/sshd -D'</div></div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/423895.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-27 18:48 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/27/423895.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>mysql cluster install faq</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/27/423893.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 27 Mar 2015 08:43:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/27/423893.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423893.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/27/423893.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423893.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423893.html</trackback:ping><description><![CDATA[<div>http://www.docin.com/p-558099649.html</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423893.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-27 16:43 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/27/423893.html#Feedback" target="_blank" 
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>centos7 testing yum</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/26/423873.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 26 Mar 2015 15:32:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/26/423873.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423873.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/26/423873.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423873.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423873.html</trackback:ping><description><![CDATA[<pre style="box-sizing: border-box; overflow: auto; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 13px; padding: 8px 15px; margin-top: 0px; margin-bottom: 0px; line-height: 18px; word-break: break-all; word-wrap: break-word; color: #333333; border: 0px; border-radius: 3px; white-space: pre-wrap; background-color: #f7f7f7;"><section id="item-e9ef350a274817390445" itemprop="articleBody" style="box-sizing: border-box; font-size: 16px; word-wrap: break-word; color: #4a4a4a; font-family: 'Helvetica Neue', Helvetica, 'ヒラギノ角ゴ ProN W3', 'Hiragino Kaku Gothic ProN', メイリオ, Meiryo, sans-serif; white-space: normal; background-color: #ffffff;"><h1>Step 1: Create a file named /etc/yum.repos.d/virt7-testing.repo.</h1><div data-lang="txt" style="box-sizing: border-box; border-radius: 3px; margin: 1em 0px; line-height: 0; background-color: #f7f7f7;"><div style="box-sizing: border-box; color: #555555; display: inline-block; padding: 3px 6px; margin: 0px; line-height: 1; font-size: 12px; background-color: rgba(0, 0, 0, 0.0666667);"><span style="box-sizing: 
border-box;">/etc/yum.repos.d/virt7-testing.repo</span></div><div style="box-sizing: border-box; background: #ffffff;"><pre style="box-sizing: border-box; overflow: auto; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 13px; padding: 8px 15px; margin-top: 0px; margin-bottom: 0px; line-height: 18px; word-break: break-all; word-wrap: break-word; color: #333333; border: 0px; border-radius: 3px; white-space: pre-wrap; background-color: #f7f7f7;">[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
enabled=0
gpgcheck=0</pre></div></div><h1><a href="http://qiita.com/DQNEO/items/e9ef350a274817390445#%E3%82%B9%E3%83%86%E3%83%83%E3%83%972-%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB%E3%81%97%E3%81%BE%E3%81%99" style="box-sizing: border-box; color: #337ab7; text-decoration: none; word-wrap: break-word; word-break: break-all; background-color: transparent;"></a>Step 2: Install it.</h1><div data-lang="text" style="box-sizing: border-box; border-radius: 3px; margin: 1em 0px; line-height: 0; background-color: #f7f7f7;"><div style="box-sizing: border-box; background: #ffffff;"><pre style="box-sizing: border-box; overflow: auto; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 13px; padding: 8px 15px; margin-top: 0px; margin-bottom: 0px; line-height: 18px; word-break: break-all; word-wrap: break-word; color: #333333; border: 0px; border-radius: 3px; white-space: pre-wrap; background-color: #f7f7f7;">sudo yum --enablerepo=virt7-testing install docker</pre></div></div><p style="box-sizing: border-box; margin: 1.6em 0px 0px; word-wrap: break-word; font-size: 1em; line-height: 1.875;">Verify:</p><div data-lang="text" style="box-sizing: border-box; border-radius: 3px; margin: 1em 0px; line-height: 0; background-color: #f7f7f7;"><div style="box-sizing: border-box; background: #ffffff;"><pre style="box-sizing: border-box; overflow: auto; font-family: Consolas, 'Liberation 
Mono', Menlo, Courier, monospace; font-size: 13px; padding: 8px 15px; margin-top: 0px; margin-bottom: 0px; line-height: 18px; word-break: break-all; word-wrap: break-word; color: #333333; border: 0px; border-radius: 3px; white-space: pre-wrap; background-color: #f7f7f7;">$ docker --version
Docker version 1.5.0, build a8a31ef/1.5.0</pre></div></div><p style="box-sizing: border-box; margin: 1.6em 0px 0px; word-wrap: break-word; font-size: 1em; line-height: 1.875;">It works!</p><p style="box-sizing: border-box; margin: 1.6em 0px 0px; word-wrap: break-word; font-size: 1em; line-height: 1.875;">&#8251; Use at your own risk.</p><p style="box-sizing: border-box; margin: 1.6em 0px 0px; word-wrap: break-word; font-size: 1em; line-height: 1.875;"><a href="http://billpaxtonwasright.com/installing-docker-1-5-0-on-centos-7/" style="box-sizing: border-box; color: #337ab7; text-decoration: none; word-wrap: break-word; word-break: break-all; background-color: transparent;">http://billpaxtonwasright.com/installing-docker-1-5-0-on-centos-7/</a></p><div></div></section></pre><img src ="http://www.blogjava.net/ivanwan/aggbug/423873.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-26 23:32 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/26/423873.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>解决KVM中鼠标不同步问题</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/23/423760.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Mon, 23 Mar 2015 12:49:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/23/423760.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423760.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/23/423760.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423760.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423760.html</trackback:ping><description><![CDATA[<p style="word-wrap: break-word; margin-right: 0px; margin-left: 0px; padding: 0px; color: #666666; font-family: 宋体, Arial; font-size: 16px; line-height: 26px; background-color: #ffffff;"><span style="word-wrap: break-word; line-height: 20px; font-family: 微软雅黑, Verdana, Geneva, sans-serif; color: #333333; font-size: 14px;">Add the following to the virtual machine's configuration file:</span></p><p style="word-wrap: break-word; margin: 0px 0px 1.62em; padding: 0px; line-height: 20px; font-family: 微软雅黑, Verdana, Geneva, sans-serif; color: #333333; background-color: #ffffff;">&lt;input type='tablet' bus='usb'/&gt;<br style="word-wrap: break-word; padding: 0px; margin: 0px;" />(This line goes inside the &lt;devices&gt; section.)<br /><br /><br /></p><h2 style="border: 0px; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 15px; margin: 0px 0px 0.8125em; outline: 0px; padding: 0px; vertical-align: baseline; clear: both; line-height: 24.375px; background-color: #ffffff;">Linux:</h2><p style="word-wrap: break-word; margin: 0px 0px 1.62em; padding: 0px; line-height: 20px; font-family: 微软雅黑, Verdana, Geneva, sans-serif; color: #333333; background-color: #ffffff;"><br /></p><p style="border: 0px; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 15px; margin: 0px 0px 1.625em; outline: 0px; padding: 0px; vertical-align: baseline; color: #373737; line-height: 24.375px; background-color: #ffffff;">In a terminal, run:</p><pre style="border: 0px; font-family: 'Courier 10 Pitch', Courier, monospace; 
font-size: 13px; margin-top: 0px; margin-bottom: 1.625em; outline: 0px; padding: 0.75em 1.625em; vertical-align: baseline; font-stretch: normal; line-height: 1.5; overflow: auto; color: #373737; background: #f4f4f4;">xset -m 0</pre><p style="border: 0px; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 15px; margin: 0px 0px 1.625em; outline: 0px; padding: 0px; vertical-align: baseline; color: #373737; line-height: 24.375px; background-color: #ffffff;">&nbsp;</p><h2>Windows:</h2><p style="border: 0px; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 15px; margin: 0px 0px 1.625em; outline: 0px; padding: 0px; vertical-align: baseline; color: #373737; line-height: 24.375px; background-color: #ffffff;">Open Control Panel -&gt; Mouse -&gt; Pointer Options and uncheck &#8220;Enhance pointer precision&#8221;.</p><img src ="http://www.blogjava.net/ivanwan/aggbug/423760.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-23 20:49 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/23/423760.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>openstack virt vnc port</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/22/423729.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sun, 22 Mar 2015 15:16:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/22/423729.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423729.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/22/423729.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423729.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423729.html</trackback:ping><description><![CDATA[<div>http://docs.openstack.org/image-guide/content/virt-install.html</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423729.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-22 23:16 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/22/423729.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>ceilometer alarm例子</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/17/423541.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Tue, 17 Mar 2015 10:13:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/17/423541.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423541.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/17/423541.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423541.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423541.html</trackback:ping><description><![CDATA[<div>http://blog.csdn.net/hackerain/article/details/38172941</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423541.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-17 18:13 <a 
href="http://www.blogjava.net/ivanwan/archive/2015/03/17/423541.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>curl openstack</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/13/423445.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 13 Mar 2015 11:32:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/13/423445.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423445.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/13/423445.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423445.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423445.html</trackback:ping><description><![CDATA[<div>http://blog.csdn.net/anhuidelinger/article/details/9818693</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423445.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-13 19:32 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/13/423445.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>ubuntu docker1.5 install</title><link>http://www.blogjava.net/ivanwan/archive/2015/03/02/423137.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Mon, 02 Mar 2015 08:21:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/03/02/423137.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/423137.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/03/02/423137.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/423137.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/423137.html</trackback:ping><description><![CDATA[<div>https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit</div><img src ="http://www.blogjava.net/ivanwan/aggbug/423137.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-03-02 16:21 <a href="http://www.blogjava.net/ivanwan/archive/2015/03/02/423137.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>docker api demo</title><link>http://www.blogjava.net/ivanwan/archive/2015/02/14/422927.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 14 Feb 2015 06:29:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2015/02/14/422927.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/422927.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2015/02/14/422927.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/422927.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/422927.html</trackback:ping><description><![CDATA[<div>http://my.oschina.net/guol/blog/271416</div><img src ="http://www.blogjava.net/ivanwan/aggbug/422927.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2015-02-14 14:29 <a 
href="http://www.blogjava.net/ivanwan/archive/2015/02/14/422927.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>ndb manage show</title><link>http://www.blogjava.net/ivanwan/archive/2014/12/26/421868.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 26 Dec 2014 10:41:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2014/12/26/421868.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/421868.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2014/12/26/421868.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/421868.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/421868.html</trackback:ping><description><![CDATA[<p>root@proxzone-project-4:/usr/local/mysql/bin# ndb_mgm -e show</p> <p>Connected to Management Server at: localhost:1186</p> <p>Cluster Configuration</p> <p>---------------------</p> <p>[ndbd(NDB)]	2 node(s)</p> <p>id=3	@172.21.21.108&nbsp; (mysql-5.6.21 ndb-7.3.7, Nodegroup: 0)</p> <p>id=4	@172.21.21.109&nbsp; (mysql-5.6.21 ndb-7.3.7, Nodegroup: 0, *)</p> <p><br /></p> <p>[ndb_mgmd(MGM)]	2 node(s)</p> <p>id=1	@172.21.21.107&nbsp; (mysql-5.6.21 ndb-7.3.7)</p> <p>id=2	@172.21.21.110&nbsp; (mysql-5.6.21 ndb-7.3.7)</p> <p><br /></p> <p>[mysqld(API)]	2 node(s)</p> <p>id=5	@172.21.21.108&nbsp; (mysql-5.6.21 ndb-7.3.7)</p> <p>id=6	@172.21.21.109&nbsp; (mysql-5.6.21 ndb-7.3.7)</p><img src ="http://www.blogjava.net/ivanwan/aggbug/421868.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2014-12-26 18:41 <a href="http://www.blogjava.net/ivanwan/archive/2014/12/26/421868.html#Feedback" target="_blank" 
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>docker!</title><link>http://www.blogjava.net/ivanwan/archive/2014/12/19/421553.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 18 Dec 2014 16:57:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2014/12/19/421553.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/421553.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2014/12/19/421553.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/421553.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/421553.html</trackback:ping><description><![CDATA[<div>http://www.blogjava.net/yongboy/archive/2013/12/12/407498.html<br /><h2>docker-registry:</h2><div>http://www.cnblogs.com/xguo/p/3829329.html<br /><br /><br />ubuntu 14.04<br /><div>http://www.tuicool.com/articles/b63uei<br /><br />centos 6.5<br /><div><div>http://blog.yourtion.com/ubuntu-install-docker.html</div></div></div></div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/421553.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2014-12-19 00:57 <a href="http://www.blogjava.net/ivanwan/archive/2014/12/19/421553.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>cloudstack xenserver agent</title><link>http://www.blogjava.net/ivanwan/archive/2014/12/17/421501.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Wed, 17 Dec 2014 06:54:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2014/12/17/421501.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/421501.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2014/12/17/421501.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/421501.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/421501.html</trackback:ping><description><![CDATA[<pre>/etc/sysctl.conf</pre><pre>net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 1</pre><pre>xe-switch-network-backend bridge</pre><pre>REBOOT</pre><img src ="http://www.blogjava.net/ivanwan/aggbug/421501.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" 
href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2014-12-17 14:54 <a href="http://www.blogjava.net/ivanwan/archive/2014/12/17/421501.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Hazelcast River Plugin for ElasticSearch</title><link>http://www.blogjava.net/ivanwan/archive/2013/10/08/404716.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Mon, 07 Oct 2013 16:57:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2013/10/08/404716.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/404716.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2013/10/08/404716.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/404716.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/404716.html</trackback:ping><description><![CDATA[<div>https://github.com/sksamuel/elasticsearch-river-hazelcast</div><img src ="http://www.blogjava.net/ivanwan/aggbug/404716.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2013-10-08 00:57 <a href="http://www.blogjava.net/ivanwan/archive/2013/10/08/404716.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>elasticsearch安装配置及中文分词</title><link>http://www.blogjava.net/ivanwan/archive/2013/10/04/404680.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 03 Oct 2013 18:09:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2013/10/04/404680.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/404680.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2013/10/04/404680.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/404680.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/404680.html</trackback:ping><description><![CDATA[<div>ElasticSearch is an open-source, distributed, RESTful search engine built on Lucene. Designed for the cloud, it offers real-time search, stability, reliability, speed, and easy installation, and supports indexing data as JSON over HTTP.
<p>When we build a site or application and need to add search, we discover that search is hard. We want a search solution that is fast, with zero configuration and a completely free search model; we want to index data simply as JSON over HTTP; we want a search server that is always available, that can start on one machine and scale to hundreds; we want real-time search and simple multi-tenancy; we want a cloud-ready solution. Elasticsearch aims to solve all of these problems and more.</p>
<h2>Installation</h2>
<p>Using Windows and ES 0.19.7 as an example:</p>
<p>1. Download elasticsearch-0.19.7.zip</p>
<p>2. Unzip it to a directory and set that directory as the ES_HOME environment variable</p>
<p>3. Install a JDK and set the JAVA_HOME environment variable</p>
<p>4. On Windows, run %ES_HOME%\bin\elasticsearch.bat</p>
<p><strong>Standalone and server setup for distributed search with elasticsearch</strong></p>
<div id="article_content">
<p>First download the latest release from <a href="http://www.elasticsearch.org/download/">http://www.elasticsearch.org/download/</a> (0.19.1 at the time of writing; the author is very active, so es updates are frequent and bugs are fixed quickly). The archive contains three directories: bin holds the startup scripts, config the settings files, and lib the dependencies. If you want to install plugins, create an extra plugins folder and put them there.</p>
<p>1. Standalone:</p>
<p>Running a single elasticsearch node is simple: on Linux just run bin/elasticsearch, on Windows bin/elasticsearch.bat. Running a cluster on a LAN is equally easy: as long as cluster.name is the same and the machines are on the same network segment, the started nodes discover each other automatically and form a cluster.</p>
<p>2. Server environment:</p>
<p>On a server you can use the elasticsearch-servicewrapper plugin, which takes a parameter to run es in the foreground or background and supports starting, stopping, and restarting the es service (the default es script can only be stopped with Ctrl+C). To use it, download the service folder from <a href="https://github.com/elasticsearch/elasticsearch-servicewrapper">https://github.com/elasticsearch/elasticsearch-servicewrapper</a> and put it in the es bin directory. The commands are:<br />bin/service/elasticsearch +<br />console: run es in the foreground<br />start: run es in the background<br />stop: stop es<br />install: register es as a service started at boot<br />remove: unregister the boot-time service</p>
<p>The service directory contains an elasticsearch.conf file that mainly sets Java runtime parameters; the more important ones are:</p>
<pre># es home path; the default is fine
set.default.ES_HOME=&lt;Path to ElasticSearch Home&gt;

# minimum memory allocated to es
set.default.ES_MIN_MEM=256

# maximum memory allocated to es
set.default.ES_MAX_MEM=1024

# startup timeout (seconds)
wrapper.startup.timeout=300

# shutdown timeout (seconds)
wrapper.shutdown.timeout=300

# ping timeout (seconds)
wrapper.ping.timeout=300</pre>
</div>
<h2>Installing plugins</h2>
<p>Using the head plugin as an example:</p>
<p>With network access, simply run %ES_HOME%\bin\plugin -install mobz/elasticsearch-head</p>
<p>Without network access, download the elasticsearch-head zipball master package and unzip its contents into %ES_HOME%\plugin\head\_site (it is a site-type plugin).</p>
<p>After installation, restart the service and open http://localhost:9200/_plugin/head/ in a browser.</p>
<h2>ES concepts</h2>
<p>cluster</p>
<p>A cluster contains multiple nodes, one of which is the master, chosen by election; master and slave are concepts internal to the cluster. One idea behind es is decentralization: seen from outside there is no central node, because the cluster is logically a whole, and talking to any single node is equivalent to talking to the entire es cluster.</p>
<p>shards</p>
<p>Index shards. es can split a complete index into multiple shards and distribute them across different nodes, which is what makes the search distributed. The number of shards can only be specified before the index is created and cannot be changed afterwards.</p>
<p>replicas</p>
<p>Index replicas. es can keep several replicas of an index. They improve fault tolerance, since a damaged or lost shard can be recovered from a replica, and they improve query throughput, since es automatically load-balances search requests across them.</p>
<p>recovery</p>
<p>Data recovery, or data redistribution. es rebalances index shards across machines when nodes join or leave, and recovers data when a failed node restarts.</p>
<p>river</p>
<p>A data source for es, and a way to synchronize data from another store (such as a database) into es. It runs as a plugin service that reads data from the river and indexes it into es; official rivers exist for CouchDB, RabbitMQ, Twitter, and Wikipedia.</p>
<p>gateway</p>
<p>How an es index is persisted. es keeps the index in memory first and persists it to disk when memory fills up; when the es cluster is shut down and restarted, the index data is read back from the gateway. es supports several gateway types: local filesystem (the default), distributed filesystems, Hadoop HDFS, and Amazon S3.</p>
<p>discovery.zen</p>
<p>The automatic node-discovery mechanism. es is a p2p-based system that first finds existing nodes via broadcast, then uses multicast for communication between nodes, and also supports point-to-point interaction.</p>
<p>Transport</p>
<p>How es nodes and clusters talk to clients and to each other. Internally tcp is the default; http (JSON format), thrift, servlet, memcached, zeroMQ and other transport protocols are supported through plugins.</p>
<p><strong>Chinese analysis integration for distributed search with elasticsearch</strong></p>
<div id="article_content">
<p>Officially, elasticsearch only provides the smartcn Chinese analysis plugin, which does not work very well. Fortunately medcl (one of the earliest es researchers in China) wrote two Chinese analysis plugins, one for ik and one for mmseg. Their usage is described below and is nearly identical: first install the plugin from the command line.</p>
<p>Install the ik plugin:</p>
<pre>plugin -install medcl/elasticsearch-analysis-ik/1.1.0</pre>
<p>Download the ik dictionary files into the config directory:</p>
<pre>cd config
wget http://github.com/downloads/medcl/elasticsearch-analysis-ik/ik.zip --no-check-certificate
unzip ik.zip
rm ik.zip</pre>
<p>Install the mmseg plugin:</p>
<pre>bin/plugin -install medcl/elasticsearch-analysis-mmseg/1.1.0</pre>
<p>Download its dictionary files into the config directory:</p>
<pre>cd config
wget http://github.com/downloads/medcl/elasticsearch-analysis-mmseg/mmseg.zip --no-check-certificate
unzip mmseg.zip
rm mmseg.zip</pre>
<p>Analyzer configuration</p>
<p>For ik, add to elasticsearch.yml:</p>
<pre>index:
  analysis:
    analyzer:
      ik:
          alias: [ik_analyzer]
          type: org.elasticsearch.index.analysis.IkAnalyzerProvider</pre>
<p>or</p>
<pre>index.analysis.analyzer.ik.type : "ik"</pre>
<p>The two forms mean the same thing.<br />For mmseg, also in elasticsearch.yml:</p>
<pre>index:
  analysis:
    analyzer:
      mmseg:
          alias: [news_analyzer, mmseg_analyzer]
          type: org.elasticsearch.index.analysis.MMsegAnalyzerProvider</pre>
<p>or</p>
<pre>index.analysis.analyzer.default.type : "mmseg"</pre>
<p>mmseg also accepts more fine-grained settings:</pre>
<pre>index:
  analysis:
    tokenizer:
      mmseg_maxword:
          type: mmseg
          seg_type: "max_word"
      mmseg_complex:
          type: mmseg
          seg_type: "complex"
      mmseg_simple:
          type: mmseg
          seg_type: "simple"</pre>
<p>With this in place the plugin installation is complete, and the plugins are loaded when es starts.</p>
<p>Defining a mapping</p>
<p>When adding an index mapping you can choose the analyzers like this:</p>
<pre>{
   "page":{
      "properties":{
         "title":{
            "type":"string",
            "indexAnalyzer":"ik",
            "searchAnalyzer":"ik"
         },
         "content":{
            "type":"string",
            "indexAnalyzer":"ik",
            "searchAnalyzer":"ik"
         }
      }
   }
}</pre>
<p>indexAnalyzer is the analyzer used at index time, searchAnalyzer the one used at search time.</p>
<p>The equivalent Java mapping code:</p>
<pre>XContentBuilder content = XContentFactory.jsonBuilder().startObject()
        .startObject("page")
          .startObject("properties")
            .startObject("title")
              .field("type", "string")
              .field("indexAnalyzer", "ik")
              .field("searchAnalyzer", "ik")
            .endObject()
            .startObject("code")
              .field("type", "string")
              .field("indexAnalyzer", "ik")
              .field("searchAnalyzer", "ik")
            .endObject()
          .endObject()
        .endObject()
      .endObject()</pre>
<p>Once the mapping is defined, operations on the index use the specified analyzers.</p>
<p>Appendix:</p>
<p>ik analysis plugin: <a href="https://github.com/medcl/elasticsearch-analysis-ik">https://github.com/medcl/elasticsearch-analysis-ik</a></p>
<p>mmseg analysis plugin: <a href="https://github.com/medcl/elasticsearch-analysis-mmseg">https://github.com/medcl/elasticsearch-analysis-mmseg</a></p>
<p>If the configuration feels tedious, you can also download a preconfigured es build: <a href="https://github.com/medcl/elasticsearch-rtf">https://github.com/medcl/elasticsearch-rtf</a></p>
</div>
<p>&nbsp;</p>
<div>
<h3><strong>Basic elasticsearch usage</strong></h3>
</div>
<div id="blog_content"><br />Key points:<br />1. A database's database is an index<br />2. A database's table is a type<br />3. Don't use a browser; use curl for client operations, otherwise you may hit java heap ooxx...<br /><br />curl: -X takes the RESTful verb: GET, POST ...<br />-d takes the data (d = data to send).<br /><br />1. create:<br /><br />Create a record with an explicit ID (PUT and POST both seem to work):<br />$ curl -XPOST localhost:9200/films/md/2 -d '<br />{ "name":"hei yi ren", "tag": "good"}'<br /><br />Create a record with an auto-generated ID:<br />$ curl -XPOST localhost:9200/films/md -d '<br />{ "name":"ma da jia si jia3", "tag": "good"}'<br /><br />2. queries:<br />2.1 Query all indexes and types:<br />$ curl localhost:9200/_search?pretty=true<br /><br />2.2 Query all types under one index:<br />$ curl localhost:9200/films/_search<br /><br />2.3 Query all records of one type under one index:<br />$ curl localhost:9200/films/md/_search?pretty=true<br /><br />2.4 Query with a parameter:<br />$ curl localhost:9200/films/md/_search?q=tag:good<br />{"took":7,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":2,"max_score":1.0,"hits":[{"_index":"film","_type":"md","_id":"2","_score":1.0, "_source" :<br />{ "name":"hei yi ren", "tag": "good"}},{"_index":"film","_type":"md","_id":"1","_score":0.30685282, "_source" :<br />{ "name":"ma da jia si jia", "tag": "good"}}]}}<br /><br />2.5 Query with a JSON body (note the query and term keywords):<br />$ curl localhost:9200/film/_search -d '<br />{"query" : { "term": { "tag":"bad"}}}'<br /><br />3. update<br />$ curl -XPUT localhost:9200/films/md/1 -d { ...(data)... }<br /><br />4. delete. Delete everything:<br />$ curl -XDELETE localhost:9200/films</div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/404680.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2013-10-04 02:09 <a href="http://www.blogjava.net/ivanwan/archive/2013/10/04/404680.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Cloudera Impala TarBall build, installation and configuration</title><link>http://www.blogjava.net/ivanwan/archive/2013/06/29/401074.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 29 Jun 2013 09:12:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2013/06/29/401074.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/401074.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2013/06/29/401074.html#Feedback</comments><slash:comments>1</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/401074.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/401074.html</trackback:ping><description><![CDATA[<div><p>Impala is a high-performance real-time query engine developed by Cloudera, tens to nearly a hundred times faster than Hive. The basic idea is to push computation down to the node where each Datanode lives and rely on in-memory caching of data for fast processing; Berkeley's Shark is a similar system. From actual testing Impala's performance is indeed good. Because Impala is largely implemented in C++, building and installing it yourself instead of using the CDH image takes quite some effort; this post records the installation and configuration process and the problems encountered. My tests used CentOS 6.2.<br /> Some basic installation steps exist elsewhere, but I ran into problems, so the process is described in more detail here.</p> <p><strong>1. Install the required dependency libs; nothing unusual in this step</strong></p> <div>sudo yum install boost-test boost-program-options  libevent-devel automake libtool flex bison gcc-c++ openssl-devel make  cmake doxygen.x86_64 glib-devel boost-devel python-devel bzip2-devel svn  libevent-devel cyrus-sasl-devel wget git unzip</div> <p><strong>2. Install LLVM</strong>, following the steps as given. Note that if you want to build and install Impala on several machines, run only the blue part below on one machine, then distribute the llvm tree to the other machines and run the red part there; there is no need to fetch the source over svn on every machine, which wastes a lot of time.</p> <div> <div style="color:blue"> wget 
http://llvm.org/releases/3.2/llvm-3.2.src.tar.gz<br /> tar xvzf llvm-3.2.src.tar.gz<br /> cd llvm-3.2.src/tools<br /> svn co http://llvm.org/svn/llvm-project/cfe/tags/RELEASE_32/final/ clang<br /> cd ../projects<br /> svn co http://llvm.org/svn/llvm-project/compiler-rt/tags/RELEASE_32/final/ compiler-rt </div> <div style="color:red"> cd ..<br /> ./configure --with-pic<br /> make -j4 REQUIRES_RTTI=1<br /> sudo make install </div> </div> <p><strong>3. Install Maven.</strong> Nothing special here: follow the steps and set the environment variables. Maven is needed to build the Impala source later.</p> <div>wget http://www.fightrice.com/mirrors/apache/maven/maven-3/3.0.4/binaries/apache-maven-3.0.4-bin.tar.gz<br /> tar xvf apache-maven-3.0.4-bin.tar.gz &amp;&amp; sudo mv apache-maven-3.0.4 /usr/local</div> <p>Edit ~/.bashrc and add the Maven environment variables:</p> <div>export M2_HOME=/usr/local/apache-maven-3.0.4<br /> export M2=$M2_HOME/bin<br /> export PATH=$M2:$PATH</div> <p>Reload the environment and check that the mvn version is correct:</p> <div>source ~/.bashrc<br /> mvn -version</div> <p><strong>4. Fetch the Impala source</strong></p> <div>git clone https://github.com/cloudera/impala.git</div> <p><strong>5. Set the Impala environment variables needed for the build</strong></p> <div>cd impala<br /> ./bin/impala-config.sh</div> <p><strong>6. Download the third-party packages Impala depends on</strong></p> <div>cd thirdparty<br /> ./download_thirdparty.sh</div> <p>Note that one of the packages, cyrus-sasl-2.1.23, may fail to download; you can find and download it yourself (CSDN has it) and unpack it into the thirdparty folder. It is best to do this after download_thirdparty.sh has finished, because the script deletes every downloaded tar.gz under those directories.</p> <p><strong>7. In theory you can now build Impala</strong>, but in practice the build may fail. The problem I hit was Boost-related (I no longer remember the exact error); it turned out the installed Boost was too old. The default yum repositories on CentOS 6.2 ship boost and boost-devel 1.41, but the Impala build needs 1.44 or later, so you have to rebuild Boost yourself; I used Boost 1.46.</p> <div>#remove the installed boost and boost-devel<br /> yum remove boost<br /> yum remove boost-devel<br /> #download boost<br /> #boost can be downloaded from (http://www.boost.org/users/history/)<br /> #unpack it after downloading<br /> tar xvzf boost_1_46_0.tar.gz<br /> mv boost_1_46_0 /usr/local/<br /> cd /usr/local/boost_1_46_0<br /> ./bootstrap.sh<br /> ./bjam<br /> #if the following is printed, the build succeeded<br /> # The Boost C++ Libraries were 
successfully built!<br /> # The following directory should be added to compiler include paths:<br /> # /usr/local/boost_1_46_0<br /> # The following directory should be added to linker library paths:<br /> # /usr/local/boost_1_46_0/stage/lib<br /> #now the Boost and Impala environment variables still need to be set <p>export BOOST_ROOT='/usr/local/boost_1_46_0'<br /> export IMPALA_HOME='/home/extend/impala'</p> <p>#note: even though boost is installed here, my build still failed, complaining that<br /> #libboost_filesystem-mt.so was missing; that library is provided by boost-devel, so my fix was to reinstall boost-devel<br /> #I have not tried whether skipping the earlier removal of boost-devel causes problems; what I can confirm is that the flow as written here works</p> <p>yum install boost-devel </p></div> <p><strong>8. Impala can finally be built</strong></p> <div>cd $IMPALA_HOME<br /> ./build_public.sh -build_thirdparty<br /> #the build compiles the C++ part first and then builds the Java part with mvn; the whole process is slow, about 1-2 hours in my VM<br /> #the build output ends up in be/build/debug</div> <p><strong>9. Python packages needed by impala_shell</strong></p> <div>#the first run of impalad_shell may fail; install two python packages, thrift and prettytable, with easy_install<br /> easy_install prettytable<br /> easy_install thrift</div> <p><strong>10.</strong> If you think everything is fine at this point, you are being naive: configuring, starting, and using Impala still brings plenty of odd problems.</p> <p><strong>Problem 1: the Hive and Hadoop versions</strong><br /> CDH is quite strict about version dependencies. To keep Impala running properly, it is strongly recommended to use the Hadoop (native lib already built) and Hive versions bundled in Impala's thirdparty directory.<br /> Hadoop's configuration files live in $HADOOP_HOME/etc/hadoop; note that the native lib needs to be enabled</p> <div> <div xml=""  geshi"="" style="overflow:auto;white-space:nowrap;"><div codecolorer"="" style="white-space:nowrap">#modify hadoop's core-site.xml; apart from this option the configuration is identical to the core-site.xml in Problem 2<br /> <span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;property<span style="color: #000000; font-weight: bold;">&gt;</span></span></span><br /> &nbsp; <span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;name<span style="color: #000000; font-weight: bold;">&gt;</span></span></span>hadoop.native.lib<span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;/name<span style="color: #000000; font-weight: 
bold;">&gt;</span></span></span><br /> &nbsp; <span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;value<span style="color: #000000; font-weight: bold;">&gt;</span></span></span>true<span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;/value<span style="color: #000000; font-weight: bold;">&gt;</span></span></span><br /> &nbsp; <span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;description<span style="color: #000000; font-weight: bold;">&gt;</span></span></span>Should native hadoop libraries, if present, be used.<span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;/description<span style="color: #000000; font-weight: bold;">&gt;</span></span></span><br /> <span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;/property<span style="color: #000000; font-weight: bold;">&gt;</span></span></span></div></div> </div> <p><strong>问题2：Impala的配置文件位置</strong><br /> Impala默认使用的配置文件路径是在bin/set-classpath.sh中配置的，建议把CLASSPATH部分改成</p> <div>CLASSPATH=\<br /> $IMPALA_HOME/conf:\<br /> $IMPALA_HOME/fe/target/classes:\<br /> $IMPALA_HOME/fe/target/dependency:\<br /> $IMPALA_HOME/fe/target/test-classes:\<br /> ${HIVE_HOME}/lib/datanucleus-core-2.0.3.jar:\<br /> ${HIVE_HOME}/lib/datanucleus-enhancer-2.0.3.jar:\<br /> ${HIVE_HOME}/lib/datanucleus-rdbms-2.0.3.jar:\<br /> ${HIVE_HOME}/lib/datanucleus-connectionpool-2.0.3.jar:</div> <p>即要求Impala使用其目录下的Conf文件夹作为配置文件，然后创建一下Conf目录，把3样东西拷贝进来：core-site.xml、hdfs-site.xml、hive-site.xml。<br /> core-site.xml的配置，下面几个选项是必须要配置的，</p> <div> <div xml=""  geshi"="" style="overflow:auto;white-space:nowrap;"><div codecolorer"="" style="white-space:nowrap"><span style="color: #009900;"><span style="color: #000000; font-weight: bold;">&lt;?xml</span> <span style="color: #000066;">version</span>=<span style="color: #ff0000;">"1.0"</span><span style="color: #000000; font-weight: bold;">?&gt;</span></span><br /> <span 
style="color: #009900;"></span>&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;<br /> &lt;configuration&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;fs.defaultFS&lt;/name&gt;<br /> &lt;value&gt;hdfs://10.200.4.11:9000&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.client.read.shortcircuit&lt;/name&gt;<br /> &lt;value&gt;true&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.client.use.legacy.blockreader.local&lt;/name&gt;<br /> &lt;value&gt;false&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.client.read.shortcircuit.skip.checksum&lt;/name&gt;<br /> &lt;value&gt;false&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;/configuration&gt;</div></div> </div> <p>hdfs-site.xml configuration:</p> <div> <div class="xml geshi" style="overflow:auto;white-space:nowrap;"><div class="codecolorer" style="white-space:nowrap">&lt;?xml version="1.0" encoding="UTF-8"?&gt;<br /> &lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;<br /> &lt;configuration&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.block.local-path-access.user&lt;/name&gt;<br /> &lt;value&gt;${your user}&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.datanode.hdfs-blocks-metadata.enabled&lt;/name&gt;<br /> &lt;value&gt;true&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt;<br /> &lt;value&gt;${yourdatadir}&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.client.use.legacy.blockreader.local&lt;/name&gt;<br /> &lt;value&gt;false&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.datanode.data.dir.perm&lt;/name&gt;<br /> &lt;value&gt;750&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.client.file-block-storage-locations.timeout&lt;/name&gt;<br /> &lt;value&gt;5000&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &lt;name&gt;dfs.domain.socket.path&lt;/name&gt;<br /> &lt;value&gt;/home/extend/cdhhadoop/dn.8075&lt;/value&gt;<br /> &lt;/property&gt;<br /> &lt;/configuration&gt;</div></div> </div> <p>Last comes hive-site.xml, which is straightforward: just point it at a DBMS for metadata storage (impala must share metadata with hive, because impala cannot create table itself). Using mysql as hive's metastore is documented in many places; the configuration is as follows:</p> <div> <div class="xml geshi" style="overflow:auto;white-space:nowrap;"><div class="codecolorer" style="white-space:nowrap">&lt;?xml version="1.0"?&gt;<br /> &lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;<br /> &lt;configuration&gt;<br /> &lt;property&gt;<br /> &nbsp; &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;<br /> &nbsp; &lt;value&gt;jdbc:mysql://10.28.0.190:3306/impala?createDatabaseIfNotExist=true&lt;/value&gt;<br /> &nbsp; &lt;description&gt;JDBC connect string for a JDBC metastore&lt;/description&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &nbsp; &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;<br /> &nbsp; &lt;value&gt;com.mysql.jdbc.Driver&lt;/value&gt;<br /> &nbsp; &lt;description&gt;Driver class name for a JDBC metastore&lt;/description&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &nbsp; &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt;<br /> &nbsp; &lt;value&gt;root&lt;/value&gt;<br /> &nbsp; &lt;description&gt;username to use against metastore database&lt;/description&gt;<br /> &lt;/property&gt;<br /> &lt;property&gt;<br /> &nbsp; &lt;name&gt;javax.jdo.option.ConnectionPassword&lt;/name&gt;<br /> &nbsp; &lt;value&gt;root&lt;/value&gt;<br /> &nbsp; &lt;description&gt;password to use against metastore database&lt;/description&gt;<br /> &lt;/property&gt;<br /> &lt;/configuration&gt;</div></div> </div> <p>Remember to copy the mysql-connector jar into hive's lib directory, and also to impala (copy it to $IMPALA_HOME/fe/target/dependency).</p> <p><strong>11. Start Impala.</strong> At this point Impala can start normally. The official documentation does not explain very clearly how the Impala services coordinate with each other; following the official steps, you can start an Impala service on one machine like this:</p> <div> #Start a single-node impala service<br /> ${IMPALA_HOME}/bin/start-impalad.sh -use_statestore=false<br /> #Start the impala shell<br /> ${IMPALA_HOME}/bin/impala-shell.sh </div> <p>impala-shell can then connect to localhost and run queries. Note that this is single-machine querying only, useful for verifying that your Impala works at all; to start an Impala cluster, skip ahead to step 12. One strange problem I ran into here: show tables and count(1) worked fine, but select * from table made impala crash while reading the data (sometimes with the error could not find method close from class org/apache/hadoop/fs/FSDataInputStream with signature ()V ). Two changes fixed it:</p> <p>a. Modify impala's set-classpath.sh and remove all hadoop-*.jar files from the $IMPALA_HOME/fe/target/dependency directory except hadoop-auth-2.0.0-*.jar.</p> <div> #Pull the hadoop-related jars out of impala's dependencies, keeping only the auth jar<br /> mv $IMPALA_HOME/fe/target/dependency/hadoo* $IMPALA_HOME<br /> mv $IMPALA_HOME/hadoop-auth*.jar $IMPALA_HOME/fe/target/dependency<br /> #Edit bin/set-classpath.sh to add the libs from $HADOOP_HOME: add the following just before the<br /> #final export CLASSPATH line of set-classpath.sh<br /> for jar in `ls 
$HADOOP_HOME/share/hadoop/common/*.jar`; do<br /> CLASSPATH=${CLASSPATH}:$jar<br /> done<br /> for jar in `ls $HADOOP_HOME/share/hadoop/yarn/*.jar`; do<br /> CLASSPATH=${CLASSPATH}:$jar<br /> done<br /> for jar in `ls $HADOOP_HOME/share/hadoop/hdfs/*.jar`; do<br /> CLASSPATH=${CLASSPATH}:$jar<br /> done<br /> for jar in `ls $HADOOP_HOME/share/hadoop/mapreduce/*.jar`; do<br /> CLASSPATH=${CLASSPATH}:$jar<br /> done<br /> for jar in `ls $HADOOP_HOME/share/hadoop/tools/lib/*.jar`; do<br /> CLASSPATH=${CLASSPATH}:$jar<br /> done </div> <p>b. Note that Impala only supports hive's default column delimiter; if a table was created in hive with a custom delimiter, the Impala service crashes inexplicably while reading its data.</p> <p><strong>12. Start an Impala cluster</strong><br /> Impala actually consists of two parts: the StateStore, which coordinates computation across machines and acts as the Master, and Impalad, the Slaves. Start them as follows:</p> <div> #Start the statestore<br /> #Method 1: use this python script under impala/bin directly<br /> #It starts one StateStore plus -s Impala Services on the local machine<br /> $IMPALA_HOME/bin/start-impala-cluster.py -s 1 --log_dir /home/extend/impala/impalaLogs<br /> #Method 2: start the StateStore by hand<br /> $IMPALA_HOME/be/build/debug/statestore/statestored -state_store_port=24000 <p>#Start the impala services<br /> #Run this command on every node where impala was compiled and installed<br /> #-state_store_host names the machine running the StateStore<br /> #-nn is the hadoop namenode<br /> #-nn_port is the namenode's HDFS port<br /> $IMPALA_HOME/bin/start-impalad.sh -state_store_host=m11 -nn=m11 -nn_port=9000 </p></div> <p>Once started, visit http://${stateStore_Server}:25010/ to see the StateStore's status; its subscribers page lists the impala service nodes that are already connected.</p> <p><strong>13. Use the Impala client</strong><br /> This step is the easiest: pick any machine and run</p>  $IMPALA_HOME/bin/impala-shell.sh<br /> #Once started, you can connect to any impala service<br /> connect m12<br /> #After connecting you can run show tables and similar commands<br /> #Note that when hive creates a table or updates a table's schema, the impala nodes do not know about it<br /> #You must connect to each impala service with the client and run refresh to reload the metadata<br /> #or restart all the impala services</div><img src ="http://www.blogjava.net/ivanwan/aggbug/401074.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" 
target="_blank">ivaneeo</a> 2013-06-29 17:12 <a href="http://www.blogjava.net/ivanwan/archive/2013/06/29/401074.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Virtual Desktop</title><link>http://www.blogjava.net/ivanwan/archive/2012/10/20/389916.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 20 Oct 2012 05:18:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2012/10/20/389916.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/389916.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2012/10/20/389916.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/389916.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/389916.html</trackback:ping><description><![CDATA[<div><div><div style="overflow: hidden"><div style="font-family:Consolas,'Lucida Console',monospace;padding-left:7px;word-wrap:break-word;color:#000000">8 Virtual Desktop program: Ulteo, NX Enteprise Server,  FoSS CLOUD, Orcale Virtualbox, Thinstuff, JetClouding, Go Grid,2xCloud  Computing</div></div></div>           </div><img src ="http://www.blogjava.net/ivanwan/aggbug/389916.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2012-10-20 13:18 <a href="http://www.blogjava.net/ivanwan/archive/2012/10/20/389916.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>kvm创建</title><link>http://www.blogjava.net/ivanwan/archive/2012/06/08/380368.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 08 Jun 2012 09:55:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2012/06/08/380368.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/380368.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2012/06/08/380368.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/380368.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/380368.html</trackback:ping><description><![CDATA[<div><div>sudo qemu-img create -f qcow2 -o size=30240M,preallocation=metadata win2003_hda.img</div><div>http://blog.kreyolys.com/2011/09/27/kvm-virtual-machines-disk-format-file-basedqcow2-or-block-devicelvm2/ (disk format comparison)</div><div>sudo virt-install \<br />--name win2003_test \<br />--ram=1024 \<br />--vcpus=2 \<br />--disk /kvm/win2003_hda.img,bus=virtio \<br />--network bridge:br0,model=virtio \<br />--vnc \<br />--accelerate \<br />-c /share/os/win2003-i386.iso \<br />--disk /home/kvm/virtio-win-1.1.16.vfd,device=floppy \<br />-c /home/kvm/virtio-win-0.1-22.iso \<br />--os-type=windows \<br />--os-variant=win2k3 \<br />--noapic \<br />--connect \<br />qemu:///system \<br />--hvm</div><br /><div>http://www.howtoforge.com/installing-kvm-guests-with-virt-install-on-ubuntu-12.04-lts-server<br /><br /><div><p><a href="http://www.linuxwind.org/download/virtio-win-1.1.16.vfd"><span style="color:#000000;"><span style="font-size:14px;"><span style="font-family:lucida sans unicode,lucida grande,sans-serif;"><span style="line-height: 24px; ">http://www.linuxwind.org/download/</span>virtio-win-1.1.16.vfd</span></span></span></a></p> <p><a href="http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-22.iso"><span style="color:#000000;"><span style="font-size:14px;"><span style="font-family:lucida sans unicode,lucida grande,sans-serif;">http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-22.iso</span></span></span></a></p></div><br />Paravirtualization reference:<br /><div><ol><li><div>#!/bin/sh</div></li><li><div>WINISO=/path/to/win7.iso &nbsp; &nbsp;#Windows ISO</div></li><li><div>INSTALLDISK=win7virtio.img &nbsp;#Disk location. Can be LVM LV</div></li><li><div>VFD=http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-1.1.16.vfd</div></li><li><div>DRVRISO=http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-22.iso</div></li><li><div>&nbsp;</div></li><li><div>[ -e $(basename $VFD) ] &nbsp; &nbsp; || wget $VFD</div></li><li><div>[ -e $(basename $DRVRISO) ] || wget $DRVRISO</div></li><li><div>[ -e $INSTALLDISK ] &nbsp; &nbsp; &nbsp; &nbsp; || qemu-img create $INSTALLDISK 30G</div></li><li><div>&nbsp;</div></li><li><div>sudo virt-install -c qemu:///system --virt-type kvm --name win7virtio --ram 1024 --disk path="$INSTALLDISK",bus=virtio \</div></li><li><div>--disk $(basename $VFD),device=floppy --os-variant win7 --cdrom $(basename $DRVRISO) --cdrom "$WINISO" --vcpus 2</div></li><li><div># end of script </div></li></ol><p>Other references:</p><p>&nbsp;</p><div><p>In my previous article <a title="KVM Guests: Using Virt-Install to Import an Existing Disk Image" href="http://blog.allanglesit.com/2011/03/kvm-guests-using-virt-install-to-import-an-existing-disk-image/">KVM Guests: Using Virt-Install to Import an Existing Disk Image</a> we discussed how to use virt-install to import an existing disk image which already has an OS installed in it. Additionally, in <a title="KVM Guests: Using Virt-Install to Install Debian and Ubuntu Guests" href="http://blog.allanglesit.com/2011/03/kvm-guests-using-virt-install-to-install-debian-and-ubuntu-guests/">KVM Guests: Using Virt-Install to Install Debian and Ubuntu Guests</a> I documented how to initiate an install directly off the apt mirror of your choice for Debian and Ubuntu guests using virt-install. In this article we will use virt-install to create a guest and begin the installation using a CD or ISO image 
for installation media.</p> <p><strong>Assumptions I Have Made</strong></p> <ul><li>My KVM host is Ubuntu 10.10 and I am assuming that yours is as well. If it is not, the syntax might be slightly different or might not include the same features.</li><li>That you have kvm installed on the host and you can manually create VMs using virt-manager and they work perfectly.</li><li>That you have a bridge configured and working on other guests.</li><li>That you have virt-install and libvirt-bin installed, as well as virt-manager or virt-viewer, so that you can complete the install after the virt-install command has completed.</li><li>That you are trying to import disk images that support VirtIO devices (most recent Linux distributions; Windows does not natively support the VirtIO interface, so you will have had to install the VirtIO drivers into your disk image manually).</li></ul> <p><strong>The Basic Command</strong></p> <pre># virt-install -n vmname -r 2048 --os-type=linux --os-variant=ubuntu --disk /kvm/images/disk/vmname_boot.img,device=disk,bus=virtio,size=40,sparse=true,format=raw -w bridge=br0,model=virtio --vnc --noautoconsole -c /kvm/images/iso/ubuntu.iso</pre> <p><strong>Parameters Detailed</strong></p> <ul><li>-n <em>vmname</em> [the name of your VM]</li><li>-r <em>2048</em> [the amount of RAM in MB for your VM]</li><li>--os-type=<em>linux</em> [the type of OS, linux or windows]</li><li>--os-variant=<em>ubuntu</em> [the distribution or version of Windows; for a full list see man virt-install]</li><li>--disk <em>/kvm/images/disk/vmname_boot.img</em>,device=<em>disk</em>,bus=<em>virtio</em>,size=<em>40</em>,sparse=<em>true</em>,format=<em>raw</em> [this is a long one: you define the path, then comma-delimited options; device is the type of storage (cdrom, disk, or floppy); bus is the interface (ide, scsi, usb, or virtio) - virtio is the fastest, but you need to install the drivers for Windows, and older versions of Linux don't have support]</li><li>-w bridge=<em>br0</em>,model=<em>virtio</em> [the network configuration; in this case we are connecting to a bridge named br0 and using the virtio drivers, which perform much better. If you are using an OS which doesn't support virtio you can use e1000 or rtl8139. You could alternatively use --nonetworks if you do not need networking]</li><li>--vnc [configures the graphics card to use VNC, allowing you to use virt-viewer or virt-manager to see the desktop as if you were at the monitor of a physical machine]</li><li>--noautoconsole [configures the installer to NOT automatically try to open virt-viewer to view the console to complete the installation - this is helpful if you are working on a remote system through SSH]</li><li>-c <em>/kvm/images/iso/ubuntu.iso</em> [this option specifies the cdrom device or iso image with which to boot. You could additionally specify the cdrom device as a disk device and not use the -c option; it will then boot off the cdrom if you don't specify another installation method]</li></ul> <p><strong>LVM Disk Variation</strong></p> <pre># virt-install -n vmname -r 2048 --os-type=linux --os-variant=ubuntulucid --disk /dev/vg_name/lv_name,device=disk,bus=virtio -w bridge=br0,model=virtio --vnc --noautoconsole -c /kvm/images/iso/ubuntu.iso</pre> <p><strong>No VirtIO Variation (Uses IDE and e1000 NIC Emulation)</strong></p> <pre># virt-install -n vmname -r 2048 --os-type=linux --os-variant=ubuntulucid --disk /kvm/images/disk/vmname_boot.img,device=disk,bus=ide,size=40,sparse=true,format=raw -w bridge=br0,model=e1000 --vnc --noautoconsole -c /kvm/images/iso/ubuntu.iso</pre> <p><strong>Define VM Without Installation Method</strong></p> <pre># virt-install -n vmname -r 2048 --os-type=linux --os-variant=ubuntulucid --disk /kvm/images/disk/vmname_boot.img,device=disk,bus=virtio,size=40,sparse=true,format=raw --disk 
/kvm/images/iso/ubuntu.iso,device=cdrom -w bridge=br0,model=virtio --vnc --noautoconsole</pre></div><br /><p>&nbsp;</p></div></div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/380368.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2012-06-08 17:55 <a href="http://www.blogjava.net/ivanwan/archive/2012/06/08/380368.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Cassandra, MongoDB, CouchDB, Redis, Riak, and HBase compared</title><link>http://www.blogjava.net/ivanwan/archive/2011/07/05/353713.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Tue, 05 Jul 2011 07:11:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/07/05/353713.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/353713.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/07/05/353713.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/353713.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/353713.html</trackback:ping><description><![CDATA[<div><p>The title of this post may smack of click-bait. With NoSQL riding high these days, NoSQL products are blooming everywhere, yet each one has its own characteristics, strengths, and scenarios it does not fit. This post analyzes <a href="http://cassandra.apache.org/">Cassandra</a>,&nbsp;<a href="http://www.mongodb.org/">MongoDB</a>,&nbsp;<a href="http://couchdb.apache.org/">CouchDB</a>,&nbsp;<a href="http://redis.io/">Redis</a>,&nbsp;<a href="http://www.basho.com/Riak.html">Riak</a> and&nbsp;<a href="http://hbase.apache.org/">HBase</a> from several angles; by the end you should have a feel for the characteristics of each of these NoSQL products.</p> <table border="1"> <tbody> <tr> <td> <h2><a href="http://blog.nosqlfan.com/tags/couchdb" title="查看 CouchDB 的全部文章" target="_blank">CouchDB</a></h2> </td> <td> <ul><li><strong>Written in:</strong> Erlang</li><li><strong>Main point:</strong> DB consistency, ease of 
use</li><li><strong>License:</strong> Apache</li><li><strong>Protocol:</strong> HTTP/REST</li><li>Bi-directional (!) replication,</li><li>continuous or ad-hoc,</li><li>with conflict detection,</li><li>thus, master-master replication. (!)</li><li>MVCC &#8211; write operations do not block reads</li><li>Previous versions of documents are available</li><li>Crash-only (reliable) design</li><li>Needs compacting from time to time</li><li>Views: embedded map/reduce</li><li>Formatting views: lists &amp; shows</li><li>Server-side document validation possible</li><li>Authentication possible</li><li>Real-time updates via _changes (!)</li><li>Attachment handling</li><li>thus,&nbsp;<a href="http://couchapp.org/">CouchApps</a> (standalone js apps)</li><li>jQuery library included</li></ul> <p><strong>Best used:</strong> For accumulating, occasionally changing data, on which pre-defined queries are to be run. Places where versioning is important.</p> <p><strong>For example:</strong> CRM, CMS systems. Master-master replication is an especially interesting feature, allowing easy multi-site deployments.</p></td> </tr> <tr> <td> <h2><a href="http://blog.nosqlfan.com/tags/redis" title="查看 Redis 的全部文章" target="_blank">Redis</a></h2> </td> <td> <ul><li><strong>Written in:</strong> C/C++</li><li><strong>Main point:</strong> Blazing fast</li><li><strong>License:</strong> BSD</li><li><strong>Protocol:</strong> Telnet-like</li><li>Disk-backed in-memory database,</li><li>but since 2.0, it can swap to disk.</li><li>Master-slave replication</li><li>Simple keys and values,</li><li>but&nbsp;<a href="http://redis.io/commands">complex operations</a> like ZREVRANGEBYSCORE</li><li>INCR &amp; co (good for rate limiting or statistics)</li><li>Has sets (also union/diff/inter)</li><li>Has lists (also a queue; blocking pop)</li><li>Has hashes (objects of multiple fields)</li><li>Of all these databases, only <a href="http://blog.nosqlfan.com/tags/redis" title="查看 Redis 的全部文章" target="_blank">Redis</a> does 
transactions (!)</li><li>Values can be set to expire (as in a cache)</li><li>Sorted sets (high score table, good for range queries)</li><li>Pub/Sub and WATCH on data changes (!)</li></ul> <p><strong>Best used:</strong> For rapidly changing data with a foreseeable database size (should fit mostly in memory).</p> <p><strong>For example:</strong> Stock prices. Analytics. Real-time data collection. Real-time communication.</p></td> </tr> <tr> <td> <h2><a href="http://blog.nosqlfan.com/tags/mongodb" title="查看 MongoDB 的全部文章" target="_blank">MongoDB</a></h2> </td> <td> <ul><li><strong>Written in:</strong> C++</li><li><strong>Main point:</strong> Retains some friendly properties of SQL. (Query, index)</li><li><strong>License:</strong> AGPL (Drivers: Apache)</li><li><strong>Protocol:</strong> Custom, binary (BSON)</li><li>Master/slave replication</li><li>Queries are javascript expressions</li><li>Run arbitrary javascript functions server-side</li><li>Better update-in-place than <a href="http://blog.nosqlfan.com/tags/couchdb" title="查看 CouchDB 的全部文章" target="_blank">CouchDB</a></li><li>Sharding built-in</li><li>Uses memory mapped files for data storage</li><li>Performance over features</li><li>After crash, it needs to repair tables</li><li>Better durability coming in V1.8</li></ul> <p><strong>Best used:</strong> If you need dynamic queries. If you prefer to define indexes, not map/reduce functions. If you need good performance on a big DB. 
If you wanted CouchDB, but your data changes  too much, filling up disks.</p> <p><strong>For example:</strong> For all things that you would do with MySQL or PostgreSQL, but having predefined columns really holds you back.</p></td> </tr> <tr> <td> <h2><a href="http://blog.nosqlfan.com/tags/cassandra" title="查看 Cassandra 的全部文章" target="_blank">Cassandra</a></h2> </td> <td> <ul><li><strong>Written in:</strong> Java</li><li><strong>Main point:</strong> Best of BigTable and Dynamo</li><li><strong>License:</strong> Apache</li><li><strong>Protocol:</strong> Custom, binary (Thrift)</li><li>Tunable trade-offs for distribution and replication (N,&nbsp;R,&nbsp;W)</li><li>Querying by column, range of keys</li><li>BigTable-like features: columns, column families</li><li>Writes are much faster than reads (!)</li><li>Map/reduce possible with Apache Hadoop</li><li>I admit being a bit biased against it, because of the bloat and  complexity it has partly because of Java (configuration, seeing  exceptions, etc)</li></ul> <p><strong>Best used:</strong> When you write more than you read  (logging). If every component of the system must be in Java. (&#8220;No one  gets fired for choosing Apache&#8217;s stuff.&#8221;)</p> <p><strong>For example:</strong> Banking, financial industry (though not  necessarily for financial transactions, but these industries are much  bigger than that.) 
Writes are faster than reads, so one natural niche is  real time data analysis.</p></td> </tr> <tr> <td> <h2><a href="http://blog.nosqlfan.com/tags/riak" title="查看 Riak 的全部文章" target="_blank">Riak</a></h2> </td> <td> <ul><li><strong>Written in:</strong> Erlang &amp; C, some Javascript</li><li><strong>Main point:</strong> Fault tolerance</li><li><strong>License:</strong> Apache</li><li><strong>Protocol:</strong> HTTP/REST</li><li>Tunable trade-offs for distribution and replication (N,&nbsp;R,&nbsp;W)</li><li>Pre- and post-commit hooks,</li><li>for validation and security.</li><li>Built-in full-text search</li><li>Map/reduce in javascript or Erlang</li><li>Comes in &#8220;open source&#8221; and &#8220;enterprise&#8221; editions</li></ul> <p><strong>Best used:</strong> If you want something <a href="http://blog.nosqlfan.com/tags/cassandra" title="查看 Cassandra 的全部文章" target="_blank">Cassandra</a>-like  (Dynamo-like), but no way you&#8217;re gonna deal with the bloat and  complexity. If you need very good single-site scalability, availability  and fault-tolerance, but you&#8217;re ready to pay for multi-site replication.</p> <p><strong>For example:</strong> Point-of-sales data collection. Factory control systems. 
Places where even seconds of downtime hurt.</p></td> </tr> <tr> <td> <h2>HBase</h2> </td> <td> <ul><li><strong>Written in:</strong> Java</li><li><strong>Main point:</strong> Billions of rows X millions of columns</li><li><strong>License:</strong> Apache</li><li><strong>Protocol:</strong> HTTP/REST (also Thrift)</li><li>Modeled after BigTable</li><li>Map/reduce with Hadoop</li><li>Query predicate push down via server side scan and get filters</li><li>Optimizations for real time queries</li><li>A high performance Thrift gateway</li><li>HTTP supports XML, Protobuf, and binary</li><li>Cascading, hive, and pig source and sink modules</li><li>Jruby-based (JIRB) shell</li><li>No single point of failure</li><li>Rolling restart for configuration changes and minor upgrades</li><li>Random access performance is like MySQL</li></ul> <p><strong>Best used:</strong> If you&#8217;re in love with BigTable. <img src="http://blog.nosqlfan.com/wp-includes/images/smilies/icon_smile.gif" alt=":)" />  And when you need random, realtime read/write access to your Big Data.</p> <p><strong>For example:</strong> Facebook Messaging Database (more general example coming soon)</p></td> </tr> </tbody> </table> <p>原文链接：<a href="http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis">Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison</a> </p></div><img src ="http://www.blogjava.net/ivanwan/aggbug/353713.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-07-05 15:11 <a href="http://www.blogjava.net/ivanwan/archive/2011/07/05/353713.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Java虚拟机类型卸载和类型更新解析</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/16/352458.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 16 Jun 2011 12:05:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/16/352458.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352458.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/16/352458.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352458.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352458.html</trackback:ping><description><![CDATA[<div>&nbsp;前面系统讨论过java类型加载(loading)的问题，在这篇文章中简要分析一下java类型卸载(unloading)的问题，并简要分析一下如何解决运行时加载newly compiled version的问题。<br /><br />【相关规范摘要】<br />&nbsp;&nbsp;&nbsp; 首先看一下，关于java虚拟机规范中是如何阐述类型卸载(unloading)的：<br />&nbsp;&nbsp;&nbsp; A class or interface may be unloaded if and only if its class loader is unreachable. The bootstrap class loader is always reachable; as a result, system classes may never be unloaded.<br />&nbsp;&nbsp;&nbsp; Java虚拟机规范中关于类型卸载的内容就这么简单两句话，大致意思就是：只有当加载该类型的类加载器实例(非类加载器类型)为unreachable状态时，当前被加载的类型才被卸载.启动类加载器实例永远为reachable状态，由启动类加载器加载的类型可能永远不会被卸载.<br /><br />&nbsp;&nbsp;&nbsp; 我们再看一下Java语言规范提供的关于类型卸载的更详细的信息(部分摘录)：<br />&nbsp;&nbsp;&nbsp; //摘自JLS 12.7 Unloading of Classes and Interfaces<br />&nbsp;&nbsp;&nbsp; 1、An implementation of the Java programming language may unload classes.<br />&nbsp;&nbsp;&nbsp; 2、Class unloading is an optimization that helps reduce memory use. 
Obviously, the semantics of a program should not depend on whether and how a system chooses to implement an optimization such as class unloading.<br />&nbsp;&nbsp;&nbsp; 3、Consequently, whether a class or interface has been unloaded or not should be transparent to a program.<br /><br />&nbsp;&nbsp;&nbsp; 通过以上我们可以得出结论： 类型卸载(unloading)仅仅是作为一种减少内存使用的性能优化措施存在的，具体和虚拟机实现有关，对开发者来说是透明的.<br /><br />&nbsp;&nbsp;&nbsp; 纵观java语言规范及其相关的API规范，找不到显式类型卸载(unloading)的接口， 换句话说： <br />&nbsp;&nbsp;&nbsp; 1、一个已经加载的类型被卸载的几率很小，至少被卸载的时间是不确定的<br />&nbsp;&nbsp;&nbsp; 2、一个被特定类加载器实例加载的类型运行时可以认为是无法被更新的<br /><br />【类型卸载进一步分析】<br />&nbsp;&nbsp;&nbsp;&nbsp; 前面提到过，如果想卸载某类型，必须保证加载该类型的类加载器处于unreachable状态，现在我们再看看有关unreachable状态的解释：<br />&nbsp;&nbsp;&nbsp; 1、A reachable object is any object that can be accessed in any potential continuing computation from any live thread.<br />&nbsp;&nbsp;&nbsp; 2、finalizer-reachable: A finalizer-reachable object can be reached from some finalizable object through some chain of references, but not from any live thread. 
An unreachable object cannot be reached by either means.<br /><br />&nbsp;&nbsp;&nbsp; 某种程度上讲，在一个稍微复杂的java应用中，我们很难准确判断出一个实例是否处于unreachable状态，所以为了更加准确的逼近这个所谓的unreachable状态，我们下面的测试代码尽量简单一点.<br />&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp; 【测试场景一】使用自定义类加载器加载， 然后测试将其设置为unreachable的状态<br />&nbsp;&nbsp;&nbsp; 说明：<br />&nbsp;&nbsp;&nbsp; 1、自定义类加载器(为了简单起见， 这里就假设加载当前工程以外D盘某文件夹的class)<br />&nbsp;&nbsp;&nbsp; 2、假设目前有一个简单自定义类型MyClass对应的字节码存在于D:/classes目录下<br />&nbsp;&nbsp; &nbsp;<br />public class MyURLClassLoader extends URLClassLoader { <br />&nbsp;&nbsp; public MyURLClassLoader() { <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; super(getMyURLs()); <br />&nbsp;&nbsp; } <br /><br />&nbsp;&nbsp; private static URL[] getMyURLs() { <br />&nbsp;&nbsp;&nbsp; try { <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return new URL[]{new File("D:/classes/").toURL()}; <br />&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return null; <br />&nbsp;&nbsp;&nbsp; } <br />&nbsp; } <br />} <br /><br />&nbsp;1 public class Main { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp; public static void main(String[] args) { <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MyURLClassLoader classLoader = new MyURLClassLoader(); <br />&nbsp;5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded = classLoader.loadClass("MyClass"); <br />&nbsp;6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded.getName()); <br />&nbsp;7 <br />&nbsp;8&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; classLoaded = null; <br />&nbsp;9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; classLoader = null; <br />10 <br />11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("开始GC"); <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.gc(); <br
/>13&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("GC完成"); <br />14&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />15&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br />16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } <br />17&nbsp;&nbsp;&nbsp;&nbsp; } <br />18 } <br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 我们增加虚拟机参数-verbose:gc来观察垃圾收集的情况，对应输出如下：&nbsp; &nbsp;<br />MyClass <br />开始GC <br />[Full GC[Unloading class MyClass] <br />207K-&gt;131K(1984K), 0.0126452 secs] <br />GC完成 <br /><br />&nbsp;&nbsp;&nbsp; 【测试场景二】使用系统类加载器加载，但是无法将其设置为unreachable的状态<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 说明：将场景一中的MyClass类型字节码文件放置到工程的输出目录下，以便系统类加载器可以加载<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;1 public class Main { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp; public static void main(String[] args) { <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded =&nbsp; ClassLoader.getSystemClassLoader().loadClass( <br />&nbsp;5 "MyClass"); <br />&nbsp;6 <br />&nbsp;7 <br />&nbsp;8&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(sun.misc.Launcher.getLauncher().getClassLoader()); <br />&nbsp;9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded.getClassLoader()); <br />10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(Main.class.getClassLoader()); <br />11 <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; classLoaded = null; <br />13 <br />14&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("开始GC"); <br />15&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.gc(); <br />16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("GC完成"); <br />17 <br />18&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //判断当前系统类加载器是否有被引用(是否是unreachable状态) <br />19&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(Main.class.getClassLoader()); <br />20&nbsp;&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />21&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br
/>22&nbsp;&nbsp;&nbsp;&nbsp; } <br />23&nbsp;&nbsp; } <br />24 } <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 我们增加虚拟机参数-verbose:gc来观察垃圾收集的情况， 对应输出如下： <br />sun.misc.Launcher$AppClassLoader@197d257 <br />sun.misc.Launcher$AppClassLoader@197d257 <br />sun.misc.Launcher$AppClassLoader@197d257 <br />开始GC <br />[Full GC 196K-&gt;131K(1984K), 0.0130748 secs] <br />GC完成 <br />sun.misc.Launcher$AppClassLoader@197d257 <br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 由于系统ClassLoader实例(sun.misc.Launcher$AppClassLoader@197d257)加载了很多类型，而且又没有明确的接口将其设置为null，所以我们无法将加载MyClass类型的系统类加载器实例设置为unreachable状态，所以通过测试结果我们可以看出，MyClass类型并没有被卸载.(说明： 像类加载器实例这种较为特殊的对象一般在很多地方被引用， 会在虚拟机中呆比较长的时间)<br /><br />&nbsp;&nbsp;&nbsp; 【测试场景三】使用扩展类加载器加载， 但是无法将其设置为unreachable的状态<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 说明：将测试场景二中的MyClass类型字节码文件打包成jar放置到JRE扩展目录下，以便扩展类加载器可以加载得到。由于标准扩展ClassLoader实例(sun.misc.Launcher$ExtClassLoader@7259da)加载了很多类型，而且又没有明确的接口将其设置为null，所以我们无法将加载MyClass类型的扩展类加载器实例设置为unreachable状态，所以通过测试结果我们可以看出，MyClass类型并没有被卸载.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;1 public class Main { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; public static void main(String[] args) { <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded = ClassLoader.getSystemClassLoader().getParent() <br />&nbsp;5 .loadClass("MyClass"); <br />&nbsp;6 <br />&nbsp;7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded.getClassLoader()); <br />&nbsp;8 <br />&nbsp;9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; classLoaded = null; <br />10 <br />11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("开始GC"); <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.gc(); <br
/>13&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("GC完成"); <br />14&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //判断当前标准扩展类加载器是否有被引用(是否是unreachable状态) <br />15&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(Main.class.getClassLoader().getParent()); <br />16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />17&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br />18&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } <br />19&nbsp;&nbsp;&nbsp; } <br />20 } <br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 我们增加虚拟机参数-verbose:gc来观察垃圾收集的情况，对应输出如下：<br />sun.misc.Launcher$ExtClassLoader@7259da <br />开始GC <br />[Full GC 199K-&gt;133K(1984K), 0.0139811 secs] <br />GC完成 <br />sun.misc.Launcher$ExtClassLoader@7259da <br /><br /><br />&nbsp;&nbsp;&nbsp; 关于启动类加载器我们就不需再做相关的测试了，jvm规范和JLS中已经有明确的说明了.<br /><br /><br />&nbsp;&nbsp;&nbsp; 【类型卸载总结】<br />&nbsp;&nbsp;&nbsp; 通过以上的相关测试(虽然测试的场景较为简单)我们可以大致这样概括：<br />&nbsp;&nbsp;&nbsp; 1、由启动类加载器加载的类型在整个运行期间是不可能被卸载的(jvm和jls规范).<br />&nbsp;&nbsp;&nbsp; 2、被系统类加载器和标准扩展类加载器加载的类型在运行期间不太可能被卸载，因为系统类加载器实例或者标准扩展类的实例基本上在整个运行期间总能直接或者间接的访问的到，其达到unreachable的可能性极小.(当然，在虚拟机快退出的时候可以，因为不管ClassLoader实例或者Class(java.lang.Class)实例也都是在堆中存在，同样遵循垃圾收集的规则).<br />&nbsp;&nbsp;&nbsp; 3、被开发者自定义的类加载器实例加载的类型只有在很简单的上下文环境中才能被卸载，而且一般还要借助于强制调用虚拟机的垃圾收集功能才可以做到.可以预想，稍微复杂点的应用场景中(尤其很多时候，用户在开发自定义类加载器实例的时候采用缓存的策略以提高系统性能)，被加载的类型在运行期间也是几乎不太可能被卸载的(至少卸载的时间是不确定的).<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 综合以上三点，我们可以默认前面的结论1， 一个已经加载的类型被卸载的几率很小，至少被卸载的时间是不确定的.同时，我们可以看的出来，开发者在开发代码时候，不应该对虚拟机的类型卸载做任何假设的前提下来实现系统中的特定功能.<br />&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 【类型更新进一步分析】<br />&nbsp;&nbsp;&nbsp; 前面已经明确说过，被一个特定类加载器实例加载的特定类型在运行时是无法被更新的.注意这里说的<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 是一个特定的类加载器实例，而非一个特定的类加载器类型.<br />&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 【测试场景四】<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 
说明：现在要删除前面已经放在工程输出目录下和扩展目录下的对应的MyClass类型的字节码 <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;1 public class Main { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; public static void main(String[] args) { <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MyURLClassLoader classLoader = new MyURLClassLoader(); <br />&nbsp;5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded1 = classLoader.loadClass("MyClass"); <br />&nbsp;6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded2 = classLoader.loadClass("MyClass"); <br />&nbsp;7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //判断两次加载classloader实例是否相同 <br />&nbsp;8&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded1.getClassLoader() == classLoaded2.getClassLoader()); <br />&nbsp;9 <br />10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //判断两个Class实例是否相同 <br />11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded1 == classLoaded2); <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />13&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br />14&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } <br />15&nbsp;&nbsp;&nbsp; } <br />16 } <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 输出如下：<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; true<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; true<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 通过结果我们可以看出来，两次加载获取到的两个Class类型实例是相同的.那是不是确实是我们的自定义<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 类加载器真正意义上加载了两次呢(即从获取class字节码到定义class类型&#8230;整个过程呢)?<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 通过对java.lang.ClassLoader的loadClass(String name, boolean resolve)方法进行调试，我们可以看出来，第二次加载并不是真正意义上的加载，而是直接返回了上次加载的结果.<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 说明：为了调试方便， 在Class classLoaded2 = 
classLoader.loadClass("MyClass");行设置断点，然后单步跳入， 可以看到第二次加载请求返回的结果直接是上次加载的Class实例. (调试过程中的截图略，最好能自己调试一下).<br />&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 【测试场景五】同一个类加载器实例重复加载同一类型<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 说明：首先要对已有的用户自定义类加载器做一定的修改，要覆盖已有的类加载逻辑， MyURLClassLoader.java类简要修改如下，然后重新运行测试场景四中的测试代码<br />&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;1 public class MyURLClassLoader extends URLClassLoader { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp; //省略部分的代码和前面相同，只是新增如下覆盖方法 <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp; /* <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp; * 覆盖默认的加载逻辑，如果是D:/classes/下的类型每次强制重新完整加载 <br />&nbsp;5&nbsp;&nbsp;&nbsp;&nbsp; * <br />&nbsp;6&nbsp;&nbsp;&nbsp;&nbsp; * @see java.lang.ClassLoader#loadClass(java.lang.String) <br />&nbsp;7&nbsp;&nbsp;&nbsp;&nbsp; */ <br />&nbsp;8&nbsp;&nbsp;&nbsp;&nbsp; @Override <br />&nbsp;9&nbsp;&nbsp;&nbsp;&nbsp; public Class&lt;?&gt; loadClass(String name) throws ClassNotFoundException { <br />10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //首先调用系统类加载器加载 <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class c = ClassLoader.getSystemClassLoader().loadClass(name); <br />13&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return c; <br />14&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (ClassNotFoundException e) { <br />15&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; // 如果系统类加载器及其父类加载器加载不上，则调用自身逻辑来加载D:/classes/下的类型 <br />16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return this.findClass(name); <br />17&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } <br />18&nbsp;&nbsp; } <br />19 }<br />说明： this.findClass(name)会进一步调用父类URLClassLoader中的对应方法，其中涉及到了defineClass的调用，所以说现在类加载器MyURLClassLoader会针对D:/classes/目录下的类型进行真正意义上的强制加载并定义对应的类型信息.<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 测试输出如下：<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Exception in thread "main" java.lang.LinkageError: duplicate class definition: MyClass<br
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.ClassLoader.defineClass1(Native Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.ClassLoader.defineClass(ClassLoader.java:620)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader.access$100(URLClassLoader.java:56)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader$1.run(URLClassLoader.java:195)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.security.AccessController.doPrivileged(Native Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader.findClass(URLClassLoader.java:188)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at MyURLClassLoader.loadClass(MyURLClassLoader.java:51)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at Main.main(Main.java:27)<br />&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 结论：如果同一个类加载器实例重复强制加载(含有定义类型defineClass动作)相同类型，会引起java.lang.LinkageError: duplicate class definition.<br />&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 【测试场景六】同一个加载器类型的不同实例重复加载同一类型<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;1 public class Main { <br />&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp; public static void main(String[] args) { <br />&nbsp;3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try { <br />&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MyURLClassLoader classLoader1 = new MyURLClassLoader(); <br />&nbsp;5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded1 = classLoader1.loadClass("MyClass"); <br />&nbsp;6&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MyURLClassLoader classLoader2 = new MyURLClassLoader(); <br />&nbsp;7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Class classLoaded2 = classLoader2.loadClass("MyClass"); <br />&nbsp;8 <br
/>&nbsp;9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //判断两个Class实例是否相同 <br />10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(classLoaded1 == classLoaded2); <br />11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (Exception e) { <br />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; e.printStackTrace(); <br />13&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } <br />14&nbsp;&nbsp;&nbsp; } <br />15 } <br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 测试对应的输出如下：<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; false<br />&nbsp;&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 【类型更新总结】&nbsp; &nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp; 由不同类加载器实例重复强制加载(含有定义类型defineClass动作)同一类型不会引起java.lang.LinkageError错误， 但是加载结果对应的Class类型实例是不同的，即实际上是不同的类型(虽然包名+类名相同). 如果强制转化使用，会引起ClassCastException.(说明： 头一段时间那篇文章中解释过，为什么不同类加载器加载同名类型实际得到的结果其实是不同类型， 在JVM中一个类用其全名和一个加载类ClassLoader的实例作为唯一标识，不同类加载器加载的类将被置于不同的命名空间).<br /><br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 应用场景：我们在开发的时候可能会遇到这样的需求，就是要动态加载某指定类型class文件的不同版本，以便能动态更新对应功能.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 建议：<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1. 不要寄希望于等待指定类型的以前版本被卸载，卸载行为对java开发人员是透明的.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2. 比较可靠的做法是，每次创建特定类加载器的新实例来加载指定类型的不同版本，这种使用场景下，一般就要牺牲缓存特定类型的类加载器实例以带来性能优化的策略了.对于指定类型已经被加载的版本， 会在适当时机达到unreachable状态，被unload并垃圾回收.每次使用完类加载器特定实例后(确定不需要再使用时)， 将其显式赋为null， 这样可能会比较快的达到jvm 规范中所说的类加载器实例unreachable状态， 增大已经不再使用的类型版本被尽快卸载的机会.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3. 不得不提的是，每次用新的类加载器实例去加载指定类型的指定版本，确实会带来一定的内存消耗，一般类加载器实例会在内存中保留比较长的时间. 
在bea开发者网站上找到一篇相关的文章(有专门分析ClassLoader的部分)：http://dev2dev.bea.com/pub/a/2005/06/memory_leaks.html<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 写的过程中参考了jvm规范和jls， 并参考了sun公司官方网站上的一些bug的分析文档。<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 欢迎大家批评指正！<br /><br /><br />本博客中的所有文章、随笔除了标题中含有引用或者转载字样的，其他均为原创。转载请注明出处，谢谢！</div><img src ="http://www.blogjava.net/ivanwan/aggbug/352458.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-16 20:05 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/16/352458.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>hbase单独启动region server</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/16/352414.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 16 Jun 2011 04:10:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/16/352414.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352414.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/16/352414.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352414.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352414.html</trackback:ping><description><![CDATA[<div>启动集群中所有的regionserver<br /><div fc05="" fc11="" nbw-blog="" ztag="" js-fs2="">./hbase-daemons.sh start regionserver<br />启动某个regionserver<br />./hbase-daemon.sh start regionserver</div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352414.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-16 12:10 <a 
href="http://www.blogjava.net/ivanwan/archive/2011/06/16/352414.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Htable数据的访问问题</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/15/352369.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Wed, 15 Jun 2011 09:17:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/15/352369.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352369.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/15/352369.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352369.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352369.html</trackback:ping><description><![CDATA[<div> <div><p>做了几天工程，对HBase中的表操作熟悉了一下。下面总结一下常用的表操作和容易出错的几个方面。当然主要来源于大牛们的文章。我在前人的基础上稍作解释。</p> <p>1.连接HBase中的表testtable,用户名：root,密码：root</p> <p>public void ConnectHBaseTable()<br />&nbsp;{<br />&nbsp;&nbsp;Configuration conf = new Configuration();&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; conf.set("hadoop.job.ugi", "root,root");&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />&nbsp;&nbsp;HBaseConfiguration config = new HBaseConfiguration();<br />&nbsp;&nbsp;try<br />&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;table = new HTable(config, "testtable");<br />&nbsp;&nbsp;}catch(Exception e){e.printStackTrace();}<br />&nbsp;}</p> <p>2.根据行名name获得一行数据，存入Result.注意HBase中的表数据是字节存储的。</p> <p>&nbsp;&nbsp; 下面的例子表示获得行名为name的行的famA列族col1列的数据。</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; String rowId&nbsp;= "name";<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Get&nbsp;get&nbsp;=&nbsp;new&nbsp;Get(Bytes.toBytes(rowId));<br />&nbsp; &nbsp; &nbsp;&nbsp;Result&nbsp;result&nbsp;=&nbsp;hTable.get(get);<br />&nbsp; &nbsp; 
&nbsp;&nbsp;byte[]&nbsp;value&nbsp;=&nbsp;result.getValue(famA,&nbsp;col1);<br />&nbsp; &nbsp; &nbsp;&nbsp;System.out.println(Bytes.toString(value));<br /></p> <p>3.向表中存数据</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 下面的例子表示写入一行。行名为abcd，famA列族col1列的数据为"hello world!"。</p> <p><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; byte[]&nbsp;rowId&nbsp;=&nbsp;Bytes.toBytes("abcd");<br />&nbsp; &nbsp; &nbsp;&nbsp;byte[]&nbsp;famA&nbsp;=&nbsp;Bytes.toBytes("famA");<br />&nbsp; &nbsp; &nbsp;&nbsp;byte[]&nbsp;col1&nbsp;=&nbsp;Bytes.toBytes("col1");<br />&nbsp; &nbsp; &nbsp;&nbsp;Put&nbsp;put&nbsp;=&nbsp;new&nbsp;Put(rowId).<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;add(famA,&nbsp;col1,&nbsp;Bytes.toBytes("hello world!"));<br />&nbsp; &nbsp; &nbsp; hTable.put(put);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></p> <p><span>4.扫描的用法（scan）：便于获得自己需要的数据，相当于SQL查询。</span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; byte[]&nbsp;famA&nbsp;=&nbsp;Bytes.toBytes("famA");<br />&nbsp; &nbsp; &nbsp;&nbsp;byte[]&nbsp;col1&nbsp;=&nbsp;Bytes.toBytes("col1");&nbsp;&nbsp;<br /><br />&nbsp; &nbsp; &nbsp;&nbsp;HTable&nbsp;hTable&nbsp;=&nbsp;new&nbsp;HTable("test");&nbsp;&nbsp;<br /></span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;//表示要查询的行名是从a开始，到z结束。<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Scan&nbsp;scan&nbsp;=&nbsp;new&nbsp;Scan(Bytes.toBytes("a"),&nbsp;Bytes.toBytes("z"));<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;//用scan.setStartRow(Bytes.toBytes(""));设置起始行</span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //用scan.setStopRow(Bytes.toBytes(""));设置终止行</span></span></p>  <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //表示查询famA族col1列</span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; scan.addColumn(famA,&nbsp;col1);&nbsp;&nbsp;<br /></span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //注意，下面是filter的写法。相当于SQL的where子句</span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 
//表示famA族col1列的数据等于<span>"hello world!"<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>SingleColumnValueFilter&nbsp;singleColumnValueFilterA&nbsp;=&nbsp;new&nbsp;SingleColumnValueFilter(<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;famA,&nbsp;col1,&nbsp;CompareOp.EQUAL,&nbsp;Bytes.toBytes("hello world!"));<br />&nbsp; &nbsp; &nbsp; singleColumnValueFilterA.setFilterIfMissing(true);&nbsp;&nbsp;<br /></span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //表示famA族col1列的数据等于<span>"hello hbase!"<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>SingleColumnValueFilter&nbsp;singleColumnValueFilterB&nbsp;=&nbsp;new&nbsp;SingleColumnValueFilter(<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;famA,&nbsp;col1,&nbsp;CompareOp.EQUAL,&nbsp;Bytes.toBytes("hello hbase!"));<br />&nbsp; &nbsp; &nbsp; singleColumnValueFilterB.setFilterIfMissing(true);&nbsp;&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></span></p> <p><span><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;//表示famA族col1列的数据是两者中的一个<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FilterList&nbsp;filter&nbsp;=&nbsp;new&nbsp;FilterList(Operator.MUST_PASS_ONE,&nbsp;Arrays<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;.asList((Filter)&nbsp;singleColumnValueFilterA,<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; singleColumnValueFilterB));&nbsp;&nbsp;<br /><br />&nbsp; &nbsp; &nbsp; scan.setFilter(filter);&nbsp;&nbsp;<br /><br />&nbsp; &nbsp; &nbsp;&nbsp;ResultScanner&nbsp;scanner&nbsp;=&nbsp;hTable.getScanner(scan);&nbsp;&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;//遍历每个数据<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for&nbsp;(Result&nbsp;result&nbsp;:&nbsp;scanner)&nbsp;{<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;System.out.println(Bytes.toString(result.getValue(famA,&nbsp;col1)));<br />&nbsp; &nbsp; &nbsp;&nbsp;}<br /></span></span></p> <p><span><span>5.上面的代码容易出错的地方在于，需要导入HBase的类所在的包。导入时需要选择包，由于类可能出现在HBase的各个子包中，所以要选择好，下面列出常用的包。尽量用HBase的包</span></span></p> <p><span><span>import 
org.apache.hadoop.conf.Configuration;<br />import org.apache.hadoop.hbase.HBaseConfiguration;<br />import org.apache.hadoop.hbase.client.Get;<br />import org.apache.hadoop.hbase.client.HTable;<br />import org.apache.hadoop.hbase.client.Put;<br />import org.apache.hadoop.hbase.client.Result;<br />import org.apache.hadoop.hbase.client.ResultScanner;<br />import org.apache.hadoop.hbase.client.Scan;<br />import org.apache.hadoop.hbase.filter.Filter;<br />import org.apache.hadoop.hbase.filter.FilterList;<br />import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;<br />import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;<br />import org.apache.hadoop.hbase.filter.FilterList.Operator;<br />import org.apache.hadoop.hbase.util.Bytes;</span></span></p> <p><span><span>import java.io.IOException;<br />import java.text.SimpleDateFormat;<br />import java.util.Arrays;<br />import java.util.Date;</span></span></p>  <p><span><span>6.下面列出HBase常用的操作</span></span></p> <p><span><span>（1）时间戳到时间的转换.单一的时间戳无法给出直观的解释。</span></span></p> <p><span><span>public String GetTimeByStamp(String timestamp)<br />&nbsp;{</span></span></p> <p><span><span>&nbsp;&nbsp;long datatime= Long.parseLong(timestamp);&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp; Date date=new Date(datatime);&nbsp;&nbsp;&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp; SimpleDateFormat&nbsp;&nbsp; format=new&nbsp;&nbsp; SimpleDateFormat("yyyy-MM-dd HH:mm:ss");&nbsp;&nbsp;&nbsp;<br />&nbsp;&nbsp;&nbsp;&nbsp; String timeresult=format.format(date);<br />&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Time : "+timeresult);<br />&nbsp;&nbsp;&nbsp;&nbsp; return timeresult;<br />&nbsp;}</span></span></p> <p><span><span>（2）时间到时间戳的转换。注意时间是字符串格式。字符串与时间的相互转换，此不赘述</span></span><span><span>。</span></span></p> <p><span><span>public String GetStampByTime(String time)<br />&nbsp;{<br />&nbsp;&nbsp;String Stamp="";<br />&nbsp;&nbsp;SimpleDateFormat sdf=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");<br />&nbsp;&nbsp;Date date;<br />&nbsp;&nbsp;try<br
/>&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;date=sdf.parse(time);<br />&nbsp;&nbsp;&nbsp;Stamp=date.getTime()+"000";<br />&nbsp;&nbsp;&nbsp;System.out.println(Stamp);<br />&nbsp;&nbsp;}catch(Exception e){e.printStackTrace();}<br />&nbsp;&nbsp;return Stamp;<br />&nbsp;}</span></span></p>   <p><span><span>上面就是我的一点心得。以后碰到什么问题，再来解决。</span></span></p>  <p><span><span>参考文献：<a href="http://www.nearinfinity.com/blogs/aaron_mccurry/using_hbase-dsl.html">http://www.nearinfinity.com/blogs/aaron_mccurry/using_hbase-dsl.html</a></span></span></p></div></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352369.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-15 17:17 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/15/352369.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase性能调优</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/15/352350.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Wed, 15 Jun 2011 05:39:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/15/352350.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352350.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/15/352350.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352350.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352350.html</trackback:ping><description><![CDATA[<div><p>因<a href="http://hbase.apache.org/book.html#performance" target="_blank">官方Book Performance Tuning</a>部分章节没有按配置项进行索引，不能达到快速查阅的效果。所以我以配置项驱动，重新整理了原文，并补充一些自己的理解，如有错误，欢迎指正。</p> <h3>配置优化</h3> <p><strong>zookeeper.session.timeout</strong><br /> <strong>默认值</strong>：3分钟（180000ms）<br /> 
<strong>说明</strong>：RegionServer与Zookeeper间的连接超时时间。当超时时间到后，RegionServer会被Zookeeper从RS集群清单中移除，HMaster收到移除通知后，会对这台server负责的regions重新balance，让其他存活的RegionServer接管。<br /> <strong>调优</strong>：<br /> 这个timeout决定了RegionServer是否能够及时failover。设置成1分钟或更低，可以减少因等待超时而被延长的failover时间。<br /> 不过需要注意的是，对于一些Online应用，RegionServer从宕机到恢复的时间本身就很短（网络闪断、crash等故障，运维可快速介入），如果调低timeout时间，反而会得不偿失。因为当RegionServer被正式从RS集群中移除时，HMaster就开始做balance了，当故障的RS快速恢复后，这个balance动作是毫无意义的，反而会使负载不均匀，给RS带来更多负担。</p>  <p><strong>hbase.regionserver.handler.count</strong><br /> <strong>默认值</strong>：10<br /> <strong>说明</strong>：RegionServer的请求处理IO线程数。<br /> <strong>调优</strong>：<br /> 这个参数的调优与内存息息相关。<br /> 较少的IO线程，适用于处理单次请求内存消耗较高的Big PUT场景（大容量单次PUT或设置了较大cache的scan，均属于Big PUT）或RegionServer的内存比较紧张的场景。<br /> 较多的IO线程，适用于单次请求内存消耗低、TPS要求非常高的场景。<br /> 这里需要注意的是，如果server的region数量很少，大量的请求都落在一个region上，因快速充满memstore触发flush导致的读写锁会影响全局TPS，并不是IO线程数越高越好。<br /> 压测时，开启<a title="Enabling RPC-level logging" href="http://hbase.apache.org/book.html#rpc.logging">Enabling RPC-level logging</a>，可以同时监控每次请求的内存消耗和GC的状况，最后通过多次压测结果来合理调节IO线程数。<br /> 这里是一个案例&nbsp;<a href="http://software.intel.com/en-us/articles/hadoop-and-hbase-optimization-for-read-intensive-search-applications/" target="_blank">Hadoop and HBase Optimization for Read Intensive Search Applications</a>，作者在SSD的机器上设置IO线程数为100，仅供参考。</p> <p><strong>hbase.hregion.max.filesize</strong><br /> <strong>默认值</strong>：256M<br /> <strong>说明</strong>：在当前RegionServer上单个Region的大小，单个Region超过指定值时，这个Region会被自动split成更小的region。<br /> <strong>调优</strong>：<br /> 小region对split和compaction友好，因为拆分region或compact小region里的storefile速度很快，内存占用低。缺点是split和compaction会很频繁。<br /> 特别是数量较多的小region不停地split、compaction，会使响应时间波动很大，region数量太多不仅给管理上带来麻烦，甚至会引发一些HBase的bug。<br /> 一般512M以下的都算小region。</p> <p>大region则不太适合经常split和compaction，因为做一次compact和split会产生较长时间的停顿，对应用的读写性能冲击非常大。此外，大region意味着较大的storefile，compaction时对内存也是一个挑战。<br /> 当然，大region也有其用武之地：只要在某个访问量低峰的时间点统一做compact和split，大region就可以发挥优势了，毕竟它能保证绝大多数时间平稳的读写性能。</p> 
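<p>以hbase-site.xml为例，下面是调整该参数的一个示意片段（其中的取值只是假设的示例，需要按集群的region规模和写入模式自行调整）：</p>

```xml
<!-- hbase-site.xml 片段：调整单个region的上限大小（示例值，仅供参考） -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- 单位为字节，默认256MB（268435456）；此处示例调大为1GB，以降低split频率 -->
  <value>1073741824</value>
</property>
```

<p>修改配置后通常需要重启RegionServer才会生效。</p>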
<p>既然split和compaction如此影响性能，有没有办法去掉？<br /> compaction是无法避免的，split倒是可以从自动调整为手动。<br /> 只要把这个参数值调大到某个很难达到的值，比如100G，就可以间接禁用自动split（RegionServer不会对未到达100G的region做split）。<br /> 再配合<a title="class in org.apache.hadoop.hbase.util" href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.html">RegionSplitter</a>这个工具，在需要split时，手动split。<br /> 手动split在灵活性和稳定性上比起自动split要高很多，而且管理成本增加不多，比较推荐online实时系统使用。</p> <p>内存方面，小region在设置memstore的大小值上比较灵活，大region则过大过小都不行：过大会导致flush时app的IO wait增高，过小则因store file过多而使读性能降低。</p> <p><strong>hbase.regionserver.global.memstore.upperLimit/lowerLimit</strong></p> <p><strong>默认值：</strong>0.4/0.35<br /> <strong>upperLimit说明</strong>：hbase.hregion.memstore.flush.size这个参数的作用是当单个memstore达到指定值时，flush该memstore。但是，一台RegionServer可能有成百上千个memstore，即使每个memstore都未达到flush.size，jvm的heap也可能不够用。upperLimit就是为了限制memstores占用的总内存。<br /> 当RegionServer内所有memstore所占用的内存总和达到heap的40%时，HBase会强制block所有的更新并flush这些memstore，以释放所有memstore占用的内存。<br /> <strong>lowerLimit说明</strong>：同upperLimit，只不过当全局memstore的内存达到35%时，它不会flush所有的memstore，而是找一些内存占用较大的memstore做个别flush，当然更新还是会被block。lowerLimit算是一个在全局flush前的补救措施。可以想象一下，如果memstore需要在一段时间内全部flush，且这段时间内无法接受写请求，对HBase集群的性能影响是很大的。<br /> <strong>调优</strong>：这是一个Heap内存保护参数，默认值已经能适用大多数场景。它的调整一般是为了配合某些专属优化，比如读密集型应用，将读缓存开大、降低该值，腾出更多内存给其他模块使用。<br /> 这个参数会给使用者带来什么影响？<br /> 比如，10G内存，100个region，每个memstore 64M，假设每个region只有一个memstore，那么当100个memstore平均占用到50%左右时，就会达到lowerLimit的限制。假设此时，其他memstore同样有很多的写请求进来，在那些大的region未flush完之前，就可能又超过了upperLimit，则所有region都会被block，开始触发全局flush。</p> <p><strong>hfile.block.cache.size</strong></p> <p><strong>默认值</strong>：0.2<br /> <strong>说明</strong>：storefile的读缓存占用Heap的大小百分比，0.2表示20%。该值直接影响数据读的性能。<br /> <strong>调优</strong>：当然是越大越好，如果读比写少，开到0.4-0.5也没问题。如果读写较均衡，0.3左右。如果写比读多，果断用默认值。设置这个值的时候，你同时要参考&nbsp;hbase.regionserver.global.memstore.upperLimit&nbsp;，该值是memstore占heap的最大百分比，两个参数一个影响读，一个影响写。如果两值加起来超过80-90%，会有OOM的风险，谨慎设置。</p> <p><strong>hbase.hstore.blockingStoreFiles</strong></p> 
<p><strong>默认值：</strong>7<br /> <strong>说明</strong>：在compaction时，如果一个Store（Column Family）内有超过7个storefile需要合并，则block所有的写请求，进行flush，限制storefile数量增长过快。<br /> <strong>调优</strong>：block请求会影响当前region的读写性能，将值设为单个region可以支撑的最大store file数量会是个不错的选择。最大storefile数量可通过region size/memstore size来计算。如果你将region size设为无限大，那么你需要预估一个region可能产生的最大storefile数。</p> <p><strong>hbase.hregion.memstore.block.multiplier</strong></p> <p><strong>默认值：</strong>2<br /> <strong>说明</strong>：当一个region里的memstore超过单个memstore.size两倍的大小时，block该region的所有请求，进行flush，释放内存。虽然我们设置了memstore的总大小，比如64M，但想象一下，在最后63.9M的时候，我Put了一个100M的数据，或写请求量暴增，最后一秒钟put了1万次，此时memstore的大小会瞬间暴涨到超过预期的memstore.size。这个参数的作用是当memstore的大小增至超过memstore.size时，block所有请求，遏制风险进一步扩大。<br /> <strong>调优</strong>：这个参数的默认值还是比较靠谱的。如果你预估你的正常应用场景（不包括异常）不会出现突发写或写的量可控，那么保持默认值即可。如果正常情况下，你的写量就会经常暴增，那么你应该调大这个倍数并调整其他参数值，比如hfile.block.cache.size和hbase.regionserver.global.memstore.upperLimit/lowerLimit，以预留更多内存，防止HBase server OOM。</p> <h3>其他</h3> <p><strong>启用LZO压缩</strong><br /> LZO对比HBase默认的GZip，前者性能较高，后者压缩比较高，具体参见&nbsp;<strong><a href="http://wiki.apache.org/hadoop/UsingLzoCompression" target="_top">Using LZO Compression</a>。</strong>对于想提高HBase读写性能的开发者，采用LZO是比较好的选择。对于非常在乎存储空间的开发者，则建议保持默认。</p> <p><strong>不要在一张表里定义太多的Column Family</strong></p> <p>HBase目前不能良好地处理超过2-3个CF的表。因为某个CF在flush发生时，它邻近的CF也会因关联效应被触发flush，最终导致系统产生很多IO。</p> <p><strong>批量导入</strong></p> <p>在批量导入数据到HBase前，你可以通过预先创建region，来平衡数据的负载。详见&nbsp;<a href="http://hbase.apache.org/book.html#precreate.regions" target="_blank">Table Creation: Pre-Creating Regions</a></p> <h3>HBase客户端优化</h3> <p><strong>AutoFlush</strong></p> <p>将<a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html" target="_top">HTable</a>的setAutoFlush设为false，可以支持客户端批量更新。即当Put填满客户端flush缓存时，才发送到服务端。<br /> 默认是true。</p> <p><strong>Scan Caching</strong></p> <p>scanner一次缓存多少数据来scan（从服务端一次抓多少数据回来scan）。<br /> 默认值是1，一次只取一条。</p> <p><strong>Scan Attribute Selection</strong></p> <p>scan时建议指定需要的Column 
Family，减少通信量，否则scan默认会返回整个row的所有数据（所有Column Family）。</p> <p><strong>Close ResultScanners</strong></p> <p>通过scan取完数据后，记得要关闭ResultScanner，否则RegionServer可能会出现问题。</p> <p><strong>Optimal Loading of Row Keys</strong></p> <p>当你scan一张表、返回结果只需要row key（不需要CF、qualifier、values、timestamps）时，你可以在scan实例中添加一个filterList，并设置MUST_PASS_ALL操作，filterList中add&nbsp;<a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html" target="_top">FirstKeyOnlyFilter</a>或<a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html" target="_top">KeyOnlyFilter</a>。这样可以减少网络通信量。</p> <p><strong>Turn off WAL on Puts</strong></p> <p>当Put某些非重要数据时，你可以设置writeToWAL(false)，来进一步提高写性能。writeToWAL(false)会在Put时放弃写WAL log。风险是，当RegionServer宕机时，可能你刚才Put的那些数据会丢失，且无法恢复。</p> <p><strong>启用Bloom Filter</strong></p> <p><a href="http://hbase.apache.org/book.html#blooms" target="_blank">Bloom Filter</a>通过空间换时间，提高读操作性能。</p> 			<p>转载请注明原文链接：<a href="http://kenwublog.com/hbase-performance-tuning" rel="bookmark">http://kenwublog.com/hbase-performance-tuning</a></p></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352350.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-15 13:39 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/15/352350.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase Compound Indexes</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/11/352094.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Sat, 11 Jun 2011 08:21:00 
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/11/352094.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352094.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/11/352094.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352094.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352094.html</trackback:ping><description><![CDATA[<div><p>We recently set up HBase and HBase-trx (from https://github.com/hbase-trx) to use multiple-column indexes with this code. After you compile it, just copy the jar and the hbase-trx jar into your HBase&#8217;s lib folder and you should be good to go!</p> <p>When you create a composite index, you can see the metadata for the index by looking at the table description. One of the properties will read &#8220;INDEXES =&gt;&#8221; followed by index names and &#8216;family:qualifier&#8217; style column names in the index.</p> <p>KeyGeneratorFactory:<br /> <code><br /> package com.ir.store.hbase.indexes;</code></p> <p><code>import java.util.List;</code></p> <p><code>import org.apache.hadoop.hbase.client.tableindexed.IndexKeyGenerator;</code></p> <p><code>public class KeyGeneratorFactory {</code></p> <p><code> public static IndexKeyGenerator getInstance(List columns) {<br /> return new HBaseIndexKeyGenerator(columns);<br /> }<br /> }<br /> </code><br /> HBaseIndexKeyGenerator:<br /> <code><br /> package com.ir.store.hbase.indexes;</code></p> <p><code>import java.io.DataInput;<br /> import java.io.DataOutput;<br /> import java.io.IOException;<br /> import java.util.ArrayList;<br /> import java.util.List;<br /> import java.util.Map;<br /> import org.apache.hadoop.hbase.client.tableindexed.IndexKeyGenerator;<br /> import org.apache.hadoop.hbase.util.Bytes;</code></p> 
<p><code>public class HBaseIndexKeyGenerator extends Object implements IndexKeyGenerator {<br /> public static byte[] KEYSEPERATOR = "~;?".getBytes();</code></p> <p><code>private int columnCount;<br /> private List columnNames = new ArrayList();</code></p> <p><code>public HBaseIndexKeyGenerator(List memberColumns) {<br /> // For new key generators<br /> columnNames = memberColumns;<br /> columnCount = memberColumns.size();<br /> }</code></p> <p><code>public HBaseIndexKeyGenerator() {<br /> // Hollow constructor for deserializing -- should call readFields shortly<br /> columnCount = 0;<br /> }</code></p> <p><code>public void readFields(DataInput binaryInput) throws IOException {<br /> columnCount = binaryInput.readInt();<br /> for (int currentColumn = 0; currentColumn &lt; columnCount; currentColumn++)<br /> columnNames.add(Bytes.readByteArray(binaryInput));<br /> }</code></p> <p><code>public void write(DataOutput binaryOutput) throws IOException {<br /> binaryOutput.writeInt(columnCount);<br /> for (byte[] columnName : columnNames)<br /> Bytes.writeByteArray(binaryOutput, columnName);<br /> }</code></p> <p><code>public byte[] createIndexKey(byte[] baseRowIdentifier, Map baseRowData) {<br /> byte[] indexRowIdentifier = null;<br /> for (byte[] columnName : columnNames) {<br /> if (indexRowIdentifier == null)<br /> indexRowIdentifier = baseRowData.get(columnName);<br /> else indexRowIdentifier = Bytes.add(indexRowIdentifier, HBaseIndexKeyGenerator.KEYSEPERATOR, baseRowData.get(columnName));<br /> }<br /> if (baseRowIdentifier != null)<br /> return Bytes.add(indexRowIdentifier, HBaseIndexKeyGenerator.KEYSEPERATOR, baseRowIdentifier);<br /> return indexRowIdentifier;<br /> }<br /> }</code></p></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352094.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-11 16:21 <a 
href="http://www.blogjava.net/ivanwan/archive/2011/06/11/352094.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase性能深度分析</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/10/352071.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 10 Jun 2011 15:33:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/10/352071.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352071.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/10/352071.html#Feedback</comments><slash:comments>1</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352071.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352071.html</trackback:ping><description><![CDATA[<div><p>对于Bigtable类型的分布式数据库应用来说，用户往往会对其性能状况有极大的兴趣，这其中又对实时数据插入性能更为关注。HBase作为Bigtable的一个实现，在这方面的性能会如何呢？这就需要通过测试数据来说话了。</p><p>数据插入性能测试的设计场景是这样的：取随机值的Rowkey长度为2000字节，固定值的Value长度为4000字节。由于单行Row插入速度太快，系统统计精度不够，所以将插入500行Row做一次耗时统计。</p><p>这里要对HBase的特点做个说明。首先是Rowkey值为何取随机数：这是因为HBase是对Rowkey进行排序的，随机Rowkey将被分配到不同的region上，这样才能发挥出分布式数据库的性能优点。而Value对于HBase来说不会进行任何解析，其数据是否变化，对性能不应该有任何影响。同时为了简单起见，所有的数据都将只插入到一个表格的同一个Column中。</p><p>在测试之初，需要对集群进行调优，关闭可能大量耗费内存、带宽以及CPU的服务，例如Apache的Http服务，保持集群的宁静度。此外，为了保证测试不受干扰，HBase的集群系统需要被独立，以保证不与HDFS所在的Hadoop集群有所交叉。</p><p>那么做好一切准备，就开始进行数据灌入：客户端从Zookeeper上查询到Regionserver的地址后，开始源源不断地向HBase的Regionserver上喂入Row。</p><p>这里，我写了一个通过JFreeChart来实时生成图片的程序：每3分钟，喂数据的客户端会将获取到的耗时统计打印在一张十字坐标图中，这些图又被保存在指定的web站点中，并通过http服务展示出来。在经过长时间不间断的测试后，我得到了如下图形：</p><p><a href="http://www.spnguru.com/wp-content/uploads/2010/11/clip_image002.jpg"><img src="http://www.spnguru.com/wp-content/uploads/2010/11/clip_image002.jpg" alt="数据插入耗时统计图" width="554" height="346" /></a></p><p>这个图形非常有特点：好似一条直线上，每隔一段时间就会泛起一个波浪，且两个高峰之间必有一个较矮的波浪，高峰的间隔则呈现出越来越大的趋势，而较矮的波浪恰好处于两高峰的中间位置。</p><p>为了解释这个现象，我对HDFS上HBase所在的主目录下的文件，以及被插入表格的region情况进行了实时监控，以期发现这些波浪上发生了什么事情。</p><p>回溯到客户端喂入数据的开始阶段：创建表格时，HDFS上便被创建了一个与表格同名的目录，该目录下将出现第一个region，region中会以family名创建一个目录，这个目录下才存在记录具体数据的文件。同时在该表表名目录下，还会生成一个&#8220;compaction.dir&#8221;目录，该目录将在family名目录下region文件超过指定数目时用于合并region。</p><p>当第一个region目录出现的时候，内存中最初被写入的数据将被保存到这个文件中，这个间隔是由选项&#8220;hbase.hregion.memstore.flush.size&#8221;决定的，默认是64MB，该region所在的Regionserver的内存中一旦有超过64MB的数据的时候，就将被写入到region文件中。这个文件将不断增殖，直到超过由&#8220;hbase.hregion.max.filesize&#8221;决定的文件大小时（默认是256MB，此时加上内存刷入的数据，实际最大可能到256+64M），该region将被执行split，立即被一切为二。其过程是在该目录下创建一个名为&#8220;.splits&#8221;的目录作为标记，然后由Regionserver将文件信息读取进来，分别写入到两个新的region目录中，最后再将老的region删除。这里的标记目录&#8220;.splits&#8221;将避免在split过程中发生其他操作，起到类似于多线程安全的锁功能。在新的region中，从老的region中切分出的数据独立为一个文件并不再接受新的数据（该文件大小超过了64M，最大可达到（256+64）/2=160MB），内存中新的数据将被保存到一个重新创建的文件中，该文件大小将为64MB。内存每刷新一次，region所在的目录下就将增加一个64M的文件，直到总文件数超过由&#8220;hbase.hstore.compactionThreshold&#8221;指定的数量时（默认为3），compaction过程就将被触发了。在上述值为3时，此时该region目录下，实际文件数只有两个，还有额外的一个正处于内存中将要被刷入到磁盘的过程中。Compaction过程是HBase的一个大动作：HBase不仅要将这些文件转移到&#8220;compaction.dir&#8221;目录进行压缩，而且在压缩后的文件超过256MB时，还必须立即进行split动作。这一系列行为在HDFS上可谓是翻山倒海，影响颇大。待Compaction结束之后，后续的split依然会持续进行一小段时间，直到所有的region都被切割分配完毕，HBase才会恢复平静并等待下一次数据从内存写入到HDFS的到来。</p><p>理解了上述过程，则必然对HBase的数据插入性能为何是上图所示的曲线的原因一目了然。与X轴几乎平行的直线，表明数据正在被写入HBase的Regionserver所在机器的内存中。较低的波峰意味着Regionserver正在将内存写入到HDFS上；较高的波峰意味着Regionserver不仅正在将内存刷入到HDFS，而且还在执行Compaction和Split两种操作。如果调整&#8220;hbase.hstore.compactionThreshold&#8221;的值为一个较大的数量，例如改成5，可以预见，在每两个高峰之间必然会等间隔地出现三次较低的波峰，并可预见到，高峰的高度将远超过上述值为3时的高峰高度（因为Compaction的工作更为艰巨）。由于region数量由少到多，而我们插入的Row的Rowkey是随机的，因此每一个region中的数据都会均匀地增加，同一段时间插入的数据将被分布到越来越多的region上，因此波峰之间的间隔时间也将会越来越长。</p><p>再次理解上述论述，我们可以推断出HBase的数据插入性能实际上应该被分为三种情况，即直线状态、低峰状态和高峰状态。在这三种情况下得到的性能数据才是最终HBase数据插入性能的真实描述。那么提供给用户的数据该采取哪一个呢？我认为直线状态由于其所占时间会较长，尤其在用户写入数据的速度也许并不是那么快的情况下，这个状态下得到的性能数据结果更应该提供给用户。</p></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352071.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-10 23:33 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/10/352071.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase的性能优化和相关测试</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/10/352069.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Fri, 10 Jun 2011 15:14:00 GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/10/352069.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/352069.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/10/352069.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/352069.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/352069.html</trackback:ping><description><![CDATA[<div><p style="padding-top: 0px; padding-right: 0px; padding-bottom: 15px; padding-left: 0px; margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; 
color: #333333; font-family: 'Trebuchet MS', Tahoma, Arial; font-size: 13px; line-height: 19px; ">HBase的写效率还是很高的，但其随机读取效率并不高。</p><p>可以采取一些优化措施来提高其性能，如：</p><ol><li>启用lzo压缩，见<a href="http://www.tech126.com/hadoop-lzo/" target="_blank">这里</a></li><li>增大hbase.regionserver.handler.count数为100</li><li>增大hfile.block.cache.size为0.4，提高cache大小</li><li>增大hbase.hstore.blockingStoreFiles为15</li><li>启用BloomFilter，在HBase 0.89中可以设置</li><li>Put时可以设置setAutoFlush为false，到一定数目后再flushCommits</li></ol><p>在14个Region Server的集群上，新建立一个lzo压缩表，测试的Put和Get的性能如下：</p><p>1. Put数据：</p><ul><li>单线程灌入1.4亿条数据，共花费50分钟，每秒能达到4万条，这个性能确实很好了，不过插入的value比较小，只有不到几十个字节</li><li>多线程put没有测试，因为单线程的效率已经相当高了</li></ul><p>2. Get数据：</p><p>在没有任何Block Cache、而且是Random Read的情况下：</p><ul><li>单线程平均每秒只能到250个左右</li><li>6个线程平均每秒能达到1100个左右</li><li>16个线程平均每秒能达到2500个左右</li></ul><p>有BlockCache（曾经get过对应的row，而且还在cache中）的情况下：</p><ul><li>单线程平均每秒能到3600个左右</li><li>6个线程平均每秒能达到1.2万个左右</li><li>16个线程平均每秒能达到2.5万个左右</li></ul></div><img src ="http://www.blogjava.net/ivanwan/aggbug/352069.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-10 23:14 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/10/352069.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HADOOP报错Incompatible namespaceIDs</title><link>http://www.blogjava.net/ivanwan/archive/2011/06/09/351981.html</link><dc:creator>ivaneeo</dc:creator><author>ivaneeo</author><pubDate>Thu, 09 Jun 2011 06:20:00
GMT</pubDate><guid>http://www.blogjava.net/ivanwan/archive/2011/06/09/351981.html</guid><wfw:comment>http://www.blogjava.net/ivanwan/comments/351981.html</wfw:comment><comments>http://www.blogjava.net/ivanwan/archive/2011/06/09/351981.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ivanwan/comments/commentRss/351981.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ivanwan/services/trackbacks/351981.html</trackback:ping><description><![CDATA[<div><p>First thing this morning I found that pushing data into HDFS with the -put command suddenly no longer worked; it reported a pile of errors, so I checked cluster status with bin/hadoop dfsadmin -report</p> <p>admin@adw1:/home/admin/joe.wangh/hadoop-0.19.2&gt;bin/hadoop dfsadmin -report<br /> Configured Capacity: 0 (0 KB)<br /> Present Capacity: 0 (0 KB)<br /> DFS Remaining: 0 (0 KB)<br /> DFS Used: 0 (0 KB)<br /> DFS Used%: ?%<br /> <br /> -------------------------------------------------<br /> Datanodes available: 0 (0 total, 0 dead)</p>  <p>Shut HADOOP down with bin/stop-all.sh</p>  <p>admin@adw1:/home/admin/joe.wangh/hadoop-0.19.2&gt;bin/stop-all.sh<br /> stopping jobtracker<br /> 172.16.197.192: stopping tasktracker<br /> 172.16.197.193: stopping tasktracker<br /> stopping namenode<br /> <span style="color: #ff0000;">172.16.197.193: no datanode to stop<br /> 172.16.197.192: no datanode to stop</span> <br /> 172.16.197.191: stopping secondarynamenode</p>  <p>See that? The datanodes were never actually running. Go to a DATANODE and check its log</p> <p>admin@adw2:/home/admin/joe.wangh/hadoop-0.19.2/logs&gt;vi hadoop-admin-datanode-adw2.hst.ali.dw.alidc.net.log<br /> <br /> ************************************************************/<br /> 2010-07-21 10:12:11,987 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: <span style="color: #ff0000;">java.io.IOException:  Incompatible namespaceIDs in  /home/admin/joe.wangh/hadoop/data/dfs.data.dir: namenode namespaceID =  898136669; datanode namespaceID = 2127444065</span> <br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:288)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.&lt;init&gt;(DataNode.java:206)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1239)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1194)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1202)<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1324)<br /> ......</p>  <p>The error message indicates that the <span style="color: #000000;">namespaceIDs do not match.</span> </p>  <p><span style="color: #000000;">Two workarounds are given below; I used the second one.</span> </p> <p>   </p> <p align="left"><strong><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">Workaround 1: Start from scratch </span> </strong> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). 
The crude workaround I have found is to: </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">1.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">stop the cluster </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">2.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">3.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">reformat the namenode (NOTE: all HDFS data is lost during this process!) 
</span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">4.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">restart the cluster </span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">When deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try. </span> </p> <p align="left"><a name="Workaround_2:_Updating_namespaceID_of_pr"></a> <strong><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">Workaround 2: Updating namespaceID of problematic datanodes </span> </strong> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. 
This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes: </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">1.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">stop the datanode </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">2.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">edit the value of namespaceID in &lt;dfs.data.dir&gt;/current/VERSION to match the value of the current namenode </span> </p> <p style="margin-left: 36pt; text-align: left; text-indent: -18pt;" align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">3.<span style="font: 7pt &quot;Times New Roman&quot;;">&nbsp;&nbsp;&nbsp;&nbsp; </span> </span> <span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">restart the datanode </span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop). 
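<p align="left">The three steps of Workaround 2 can be sketched as shell commands. This is a minimal sketch, not a tested procedure: the paths are the tutorial defaults mentioned above, and 898136669 is the namenode namespaceID from the log earlier in this post. To keep the block safe to run anywhere, it edits a scratch copy of VERSION rather than a real datanode's file.</p>

```shell
# Workaround 2 sketch: make a datanode's namespaceID match the namenode's.
# This demo edits a scratch copy of VERSION so it is safe to execute;
# on a real node, point DATA_DIR at <dfs.data.dir>/current instead,
# and stop the datanode before editing / restart it afterwards.
DATA_DIR=$(mktemp -d)    # stand-in for <dfs.data.dir>/current

# Recreate a VERSION file with the stale (datanode-side) namespaceID.
cat > "$DATA_DIR/VERSION" <<'EOF'
#contents of <dfs.data.dir>/current/VERSION
namespaceID=2127444065
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
EOF

NN_ID=898136669          # namespaceID the namenode reported in the log above
sed -i "s/^namespaceID=.*/namespaceID=${NN_ID}/" "$DATA_DIR/VERSION"
grep '^namespaceID=' "$DATA_DIR/VERSION"   # prints namespaceID=898136669
```

<p align="left">Only the namespaceID line changes; storageID and the other fields stay as they were, which is what makes this fix far less destructive than reformatting.</p>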
</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">If you wonder what the contents of VERSION look like, here's one of mine: </span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">#contents of &lt;dfs.data.dir&gt;/current/VERSION</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">namespaceID=393514426</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">storageID=DS-1706792599-10.10.10.1-50010-1204306713481</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">cTime=1215607609074</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">storageType=DATA_NODE</span> </p> <p align="left"><span style="font-size: 9pt; font-family: &quot;微软雅黑&quot;,&quot;sans-serif&quot;;">layoutVersion=-13</span> </p> <p align="left">&nbsp;</p> <p align="left">Cause: every namenode format generates a new namespaceID, but tmp/dfs/data still holds the ID from the previous format. Formatting clears the namenode's data without clearing the datanodes' data, which makes startup fail. The fix is to clear all directories under tmp before each format.</p></div><img src ="http://www.blogjava.net/ivanwan/aggbug/351981.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ivanwan/" target="_blank">ivaneeo</a> 2011-06-09 14:20 <a href="http://www.blogjava.net/ivanwan/archive/2011/06/09/351981.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item></channel></rss>