BlogJava - paulwong - Category: HBASE
http://www.blogjava.net/paulwong/category/52574.html

## simplehbase
paulwong · 2014-12-15 18:26

https://github.com/zhang-xzhi/simplehbase/
https://github.com/zhang-xzhi/simplehbase/wiki

Main features of simplehbase:

* Data type mapping: converts between Java types and HBase byte arrays.
* Simple operation wrappers: wraps HBase put, get, scan, etc. as plain Java method calls.
* HBase query wrapper: wraps HBase filters so that HBase can be queried in a SQL-like way.
* Dynamic query wrapper: similar to MyBatis; dynamic query statements can be configured in XML.
* insert/update support: built on top of HBase's checkAndPut.
* HBase multi-version support: interfaces for querying and mapping multi-version HBase data.
* HBase batch operation support.
* HBase native interface support.
* HTablePool management.
* HTable count and sum support.

## Testing multi-condition queries on HBase backed by Solr
paulwong · 2014-12-04 19:02

### Background

A telecom project uses HBase to store per-subscriber terminal detail records that must be queried interactively from the front-end pages. HBase has clear strengths, but by itself it only offers millisecond-level lookup by rowkey; it cannot handle combined queries over multiple fields. Several approaches to multi-condition queries on HBase exist, but they are either too complex or too slow, so this article only tests and validates a Solr-based approach.

### Principle

The idea is simple: index the rowkey together with every field used for filtering in Solr. A multi-condition Solr query quickly returns the rowkeys that match the filters, and those rowkeys are then used to fetch the corresponding rows from HBase directly.

Architecture diagram: http://static.oschina.net/uploads/img/201412/04175006_BbOr.png
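As a compact, hedged sketch of that two-step flow (the full, annotated listing appears in section III below), using the SolrJ 4.x and HBase 0.94 client APIs assumed throughout this article; the class name is made up here, and the Solr URL, table name and field names are simply the ones used later in the post:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrDocument;

public class SolrThenHBaseSketch { // illustrative class name, not from the article

    public static void main(String[] args) throws Exception {
        // Step 1: ask Solr for the rowkeys that match the filter conditions.
        SolrServer solr = new HttpSolrServer("http://192.168.1.10:8983/solr");
        SolrQuery query = new SolrQuery("time:201307 AND tetid:1"); // filter fields indexed in Solr
        query.setStart(0);
        query.setRows(20); // page size

        List<Get> gets = new ArrayList<Get>();
        for (SolrDocument doc : solr.query(query).getResults()) {
            gets.add(new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey"))));
        }

        // Step 2: fetch exactly those rows from HBase by rowkey.
        HTable table = new HTable(HBaseConfiguration.create(), "hb_app_m_user_te");
        Result[] rows = table.get(gets);
        System.out.println("rows fetched: " + rows.length);
        table.close();
    }
}
```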
### Test environment

* Solr 4.0.0, running in its bundled Jetty container, single node;
* hbase-0.94.2-cdh4.2.1, an HBase cluster of 10 Linux servers;
* 25.12 million rows with 172 fields each in HBase;
* 1 million of those HBase rows indexed in Solr.

### Test results

1. With 1 million rows indexed in Solr on 8 fields, a Solr query with up to 8 filter conditions returned the rowkeys of 51,316 matching rows in roughly 57 to 80 ms. Fetching all 51,316 rows (12 fields each) from HBase by those rowkeys took about 15 seconds.
2. Same data volume and filter conditions, but paging through Solr 20 rows at a time: Solr returned 20 rowkeys in 4 to 10 ms, and fetching the corresponding 20 rows (12 fields each) from HBase took about 6 ms.

The rest of this article walks through the environment setup and the related code.

#### I. Setting up Solr

Since the goal was only to try Solr out, it runs in its bundled Jetty rather than the Tomcat most people use; there is no Solr cluster, just a single server, and no parameter tuning was done.

1) Download Solr 4 from the Apache site (http://lucene.apache.org/solr/downloads.html); the package used here is "apache-solr-4.0.0.tgz".

2) Extract the package in the current directory:

```
tar -xvzf apache-solr-4.0.0.tgz
```
3) Edit Solr's configuration file schema.xml (located under /opt/apache-solr-4.0.0/example/solr/collection1/conf/) and add the fields that need to be indexed:

```xml
<field name="rowkey" type="string" indexed="true" stored="true" required="true"  multiValued="false" />
<field name="time"   type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="tebid"  type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="tetid"  type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="puid"   type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="mgcvid" type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="mtcvid" type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="smaid"  type="string" indexed="true" stored="true" required="false" multiValued="false" />
<field name="mtlkid" type="string" indexed="true" stored="true" required="false" multiValued="false" />
```

Another key change is the uniqueKey: here the HBase rowkey field is used as Solr's uniqueKey:

```xml
<uniqueKey>rowkey</uniqueKey>
```

* type is the index data type. Everything is declared as string here to keep malformed values from breaking indexing; normally it should match the real field type (for example int for integer fields), which benefits both indexing and search.
* indexed controls whether the field is indexed; fields that never participate in filtering should be set to false.
* stored controls whether the field's value is stored. Only fields whose values actually need to be returned should be true; in this scenario only rowkey needs to be retrieved, so only rowkey is stored and everything else is set to false, which avoids wasting storage.
* required marks the field as mandatory. If the source data may contain empty values for a field, this must be set to false or Solr will throw an exception.
* multiValued allows multiple values per field; it is usually false and should be set to true only when actually needed.

4) Use Solr's bundled example directory as the runtime; go to it and start the server:

```
cd /opt/apache-solr-4.0.0/example
java -jar ./start.jar
```

If it starts successfully, the admin page is reachable at http://192.168.1.10:8983/solr/

Screenshot: http://static.oschina.net/uploads/img/201412/04175007_0H1x.png

#### II. Reading the HBase source table and indexing it in Solr

One option is to read the data through the ordinary HBase client API and index it document by document. Its drawback is low throughput, only about 100 rows per second (multithreading might help):
```java
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrIndexer {

    public static void main(String[] args) throws IOException, SolrServerException {
        final Configuration conf;
        // The server runs in Solr's bundled jetty container, default port 8983
        HttpSolrServer solrServer = new HttpSolrServer("http://192.168.1.10:8983/solr");

        conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "hb_app_xxxxxx"); // HBase table name
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("d")); // column family of the HBase table
        scan.setCaching(500);
        scan.setCacheBlocks(false);
        ResultScanner ss = table.getScanner(scan);

        System.out.println("start ...");
        int i = 0;
        try {
            for (Result r : ss) {
                SolrInputDocument solrDoc = new SolrInputDocument();
                solrDoc.addField("rowkey", new String(r.getRow()));
                for (KeyValue kv : r.raw()) {
                    String fieldName = new String(kv.getQualifier());
                    String fieldValue = new String(kv.getValue());
                    if (fieldName.equalsIgnoreCase("time")
                            || fieldName.equalsIgnoreCase("tebid")
                            || fieldName.equalsIgnoreCase("tetid")
                            || fieldName.equalsIgnoreCase("puid")
                            || fieldName.equalsIgnoreCase("mgcvid")
                            || fieldName.equalsIgnoreCase("mtcvid")
                            || fieldName.equalsIgnoreCase("smaid")
                            || fieldName.equalsIgnoreCase("mtlkid")) {
                        solrDoc.addField(fieldName, fieldValue);
                    }
                }
                solrServer.add(solrDoc);
                solrServer.commit(true, true, true);
                i = i + 1;
                System.out.println("successfully processed " + i + " rows");
            }
            System.out.println("done!");
        } catch (IOException e) {
            System.out.println("error!");
            e.printStackTrace();
        } finally {
            ss.close();
            table.close();
        }
    }
}
```
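A large part of that roughly 100 rows/second ceiling comes from calling commit() after every single document. The following is a hedged sketch, not from the original article, of how the loop above could buffer documents and write them in batches with the same SolrJ API (add(Collection) plus a single commit()); the batch size of 1,000 is arbitrary, and java.util.List/ArrayList imports are assumed in addition to those above:

```java
// Sketch: replace the per-document add/commit in the loop above with batched writes.
List<SolrInputDocument> buffer = new ArrayList<SolrInputDocument>(1000);

for (Result r : ss) {
    SolrInputDocument solrDoc = new SolrInputDocument();
    solrDoc.addField("rowkey", new String(r.getRow()));
    // ... add the filtered fields exactly as in the loop above ...
    buffer.add(solrDoc);

    if (buffer.size() >= 1000) {   // flush every 1000 documents (illustrative size)
        solrServer.add(buffer);    // SolrJ accepts a whole collection in one request
        buffer.clear();
    }
}
if (!buffer.isEmpty()) {
    solrServer.add(buffer);
}
solrServer.commit();               // one commit at the end instead of one per row
```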
The other option uses HBase's MapReduce integration. Running distributed and in parallel it is far faster (10 million rows took only about 5 minutes), but that level of concurrency requires tuning the Solr server, otherwise it starts refusing requests:

```
Error: org.apache.solr.common.SolrException: Server at http://192.168.1.10:8983/solr returned non ok status:503, message:Service Unavailable
```

The MapReduce driver:
```java
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class SolrHBaseIndexer {

    private static void usage() {
        System.err.println("arguments: <config file path> <start row> <stop row>");
        System.exit(1);
    }

    private static Configuration conf;

    public static void main(String[] args) throws IOException,
            InterruptedException, ClassNotFoundException, URISyntaxException {
        if (args.length == 0 || args.length > 3) {
            usage();
        }

        createHBaseConfiguration(args[0]);
        ConfigProperties tutorialProperties = new ConfigProperties(args[0]);
        String tbName = tutorialProperties.getHBTbName();
        String tbFamily = tutorialProperties.getHBFamily();

        Job job = new Job(conf, "SolrHBaseIndexer");
        job.setJarByClass(SolrHBaseIndexer.class);

        Scan scan = new Scan();
        if (args.length == 3) {
            scan.setStartRow(Bytes.toBytes(args[1]));
            scan.setStopRow(Bytes.toBytes(args[2]));
        }

        scan.addFamily(Bytes.toBytes(tbFamily));
        scan.setCaching(500); // scanner caching improves throughput
        scan.setCacheBlocks(false);

        // create the map task
        TableMapReduceUtil.initTableMapperJob(tbName, scan,
                SolrHBaseIndexerMapper.class, null, null, job);

        // no job output is needed
        job.setOutputFormatClass(NullOutputFormat.class);
        // job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    /**
     * Reads the HBase settings from the given properties file and applies them.
     *
     * @param propsLocation path of the properties file
     */
    private static void createHBaseConfiguration(String propsLocation) {
        ConfigProperties tutorialProperties = new ConfigProperties(propsLocation);
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", tutorialProperties.getZKQuorum());
        conf.set("hbase.zookeeper.property.clientPort", tutorialProperties.getZKPort());
        conf.set("hbase.master", tutorialProperties.getHBMaster());
        conf.set("hbase.rootdir", tutorialProperties.getHBrootDir());
        conf.set("solr.server", tutorialProperties.getSolrServer());
    }
}
```
The corresponding Mapper:

```java
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrHBaseIndexerMapper extends TableMapper<Text, Text> {

    public void map(ImmutableBytesWritable key, Result hbaseResult,
            Context context) throws InterruptedException, IOException {

        Configuration conf = context.getConfiguration();

        HttpSolrServer solrServer = new HttpSolrServer(conf.get("solr.server"));
        solrServer.setDefaultMaxConnectionsPerHost(100);
        solrServer.setMaxTotalConnections(1000);
        solrServer.setSoTimeout(20000);
        solrServer.setConnectionTimeout(20000);

        SolrInputDocument solrDoc = new SolrInputDocument();
        try {
            solrDoc.addField("rowkey", new String(hbaseResult.getRow()));
            for (KeyValue rowQualifierAndValue : hbaseResult.list()) {
                String fieldName = new String(rowQualifierAndValue.getQualifier());
                String fieldValue = new String(rowQualifierAndValue.getValue());
                if (fieldName.equalsIgnoreCase("time")
                        || fieldName.equalsIgnoreCase("tebid")
                        || fieldName.equalsIgnoreCase("tetid")
                        || fieldName.equalsIgnoreCase("puid")
                        || fieldName.equalsIgnoreCase("mgcvid")
                        || fieldName.equalsIgnoreCase("mtcvid")
                        || fieldName.equalsIgnoreCase("smaid")
                        || fieldName.equalsIgnoreCase("mtlkid")) {
                    solrDoc.addField(fieldName, fieldValue);
                }
            }
            solrServer.add(solrDoc);
            solrServer.commit(true, true, true);
        } catch (SolrServerException e) {
            System.err.println("failed to update Solr index for rowkey: "
                    + new String(hbaseResult.getRow()));
        }
    }
}
```
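One plausible reason the Solr server gets overwhelmed (the 503 shown earlier) is that this mapper builds a new HttpSolrServer and issues a commit for every single row. As a hedged sketch rather than the article's own code, the same mapper could create the client once in setup() and commit once in cleanup(), using the standard Hadoop Mapper lifecycle methods; the field-filtering logic is elided and would stay exactly as above:

```java
// Sketch: reuse one Solr client per map task and commit once when the task ends.
public class SolrHBaseIndexerMapper extends TableMapper<Text, Text> {

    private HttpSolrServer solrServer;

    @Override
    protected void setup(Context context) {
        // one client per task instead of one per row
        solrServer = new HttpSolrServer(context.getConfiguration().get("solr.server"));
    }

    @Override
    public void map(ImmutableBytesWritable key, Result hbaseResult, Context context)
            throws InterruptedException, IOException {
        SolrInputDocument solrDoc = new SolrInputDocument();
        solrDoc.addField("rowkey", new String(hbaseResult.getRow()));
        // ... add the filtered fields exactly as in the mapper above ...
        try {
            solrServer.add(solrDoc);   // no commit here
        } catch (SolrServerException e) {
            System.err.println("failed to index rowkey: " + new String(hbaseResult.getRow()));
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        try {
            solrServer.commit();       // a single commit when the task finishes
        } catch (SolrServerException e) {
            throw new IOException(e);
        }
    }
}
```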
The helper class that reads the configuration file:

```java
package com.ultrapower.hbase.solrhbase;

import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class ConfigProperties {

    private static Properties props;
    private String HBASE_ZOOKEEPER_QUORUM;
    private String HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT;
    private String HBASE_MASTER;
    private String HBASE_ROOTDIR;
    private String DFS_NAME_DIR;
    private String DFS_DATA_DIR;
    private String FS_DEFAULT_NAME;
    private String SOLR_SERVER;        // Solr server address
    private String HBASE_TABLE_NAME;   // HBase table to index in Solr
    private String HBASE_TABLE_FAMILY; // column family of the HBase table

    public ConfigProperties(String propLocation) {
        props = new Properties();
        try {
            File file = new File(propLocation);
            System.out.println("loading configuration from: " + file.getAbsolutePath());
            FileReader is = new FileReader(file);
            props.load(is);

            HBASE_ZOOKEEPER_QUORUM = props.getProperty("HBASE_ZOOKEEPER_QUORUM");
            HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT = props.getProperty("HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT");
            HBASE_MASTER = props.getProperty("HBASE_MASTER");
            HBASE_ROOTDIR = props.getProperty("HBASE_ROOTDIR");
            DFS_NAME_DIR = props.getProperty("DFS_NAME_DIR");
            DFS_DATA_DIR = props.getProperty("DFS_DATA_DIR");
            FS_DEFAULT_NAME = props.getProperty("FS_DEFAULT_NAME");
            SOLR_SERVER = props.getProperty("SOLR_SERVER");
            HBASE_TABLE_NAME = props.getProperty("HBASE_TABLE_NAME");
            HBASE_TABLE_FAMILY = props.getProperty("HBASE_TABLE_FAMILY");
        } catch (IOException e) {
            throw new RuntimeException("failed to load the configuration file");
        } catch (NullPointerException e) {
            throw new RuntimeException("configuration file not found");
        }
    }

    public String getZKQuorum() {
        return HBASE_ZOOKEEPER_QUORUM;
    }

    public String getZKPort() {
        return HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT;
    }

    public String getHBMaster() {
        return HBASE_MASTER;
    }

    public String getHBrootDir() {
        return HBASE_ROOTDIR;
    }

    public String getDFSnameDir() {
        return DFS_NAME_DIR;
    }

    public String getDFSdataDir() {
        return DFS_DATA_DIR;
    }

    public String getFSdefaultName() {
        return FS_DEFAULT_NAME;
    }

    public String getSolrServer() {
        return SOLR_SERVER;
    }

    public String getHBTbName() {
        return HBASE_TABLE_NAME;
    }

    public String getHBFamily() {
        return HBASE_TABLE_FAMILY;
    }
}
```

The parameter file "config.properties":

```
HBASE_ZOOKEEPER_QUORUM=slave-1,slave-2,slave-3,slave-4,slave-5
HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT=2181
HBASE_MASTER=master-1:60000
HBASE_ROOTDIR=hdfs:///hbase
DFS_NAME_DIR=/opt/data/dfs/name
DFS_DATA_DIR=/opt/data/d0/dfs2/data
FS_DEFAULT_NAME=hdfs://192.168.1.10:9000
SOLR_SERVER=http://192.168.1.10:8983/solr
HBASE_TABLE_NAME=hb_app_m_user_te
HBASE_TABLE_FAMILY=d
```
#### III. Multi-condition queries of HBase data through Solr

The Solr index can be exercised directly from the web interface.

Query (note that the query string goes in the q parameter):

```
http://192.168.1.10:8983/solr/select?q=(time:201307 AND tetid:1 AND mgcvid:101 AND smaid:101 AND puid:102)
```

Screenshot: http://static.oschina.net/uploads/img/201412/04175007_Ayl0.png

Delete all indexed documents:

```
http://192.168.1.10:8983/solr/update/?stream.body=<delete><query>*:*</query></delete>&stream.contentType=text/xml;charset=utf-8&commit=true
```
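The same clean-up can also be issued from SolrJ instead of the raw update URL; a small hedged sketch, not in the original article, using the deleteByQuery call that ships with SolrJ 4.x, with the same server URL and imports as the listings above:

```java
// Sketch: equivalent of the delete-all URL above, via the SolrJ client.
SolrServer server = new HttpSolrServer("http://192.168.1.10:8983/solr");
server.deleteByQuery("*:*"); // remove every document from the index
server.commit();
```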
Querying HBase data from a Java client combined with Solr:

```java
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

public class QueryData {

    public static void main(String[] args) throws SolrServerException, IOException {
        final Configuration conf;
        conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "hb_app_m_user_te");
        Get get = null;
        List<Get> list = new ArrayList<Get>();

        String url = "http://192.168.1.10:8983/solr";
        SolrServer server = new HttpSolrServer(url);
        SolrQuery query = new SolrQuery("time:201307 AND tetid:1 AND mgcvid:101 AND smaid:101 AND puid:102");
        query.setStart(0); // first row to return, used for paging
        query.setRows(10); // number of rows to return, used for paging
        QueryResponse response = server.query(query);
        SolrDocumentList docs = response.getResults();
        System.out.println("documents found: " + docs.getNumFound()); // the total count comes for free
        System.out.println("query time: " + response.getQTime());
        for (SolrDocument doc : docs) {
            get = new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey")));
            list.add(get);
        }

        Result[] res = table.get(list);

        byte[] bt1 = null;
        byte[] bt2 = null;
        byte[] bt3 = null;
        byte[] bt4 = null;
        String str1 = null;
        String str2 = null;
        String str3 = null;
        String str4 = null;
        for (Result rs : res) {
            bt1 = rs.getValue("d".getBytes(), "3mpon".getBytes());
            bt2 = rs.getValue("d".getBytes(), "3mponid".getBytes());
            bt3 = rs.getValue("d".getBytes(), "amarpu".getBytes());
            bt4 = rs.getValue("d".getBytes(), "amarpuid".getBytes());
            // guard against null or empty values; calling new String(null) would throw
            if (bt1 != null && bt1.length > 0) { str1 = new String(bt1); } else { str1 = "no data"; }
            if (bt2 != null && bt2.length > 0) { str2 = new String(bt2); } else { str2 = "no data"; }
            if (bt3 != null && bt3.length > 0) { str3 = new String(bt3); } else { str3 = "no data"; }
            if (bt4 != null && bt4.length > 0) { str4 = new String(bt4); } else { str4 = "no data"; }
            System.out.print(new String(rs.getRow()) + " ");
            System.out.print(str1 + "|");
            System.out.print(str2 + "|");
            System.out.print(str3 + "|");
            System.out.println(str4 + "|");
        }
        table.close();
    }
}
```
### Summary

The tests show that a Solr index supports multi-condition queries against HBase very well, and it also solves two of HBase's pain points: paged queries and total result counts.
Most real-world queries are paged and return little data per page, so this approach comfortably delivers millisecond-level responses to the front-end pages. Even for bulk data exchange such as exports, throughput is still good: one hundred thousand rows took only about 10 seconds.

If Solr is adopted for real, both the Solr and HBase sides can be tuned further, for example by building a Solr cluster or even using SolrCloud as a Hadoop-based distributed indexing service.

In short, HBase's inherent inability to filter on multiple conditions can be compensated quite well with Solr; it is no surprise that internet companies such as Newegg Tech, Gome and Suning, as well as many game companies, use Solr to support fast queries.

----end

## Simplehbase
paulwong · 2014-07-15 08:35

https://github.com/zhang-xzhi/simplehbase/
https://github.com/zhang-xzhi/simplehbase/wiki

### simplehbase overview

simplehbase is a lightweight middleware layer between Java and HBase. Its main features:

* Data type mapping: converts between Java types and HBase byte arrays.
* Simple operation wrappers: wraps HBase put, get, scan, etc. as plain Java method calls.
* HBase query wrapper: wraps HBase filters so that HBase can be queried in a SQL-like way.
* Dynamic query wrapper: similar to MyBatis; dynamic query statements can be configured in XML.
* insert/update support: built on top of HBase's checkAndPut.
* HBase multi-version support: interfaces for querying and mapping multi-version HBase data.
* HBase native interface support.

### v0.9

New:

HTable can now be flushed on a timer. The main scenario is batched writes whose flush runs at a configurable interval, so batch throughput is preserved while still giving some real-time guarantee.

Users can supply their own htablePoolService, so several HTables can share one thread pool.

intelligentScanSize: the scan caching size can be derived from the query's limit value.

### v0.8

New batch-operation interfaces:

```java
public <T> void putObjectList(List<PutRequest<T>> putRequestList);
public void deleteObjectList(List<RowKey> rowKeyList, Class<?> type);
public <T> void putObjectListMV(List<PutRequest<T>> putRequests, long timestamp);
public <T> void putObjectListMV(List<PutRequest<T>> putRequests, Date timestamp);
public <T> void putObjectListMV(List<PutRequest<T>> putRequestList);
public void deleteObjectMV(RowKey rowKey, Class<?> type, long timeStamp);
public void deleteObjectMV(RowKey rowKey, Class<?> type, Date timeStamp);
public void deleteObjectListMV(List<RowKey> rowKeyList, Class<?> type, long timeStamp);
public void deleteObjectListMV(List<RowKey> rowKeyList, Class<?> type, Date timeStamp);
public void deleteObjectListMV(List<DeleteRequest> deleteRequestList, Class<?> type);
```

New Util method (used for prefix queries):

```java
public static RowKey getEndRowKeyOfPrefix(RowKey prefixRowKey);
```

Performance improvement: the implementation of get was switched from scan back to a plain get.

### v0.7

New: queries can return the main record together with the associated RowKey.
GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2014/07/05/415490.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/415490.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2014/07/05/415490.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/415490.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/415490.html</trackback:ping><description><![CDATA[<br /><a href="http://www.cnblogs.com/xguo/p/3298956.html" target="_blank">http://www.cnblogs.com/xguo/p/3298956.html</a><img src ="http://www.blogjava.net/paulwong/aggbug/415490.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2014-07-05 23:14 <a href="http://www.blogjava.net/paulwong/archive/2014/07/05/415490.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase、Redis中关于“长事务”(Long Transaction)的一点讨论</title><link>http://www.blogjava.net/paulwong/archive/2013/08/24/403276.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 24 Aug 2013 14:39:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/24/403276.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403276.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/24/403276.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403276.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403276.html</trackback:ping><description><![CDATA[<div style="margin: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714; color: #444444; font-family: 'Open Sans', Helvetica, Arial, sans-serif; background-color: #ffffff;"><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">首先解释下标题，可能命名不是那么严谨吧，大致的定义如下：</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">sometimes you are in a situation where you want to read a record, check what is in it, and depending on that update the record. 
The problem is that between the time you read a row and perform the update, someone else might have updated the row, so your update might be based on outdated information.</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">摘要一下：进程A读取了某行R，进行时间较长的计算操作，在这个计算过程中B对行R进行了更改。A计算完毕后，若直接写入，会覆盖B的修改结果。此时应令A写入失败。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">以下的讨论整理自下述两个页面，表示感谢！</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;"><a href="http://www.ngdata.com/hbase-row-locks/" style="margin: 0px; padding: 0px; border: 0px; vertical-align: baseline; outline: none; color: #21759b;">http://www.ngdata.com/hbase-row-locks/</a></p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;"><a href="http://redis.io/topics/transactions" style="margin: 0px; padding: 0px; border: 0px; vertical-align: baseline; outline: none; color: #21759b;">http://redis.io/topics/transactions</a></p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">一个最简单、直接的思路是：Transaction + Row Lock。类似于传统DBMS的思路：首先开启行锁，新建一个Transaction，随后进行各种操作，最后commit，最最后解除行锁。看似很简单，也没什么Bug，但注意，若计算时间较长，整个DB就挂起了，不能执行任何操作。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">BigTable的Paper中，对这类问题进行了讨论。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">总体来说解决思路有三：</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">1、Rowlock，但是对于HBase来说，RegionLock更成熟。因为RowLock会长时间（从Transction开始到更新）占用一个线程。当并发量很大的时候，系统会挂掉。。。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">2、ICV即HBase的incrementColumnValue()方法。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">3、CAS即HBase的checkAndPut方法：在Put之前，先检查某个cell的值是否和value一样，一样再Put。注意，这里检查条件的Cell和要Put的Cell可以是不同的column，甚至是不同的row。。。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">综上在HBASE中，使用上述CAS方法是较好的解决方案。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">上面说了HBase，再来看一个轻量级的Redis：</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">Redis也支持事务，具体见：<a href="http://redis.io/topics/transactions" style="margin: 0px; padding: 0px; border: 0px; vertical-align: baseline; outline: none; color: #21759b;">http://redis.io/topics/transactions</a></p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">通过MULTI开始一个事务，EXEC执行一个事务。在两者之间可以&#8220;执行&#8221;多个命令，但并未被实际执行，而是被Queue起来，直到EXEC再一起执行。Redis保证：在一个事务EXEC的过程中，不会处理其他任何Client的请求（会被挂起）。注意这里是EXEC锁，而不是整个MULTI锁。所以并发性能还是有保障的。</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">为了支持Paper中CAS方案，Redis提供了WATCH命令：</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">So 
what is WATCH really about? It is a command that will make the EXEC conditional: we are asking Redis to perform the transaction only if no other client modified any of the WATCHed keys. Otherwise the transaction is not entered at all.</p><p style="margin: 0px 0px 1.714285714rem; padding: 0px; border: 0px; vertical-align: baseline; line-height: 1.714285714;">已经很显然了，更多具体的，读上述网页的文档吧。</p></div><img src ="http://www.blogjava.net/paulwong/aggbug/403276.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-24 22:39 <a href="http://www.blogjava.net/paulwong/archive/2013/08/24/403276.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>大数据平台架构设计资源</title><link>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sun, 18 Aug 2013 10:27:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403001.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403001.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403001.html</trackback:ping><description><![CDATA[!!!基于Hadoop的大数据平台实施记&#8212;&#8212;整体架构设计<br /><a href="http://blog.csdn.net/jacktan/article/details/9200979" target="_blank">http://blog.csdn.net/jacktan/article/details/9200979</a><br /><br /><br /><br /><br /><br /><br /><br /><img src ="http://www.blogjava.net/paulwong/aggbug/403001.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-18 18:27 <a href="http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>How to install Hadoop cluster(2 node cluster) and Hbase on Vmware Workstation. It also includes installing Pig and Hive in the appendix</title><link>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 17 Aug 2013 14:23:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402982.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402982.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402982.html</trackback:ping><description><![CDATA[By Tzu-Cheng Chuang 1-28-2011<br /><br /><br />Requires: Ubuntu10.04, hadoop0.20.2, zookeeper 3.3.2 HBase0.90.0<br />1. Download Ubuntu 10.04 desktop 32 bit from Ubuntu website.<br /><br />2. Install Ubuntu 10.04 with username: hadoop, password: password,&nbsp; disk size: 20GB, memory: 2048MB, 1 processor, 2 cores<br /><br />3. Install build-essential (for GNU C, C++ compiler)&nbsp;&nbsp;&nbsp; $ sudo apt-get install build-essential <br /><br />4. 
Install sun-jave-6-jdk<br />&nbsp;&nbsp;&nbsp; (1) Add the Canonical Partner Repository to your apt repositories<br />&nbsp;&nbsp;&nbsp; $ sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Update the source list<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get update<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Install sun-java-6-jdk and make sure Sun&#8217;s java is the default jvm<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get install sun-java6-jdk<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Set environment variable by modifying ~/.bashrc file, put the following two lines in the end of the file<br />&nbsp;&nbsp;&nbsp; export JAVA_HOME=/usr/lib/jvm/java-6-sun<br />&nbsp; &nbsp; export PATH=$PATH:$JAVA_HOME/bin&nbsp;<br /><br /> 5. Configure SSH server so that ssh to localhost doesn&#8217;t need a passphrase <br />&nbsp;&nbsp;&nbsp; (1) Install openssh server<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get install openssh-server<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Generate RSA pair key<br />&nbsp;&nbsp;&nbsp; $ ssh-keygen &#8211;t ras &#8211;P ""<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Enable SSH access to local machine<br />&nbsp;&nbsp;&nbsp; $ cat ~/.ssh/id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys <br /><br />6. Disable IPv6 by&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; modifying&nbsp; /etc/sysctl.conf file, put the following two lines in the end of the file<br /> #disable <br />ipv6 net.ipv6.conf.all.disable_ipv6 = 1 <br />net.ipv6.conf.default.disable_ipv6 = 1 <br />net.ipv6.conf.lo.disable_ipv6 = 1 <br /><br />7. Install hadoop<br />&nbsp;&nbsp;&nbsp; (1) Download hadoop-0.20.2.tar.gz(stable release on 1/25/2011)&nbsp; from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract hadoop archive file to /usr/local/&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (3) Make symbolic link&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (4) Modify /usr/local/hadoop/conf/hadoop-env.sh&nbsp;&nbsp;&nbsp; <br />Change from # The java implementation to use. Required. # export JAVA_HOME=/usr/lib/j2sdk1.5-sun To # The java implementation to use. Required. export JAVA_HOME=/usr/lib/jvm/java-6-sun<br />&nbsp;&nbsp;&nbsp;&nbsp; (5)Create /usr/local/hadoop-datastore folder&nbsp;&nbsp;&nbsp; <br />$ sudo mkdir /usr/local/hadoop-datastore<br /> $ sudo chown hadoop:hadoop /usr/local/hadoop-datastore<br /> $ sudo chmod 750 /usr/local/hadoop-datastore<br />&nbsp;&nbsp;&nbsp;&nbsp; (6)Put the following code in /usr/local/hadoop/conf/core-site.xml&nbsp;&nbsp;&nbsp; <br />hadoop.tmp.dir/usr/local/hadoop/tmp/dir/hadoop-${user.name}A base for other temporary directories.fs.default.namehdfs://master:54310The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.<br />&nbsp;&nbsp;&nbsp; (7) Put the following code in /usr/local/hadoop/conf/mapred-site.xml&nbsp;&nbsp;&nbsp; <br />mapred.job.trackermaster:54311The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.<br />&nbsp;&nbsp;&nbsp;&nbsp; (8) Put the following code in /usr/local/hadoop/conf/hdfs-site.xml&nbsp;&nbsp;&nbsp; <br />dfs.replication1Default block replication. The actual number of replications can be specified when the file is created. 
The default is used if replication is not specified in create time.<br />&nbsp;&nbsp;&nbsp;&nbsp; (9) Add hadoop to environment variable by modifying ~/.bashrc&nbsp;&nbsp;&nbsp; <br />export HADOOP_HOME=/usr/local/hadoop export PATH=$HADOOP_HOME/bin:$PATH <br /><br />8. Restart Ubuntu Linux<br /><br />9. Copy this virtual machine to another folder. At least we have 2 copies of Ubuntu linux<br /><br />10. Modify /etc/hosts on both Linux Virtual Image machines, add in the following lines in the file. The IP address depends on each machine. We can use (ifconfig) to find out IP address.<br /> # /etc/hosts (for master AND slave) 192.168.0.1 master 192.168.0.2 slave&nbsp;&nbsp;&nbsp;&nbsp; Modify the following line, because it might cause Hbase to find out wrong ip.&nbsp;&nbsp;&nbsp; <br />192.168.0.1 ubuntu <br /><br />11. Check hadoop user access on both machines.<br />The hadoop user on the master (aka hadoop@master) must be able to connect a) to its own user account on the master &#8211; i.e. ssh master in this context and not necessarily ssh localhost &#8211; and b) to the hadoop user account on the slave (aka hadoop@slave)&nbsp; via a password-less SSH login. On both machines, make sure each one can connect to master, slave without typing passwords.<br /><br />12. Cluster configuration <br />&nbsp;&nbsp;&nbsp; (1) Modify /usr/local/hadoop/conf/masters<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; master<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Modify /usr/local/hadoop/conf/slaves<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; master slave<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Change &#8220;localhost&#8221; to &#8220;master&#8221; in /usr/local/conf/hadoop/conf/core-site.xml and /usr/local/hadoop/conf/mapred-site.xml<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (4) Change dfs.replication to &#8220;1&#8221; in /usr/local/conf/hadoop/conf/hdfs-site.xml<br />&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; <br /><br />13. Format the namenode only once and only on master machine <br />$ /usr/local/hadoop/bin/hadoop namenode &#8211;format <br /><br />14. Later on, start the multi-node cluster by typing following code only on master. So far, please don&#8217;t start hadoop yet. <br />$ /usr/local/hadoop/bin/start-dfs.sh $ /usr/local/hadoop/bin/start-mapred.sh <br /><br />15. Install zookeeper only on master node <br />&nbsp;&nbsp;&nbsp; (1) download zookeeper-3.3.2.tar.gz from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract&nbsp; zookeeper-3.3.2.tar.gz&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf zookeeper-3-3.2.tar.gz<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Move folder zookeeper-3.3.2 to /home/hadoop/ and create a symbloink link<br />&nbsp;&nbsp;&nbsp; $ mv zookeeper-3.3.2 /home/hadoop/ ; ln &#8211;s /home/hadoop/zookeeper-3.3.2 /home/hadoop/zookeeper<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) copy conf/zoo_sample.cfg to conf/zoo.cfg<br />&nbsp;&nbsp;&nbsp; $ cp conf/zoo_sample.cfg confg/zoo.cfg<br />&nbsp;&nbsp;&nbsp;&nbsp; (5) Modify conf/zoo.cfg&nbsp;&nbsp;&nbsp; dataDir=/home/hadoop/zookeeper/snapshot <br /><br />16. 
Install Hbase on both master and slave nodes, configure it as fully-distributed <br />&nbsp;&nbsp;&nbsp; (1) Download hbase-0.90.0.tar.gz from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract&nbsp; hbase-0.90.0.tar.gz&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf hbase-0.90.0.tar.gz<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Move folder hbase-0.90.0 to /home/hadoop/ and create a symbloink link&nbsp;&nbsp;&nbsp; $ mv hbase-0.90.0 /home/hadoop/ ; ln &#8211;s /home/hadoop/hbase-0.90.0 /home/hadoop/hbase<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Edit /home/hadoop/hbase/conf/hbase-site.xml, put the following in between and hbase.rootdirhdfs://master:54310/hbase The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR hbase.cluster.distributedtrueThe mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh) hbase.zookeeper.quorummasterComma separated list of servers in the ZooKeeper Quorum. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.<br />&nbsp;&nbsp;&nbsp;&nbsp; (5) modify environment variables in /home/hadoop/hbase/conf/hbase-env.sh<br />&nbsp;&nbsp;&nbsp; export JAVA_HOME=/usr/lib/jvm/java-6-sun/<br /> export HBASE_IDENT_STRING=$HOSTNAME<br /> export HBASE_MANAGES_ZK=false<br />&nbsp;&nbsp;&nbsp;&nbsp; (6)Overwrite /home/hadoop/hbase/conf/regionservers<br />&nbsp; on both machines&nbsp;&nbsp;&nbsp; master slave<br />&nbsp;&nbsp;&nbsp;&nbsp; (7)copy /usr/local/hadoop-0.20.2/haoop-0.20.2-core.jar to /home/hadoop/hbase/lib/&nbsp; on both machines.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; This is very important to fix version difference issue. Pay attention to its ownership and mode(755).&nbsp;&nbsp;&nbsp; <br /><br />17. Start zookeeper. It seems the zookeeper bundled with Hbase is not set up correctly. <br />$ /home/hadoop/zookeeper/bin/zkServer.sh start&nbsp;&nbsp;&nbsp;&nbsp; (Optional)We can test if zookeeper is running correctly by&nbsp; typing&nbsp;&nbsp;&nbsp;&nbsp; $ /home/hadoop/zookeeper/bin/zkCli.sh &#8211;server 127.0.0.1:2181 <br /><br />18. Start hadoop cluster <br />$ /usr/local/hadoop/bin/start-dfs.sh $ /usr/local/hadoop/bin/start-mapred.sh <br /><br />19. Start Hbase<br /> $ /home/hadoop/hbase/bin/start-hbase.sh <br /><br />20. Use Hbase shell<br /> $ /home/hadoop/hbase/bin/hbase shell&nbsp;&nbsp;&nbsp;&nbsp; Check if hbase is running smoothly<br />&nbsp;&nbsp;&nbsp; Open your browser, and type in the following.<br />&nbsp;&nbsp;&nbsp; http://localhost:60010&nbsp;&nbsp;&nbsp; <br /><br /><br />21. 
Later on, stop the multi-node cluster by typing following code only on master <br />&nbsp;&nbsp;&nbsp; (1) Stop Hbase&nbsp;&nbsp;&nbsp; $ /home/hadoop/hbase/bin/stop-hbase.sh<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Stop hadoop file system (HDFS)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />$ /usr/local/hadoop/bin/stop-mapred.sh <br />$ /usr/local/hadoop/bin/stop-dfs.sh<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Stop zookeeper&nbsp;&nbsp;&nbsp;&nbsp; <br />$ /home/hadoop/zookeeper/bin/zkServer.sh stop <br /><br />Reference<br />http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/<br />http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/<br />http://wiki.apache.org/hadoop/Hbase/10Minutes<br />http://hbase.apache.org/book/quickstart.html<br />http://alans.se/blog/2010/hadoop-hbase-cygwin-windows-7-x64/<br /><br />Author<br />Tzu-Cheng Chuang <br /><br /><br />Appendix- Install Pig and Hive<br />1. Install Pig 0.8.0 on this cluster <br />&nbsp;&nbsp;&nbsp; (1) Download pig-0.8.0.tar.gz from Apache pig project website.&nbsp; Then extract the file and move it to /home/hadoop/&nbsp;&nbsp;&nbsp; <br />$ tar &#8211;xzf pig-0.8.0.tar.gz ; mv pig-0.8.0 /home/hadoop/<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Make symbolink link under pig-0.8.0/conf/&nbsp;&nbsp;&nbsp; <br />$ ln -s /usr/local/hadoop/conf/core-site.xml /home/hadoop/pig-0.8.0/conf/core-site.xml <br />$ ln -s /usr/local/hadoop/conf/mapred-site.xml /home/hadoop/pig-0.8.0/conf/mapred-site.xml <br />$ ln -s /usr/local/hadoop/conf/hdfs-site.xml /home/hadoop/pig-0.8.0/conf/hdfs-site.xml<br />&nbsp;&nbsp;&nbsp;&nbsp; 3) Start pig in map-reduce mode: $ /home/hadoop/pig-0.8.0/bin/pig<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Exit pig from grunt&gt;&nbsp;&nbsp;&nbsp; quit <br /><br />2. Install Hive on this cluster <br />&nbsp;&nbsp;&nbsp; (1) Download hive-0.6.0.tar.gz from Apache hive project website, and then extract the file and move it to /home/hadoop/&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf hive-0.6.0.tar.gz ; mv hive-0.6.0 ~/<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Modify java heap size in hive-0.6.0/bin/ext/execHiveCmd.sh&nbsp; Change 4096 to 1024&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (3) Create /tmp and /user/hive/warehouse and set them chmod g+w in HDFS before a table can be created in Hive&nbsp;&nbsp;&nbsp; $ hadoop fs &#8211;mkdir /tmp $ hadoop fs &#8211;mkdir /user/hive/warehouse $ hadoop fs &#8211;chmod g+w /tmp $ hadoop fs &#8211;chmod g+w /user/hive/warehouse<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) start Hive&nbsp;&nbsp;&nbsp;&nbsp; $ /home/hadoop/hive-0.6.0/bin/hive <br /><br />&nbsp;&nbsp;&nbsp;&nbsp; 3. 
(Optional)Load data by using Hive <br />&nbsp;&nbsp;&nbsp; Create a file /home/hadoop/customer.txt&nbsp;&nbsp;&nbsp; 1, Kevin 2, David 3, Brian 4, Jane 5, Alice&nbsp;&nbsp;&nbsp;&nbsp; After hive shell is started, type in&nbsp;&nbsp;&nbsp; &gt; CREATE TABLE IF NOT EXISTS customer(id INT, name STRING) &gt; ROW FORMAT delimited fields terminated by ',' &gt; STORED AS TEXTFILE; &gt;LOAD DATA INPATH '/home/hadoop/customer.txt' OVERWRITE INTO TABLE customer; &gt;SELECT customer.id, customer.name from customer;<br /><br /><a href="http://chuangtc.info/ParallelComputing/SetUpHadoopClusterOnVmwareWorkstation.htm" target="_blank">http://chuangtc.info/ParallelComputing/SetUpHadoopClusterOnVmwareWorkstation.htm</a><img src ="http://www.blogjava.net/paulwong/aggbug/402982.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-17 22:23 <a href="http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE界面工具</title><link>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 14 Aug 2013 01:51:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402775.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402775.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402775.html</trackback:ping><description><![CDATA[
hbaseexplorer<br />下载此0.6的WAR包时，要将lib下的jasper-runtime-5.5.23.jar和jasper-compiler-5.5.23.jar删掉，否则会报错<br /><a href="http://sourceforge.net/projects/hbaseexplorer/?source=dlp" target="_blank">http://sourceforge.net/projects/hbaseexplorer/?source=dlp</a><br /><br />HBaseXplorer<br /><a href="https://github.com/bit-ware/HBaseXplorer/downloads" target="_blank">https://github.com/bit-ware/HBaseXplorer/downloads</a><br /><br />HBase Manager<br /><a href="http://sourceforge.net/projects/hbasemanagergui/" target="_blank">http://sourceforge.net/projects/hbasemanagergui/</a> 
<img src ="http://www.blogjava.net/paulwong/aggbug/402775.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-14 09:51 <a href="http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Kettle - HADOOP数据转换工具</title><link>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 01 Aug 2013 09:21:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402269.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402269.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402269.html</trackback:ping><description><![CDATA[ETL（Extract-Transform-Load的缩写，即数据抽取、转换、装载的过程），对于企业或行业应用来说，我们经常会遇到各种数据的处理，转换，迁移，所以了解并掌握一种etl工具的使用，必不可少，这里我介绍一个我在工作中使用了3年左右的ETL工具Kettle,本着好东西不独享的想法，跟大家分享碰撞交流一下！在使用中我感觉这个工具真的很强大，支持图形化的GUI设计界面，然后可以以工作流的形式流转，在做一些简单或复杂的数据抽取、质量检测、数据清洗、数据转换、数据过滤等方面有着比较稳定的表现，其中最主要的我们通过熟练的应用它，减少了非常多的研发工作量，提高了我们的工作效率，不过对于我这个.net研发者来说唯一的遗憾就是这个工具是Java编写的。<br /><br /><a href="http://www.cnblogs.com/limengqiang/archive/2013/01/16/KettleApply1.html" target="_blank">http://www.cnblogs.com/limengqiang/archive/2013/01/16/KettleApply1.html</a><img src ="http://www.blogjava.net/paulwong/aggbug/402269.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-01 17:21 <a href="http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>一网打尽13款开源Java大数据工具</title><link>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 03 May 2013 01:05:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/398700.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/398700.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/398700.html</trackback:ping><description><![CDATA[<p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>下面将介绍大数据领域支持Java的主流开源工具</strong>：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce391277b5.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce391277b5.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: 
none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>1.	HDFS</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">HDFS是Hadoop应用程序中主要的分布式储存系统， HDFS集群包含了一个NameNode（主节点），这个节点负责管理所有文件系统的元数据及存储了真实数据的DataNode（数据节点，可以有很多）。HDFS针对海量数据所设计，所以相比传统文件系统在大批量小文件上的优化，HDFS优化的则是对小批量大型文件的访问和存储。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"></a><a href="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>2.	MapReduce</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Hadoop MapReduce是一个软件框架，用以轻松编写处理海量（TB级）数据的并行应用程序，以可靠和容错的方式连接<span style="line-height: 1.45em;">大型集群中</span><span style="line-height: 1.45em;">上万个节点（商用硬件）。</span></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce3ee64519.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce3ee64519.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>3.	HBase</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache HBase是Hadoop数据库，一个分布式、可扩展的大数据存储。它提供了大数据集上随机和实时的读/写访问，并针对了商用服务器集群上的大型表格做出优化&#8212;&#8212;上百亿行，上千万列。其核心是Google Bigtable论文的开源实现，分布式列式存储。就像Bigtable利用GFS（Google File System）提供的分布式数据存储一样，它是Apache Hadoop在HDFS基础上提供的一个类Bigatable。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce413366c7.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce413366c7.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>4.	
Cassandra</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Cassandra是一个高性能、可线性扩展、高有效性数据库，可以运行在商用硬件或云基础设施上打造完美的任务关键性数据平台。在横跨数据中心的复制中，Cassandra同类最佳，为用户提供更低的延时以及更可靠的灾难备份。通过log-structured update、反规范化和物化视图的强支持以及强大的内置缓存，Cassandra的数据模型提供了方便的二级索引（column indexe）。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4611885c.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4611885c.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>5.	Hive</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Hive是Hadoop的一个数据仓库系统，促进了数据的综述（将结构化的数据文件映射为一张数据库表）、即席查询以及存储在Hadoop兼容系统中的大型数据集分析。Hive提供完整的SQL查询功能&#8212;&#8212;HiveQL语言，同时当使用这个语言表达一个<span style="line-height: 1.45em;">逻辑</span><span style="line-height: 1.45em;">变得低效和繁琐</span><span style="line-height: 1.45em;">时，HiveQL还允许传统的Map/Reduce程序员使用自己定制的Mapper和Reducer。</span></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce470085ed.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce470085ed.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>6.	Pig</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Pig是一个用于大型数据集分析的平台，它包含了一个用于数据分析应用的高级语言以及评估这些应用的基础设施。Pig应用的闪光特性在于它们的结构经得起大量的并行，也就是说让它们支撑起非常大的数据集。Pig的基础设施层包含了产生Map-Reduce任务的编译器。Pig的语言层当前包含了一个原生语言&#8212;&#8212;Pig Latin，开发的初衷是易于编程和保证可扩展性。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce47b8e077.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce47b8e077.jpg" border="0" alt="" style="vertical-align: middle; border: none; width: 99px; height: 99px; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>7.	
Chukwa</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Chukwa是个开源的数据收集系统，用以监视大型分布系统。建立于HDFS和Map/Reduce框架之上，继承了Hadoop的可扩展性和稳定性。Chukwa同样包含了一个灵活和强大的工具包，用以显示、监视和分析结果，以保证数据的使用达到最佳效果。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4870b072.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4870b072.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>8.	Ambari</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Ambari是一个基于web的工具，用于配置、管理和监视Apache Hadoop集群，支持Hadoop HDFS,、Hadoop MapReduce、Hive、HCatalog,、HBase、ZooKeeper、Oozie、Pig和Sqoop。Ambari同样还提供了集群状况仪表盘，比如heatmaps和查看MapReduce、Pig、Hive应用程序的能力，以友好的用户界面对它们的性能特性进行诊断。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce49282930.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce49282930.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>9.	ZooKeeper</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache ZooKeeper是一个针对大型分布式系统的可靠协调系统，提供的功能包括：配置维护、命名服务、分布式同步、组服务等。ZooKeeper的目标就是封装好复杂易出错的关键服务，将简单易用的接口和性能高效、功能稳定的系统提供给用户。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce49e31e19.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce49e31e19.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>10.	
Sqoop</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Sqoop是一个用来将Hadoop和关系型数据库中的数据相互转移的工具，可以将一个关系型数据库中数据导入Hadoop的HDFS中，也可以将HDFS中数据导入关系型数据库中。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4b0d3c61.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4b0d3c61.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>11.	Oozie</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Oozie是一个可扩展、可靠及可扩充的工作流调度系统，用以管理Hadoop作业。Oozie Workflow作业是活动的Directed Acyclical Graphs（DAGs）。Oozie Coordinator作业是由周期性的Oozie Workflow作业触发，周期一般决定于时间（频率）和数据可用性。Oozie与余下的Hadoop堆栈结合使用，开箱即用的支持多种类型Hadoop作业（比如：Java map-reduce、Streaming map-reduce、Pig、 Hive、Sqoop和Distcp）以及其它系统作业（比如Java程序和Shell脚本）。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4bdedb23.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4bdedb23.jpg" border="0" alt="" style="vertical-align: middle; border: none; width: 100px; height: 100px; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>12.	
Mahout</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Mahout是个可扩展的机器学习和数据挖掘库，当前Mahout支持主要的4个用例：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"></p><ul style="margin: 0px 0px 1em 20px; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">推荐挖掘：搜集用户动作并以此给用户推荐可能喜欢的事物。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">聚集：收集文件并进行相关文件分组。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">分类：从现有的分类文档中学习，寻找文档中的相似特征，并为无标签的文档进行正确的归类。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">频繁项集挖掘：将一组项分组，并识别哪些个别项会经常一起出现。</span></li></ul><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4cf93346.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4cf93346.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>13.	
HCatalog</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache HCatalog是Hadoop建立数据的映射表和存储管理服务，它包括：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"></p><ul style="margin: 0px 0px 1em 20px; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">提供一个共享模式和数据类型机制。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">提供一个抽象表，这样用户就不需要关注数据存储的方式和地址。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">为类似Pig、MapReduce及Hive这些数据处理工具提供互操作性。</span></li></ul><img src ="http://www.blogjava.net/paulwong/aggbug/398700.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-05-03 09:05 <a href="http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Phoenix: HBase终于有SQL接口了～</title><link>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 19 Feb 2013 15:15:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395432.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395432.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395432.html</trackback:ping><description><![CDATA[这项利器是由CRM领域的领导Saleforce发布的。相当于HBase的JDBC。<br /><br />具体详见：<a href="https://github.com/forcedotcom/phoenix" target="_blank">https://github.com/forcedotcom/phoenix</a><br /><br />支持select，from，where，groupby，having，orderby和建表操作，未来将支持二级索引，join操作，动态列簇等功能。<br /><br />是建立在原生HBASE API基础上的，响应时间10M级别的数据是毫秒，100M级别是秒。<br /><br /><br /><div><a href="http://www.infoq.com/cn/news/2013/02/Phoenix-HBase-SQL" target="_blank">http://www.infoq.com/cn/news/2013/02/Phoenix-HBase-SQL</a></div><img src ="http://www.blogjava.net/paulwong/aggbug/395432.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-19 23:15 <a href="http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE读书笔记-基础功能</title><link>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 06 Feb 2013 01:53:00 
GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395168.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395168.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395168.html</trackback:ping><description><![CDATA[<ol>
     <li>Using the HBase shell commands<br />
     <br />
     </li>
     <li>Using the HBase Java client<br /><br />Use PUT to insert or update records.<br /><br />How a PUT executes:<br />The value is first added to an in-memory MemStore; if the table has N column families there are N MemStores, and values belonging to different column families go into different MemStores. MemStore contents are not flushed to disk immediately, only when the MemStore fills up, and a flush never writes into an existing HFile but always creates a new HFile. A write-ahead log (WAL) entry is also written, because the new record is not written to an HFile right away; if the server goes down in between, HBase replays this log on restart to recover the data.<br /><br />Use DELETE to remove records.<br /><br />A delete does not remove the content from the HFiles; it only writes a marker, and queries then skip the marked records.<br /><br />Use GET to read a single record.<br /><br />On a read, the record is placed in the cache; again, a table with N column families has N caches, and values from different column families go into different caches. The next time the client reads, the cache and the MemStore are combined to produce the result.<br /><br />Use HBaseAdmin to create tables.<br /><br />Use SCAN with FILTERs to query multiple records (a minimal client sketch follows after this list).<br />
     <br />
     </li>
     <li>Distributed computation with HBase<br /><br />Why distributed computation is needed<br />The APIs above target online applications, i.e. low-latency access, roughly OLTP. For large volumes of data those APIs no longer fit.<br />To analyze a whole table you would use SCAN, which pulls the entire table back to the local machine; with 100 GB of data that takes several hours. To save time you can introduce multiple threads, which requires a new algorithm: split the full table into N segments, process each segment in one thread, then merge the results and run the analysis.<br /><br />With 200 GB or more the time doubles again and multithreading is no longer enough, so multiple processes are introduced, i.e. the computation runs on different physical machines. At that point you also have to handle what happens when any machine goes down; Hadoop MapReduce is exactly this kind of distributed-computation framework, and the application developer only has to write the scatter and aggregate logic, everything else is taken care of.<br /><br />HBase MapReduce<br />Uses TableMapper and TableReducer (a sketch follows after this list).<br /><br />HBase deployment architecture and components<br />Built on top of Hadoop and ZooKeeper.<br /><br />How HBase reads and writes records<br />See the previous post.<br /><br />Using HBase as a data source, a data sink, and a shared data source<br />This corresponds to the JOIN algorithms of a database: reduce-side join and map-side join.<br /></li>
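<div>As a rough illustration of item 2 above, a minimal sketch of the basic client operations against the 0.94-era Java API these notes assume; the table name "test_table", the column family "d" and the row/column names are made up for illustration:</div>
<pre>
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class BasicClientSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "test_table");

        // PUT: insert or update; the value lands in the MemStore and the WAL as described above
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("d"), Bytes.toBytes("col1"), Bytes.toBytes("value1"));
        table.put(put);

        // GET: read a single row (served from cache + MemStore + HFiles)
        Result result = table.get(new Get(Bytes.toBytes("row1")));
        System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("col1"))));

        // SCAN + FILTER: read multiple rows whose keys share a prefix
        Scan scan = new Scan();
        scan.setFilter(new PrefixFilter(Bytes.toBytes("row")));
        ResultScanner scanner = table.getScanner(scan);
        for (Result r : scanner) {
            System.out.println(Bytes.toString(r.getRow()));
        }
        scanner.close();

        // DELETE: only writes a delete marker, the HFiles are untouched until compaction
        table.delete(new Delete(Bytes.toBytes("row1")));

        table.close();
    }
}
</pre>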
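<div>And for item 3, a minimal, hypothetical sketch of an HBase MapReduce job built on TableMapper (a map-only row count); the table name and class names are again made up. TableMapReduceUtil wires the table, the Scan and the mapper together so that each mapper processes one region's rows:</div>
<pre>
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class TableScanJobSketch {

    // The mapper receives one row (rowkey + Result) per call and only bumps a counter.
    static class RowCountMapper extends TableMapper&lt;NullWritable, NullWritable&gt; {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
                throws IOException, InterruptedException {
            context.getCounter("sketch", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "table-scan-sketch");
        job.setJarByClass(TableScanJobSketch.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // a larger scanner cache suits batch scans
        scan.setCacheBlocks(false);  // do not pollute the region server block cache

        // Split the table by region and feed each region's rows to a mapper; no reducer needed.
        TableMapReduceUtil.initTableMapperJob("test_table", scan,
                RowCountMapper.class, NullWritable.class, NullWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</pre>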
</ol><img src ="http://www.blogjava.net/paulwong/aggbug/395168.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-06 09:53 <a href="http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>监控HBASE</title><link>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 04 Feb 2013 07:08:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395107.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395107.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395107.html</trackback:ping><description><![CDATA[@import url(http://www.blogjava.net/CuteSoft_Client/CuteEditor/Load.ashx?type=style&file=SyntaxHighlighter.css);@import url(/css/cuteeditor.css);
<div>Hadoop/HBase are open-source implementations of Google's Bigtable, GFS and MapReduce. As the Internet grows, big data processing becomes ever more important and Hadoop/HBase find ever wider use. To run a Hadoop/HBase system well you need a solid monitoring setup that shows the real-time state of the system, so that everything stays under control. Hadoop/HBase ship with a very complete metrics framework covering system indicators along many dimensions; the framework is also well designed, so users can easily add custom metrics of their own. More important still is how metrics are exposed: currently three ways are supported: writing to local files, reporting to a Ganglia system, and exposing them over JMX. This post mainly describes how to report Hadoop/HBase metrics to Ganglia and view them in a browser.<br />
<br />
Before going further it is worth briefly introducing Ganglia. Ganglia is an open-source system-monitoring system made up of three parts: gmond, gmetad and webfrontend, which divide the work as follows:<br />
<br />
gmond: a daemon that runs on every node to be monitored; it collects monitoring statistics and sends and receives statistics over a shared multicast or unicast channel<br />
gmetad: a daemon that periodically polls the gmond daemons, pulls data from them, and stores their metrics in the RRD storage engine<br />
webfrontend: installed on the machine running gmetad so it can read the RRD files; it provides the web front end<br />
<br />
To sum up the three roles: gmond collects the metrics on each node, gmetad aggregates what the gmonds collect, and webfrontend displays gmetad's aggregated data. By default Ganglia monitors basic system metrics such as cpu/memory/net, but Hadoop/HBase have built-in Ganglia support, so a simple configuration change is enough to feed Hadoop/HBase metrics into Ganglia as well.<br />
<br />
Next, how to hook Hadoop/HBase into Ganglia. The Hadoop/HBase version used here is 0.94.2; earlier versions may differ, so watch for the differences. HBase was originally a subproject of Hadoop and therefore used the same Hadoop metrics framework, but Hadoop later introduced an improved framework, metrics2 (metrics version 2), which the projects under Hadoop have all started using. HBase, now a top-level Apache project parallel to Hadoop, has not yet moved to metrics2 and still uses the original metrics, so Hadoop and HBase metrics are covered separately here.<br />
<br />
Hooking Hadoop into Ganglia:<br />
<br />
1. The configuration file for Hadoop metrics2 is hadoop-metrics2.properties<br />
2. hadoop metrics2 introduces the concepts of source and sink: a source collects data, and a sink consumes what the sources collect (writing to files, reporting to ganglia, JMX, and so on)<br />
3. Configuring hadoop metrics2 to support Ganglia:</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->#*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30<br />
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31<br />
&nbsp;<br />
*.sink.ganglia.period=10<br />
*.sink.ganglia.supportsparse=true<br />
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both<br />
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40<br />
&nbsp;<br />
#uncomment&nbsp;as&nbsp;your&nbsp;needs<br />
namenode.sink.ganglia.servers=10.235.6.156:8649<br />
#datanode.sink.ganglia.servers=10.235.6.156:8649<br />
#jobtracker.sink.ganglia.servers=10.0.3.99:8649<br />
#tasktracker.sink.ganglia.servers=10.0.3.99:8649<br />
#maptask.sink.ganglia.servers=10.0.3.99:8649<br />
#reducetask.sink.ganglia.servers=10.0.3.99:8649</div>
</div>
<br />
<div><br />
</div>
<div>A few points to note here:<br />
<br />
(1) Ganglia 3.1 is not compatible with 3.0, so choose GangliaSink30 or GangliaSink31 according to your Ganglia version<br />
(2) period sets the reporting interval, in seconds (s)<br />
(3) namenode.sink.ganglia.servers specifies the host:port of the Ganglia gmetad machine to which the data is reported<br />
(4) if several hadoop processes (namenode/datanode, etc) run on the same physical machine, just configure sink.ganglia.servers for each process as needed<br />
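<div>The introduction above mentions that custom metrics are easy to add to the metrics framework. As a rough illustration, here is a minimal sketch of a custom metrics2 source, assuming the Hadoop 2.x-style annotation API (class names differ somewhat in the older 0.20/1.x metrics2 code); MyAppMetrics, the "myapp" prefix and the counter are made-up names. Once registered, the counter is delivered to whichever sinks (file, Ganglia, JMX) hadoop-metrics2.properties configures for that prefix.</div>
<pre>
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// A hypothetical application-level metrics source.
@Metrics(name = "MyAppMetrics", about = "Example custom metrics", context = "myapp")
public class MyAppMetrics {

    @Metric("Requests handled by the application")
    MutableCounterLong requests;

    public static MyAppMetrics create() {
        // initialize() is normally called once per process; the prefix selects the
        // "myapp.sink.*" entries in hadoop-metrics2.properties
        MetricsSystem ms = DefaultMetricsSystem.initialize("myapp");
        return ms.register("MyAppMetrics", "Example custom metrics", new MyAppMetrics());
    }

    public void incrRequests() {
        requests.incr();
    }
}
</pre>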
Hooking HBase into Ganglia:<br />
<br />
1. The configuration file for the hadoop metrics framework used by HBase is hadoop-metrics.properties<br />
2. The core concept in hadoop metrics is the Context: TimeStampingFileContext writes to files, and GangliaContext/GangliaContext31 report to Ganglia<br />
3. Configuring hadoop metrics to support Ganglia:</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->#&nbsp;Configuration&nbsp;of&nbsp;the&nbsp;"hbase"&nbsp;context&nbsp;for&nbsp;ganglia<br />
#&nbsp;Pick&nbsp;one:&nbsp;Ganglia&nbsp;3.0&nbsp;(former)&nbsp;or&nbsp;Ganglia&nbsp;3.1&nbsp;(latter)<br />
#&nbsp;hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext<br />
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31<br />
hbase.period=10<br />
hbase.servers=10.235.6.156:8649</div>
</div>
<div><br />
</div>
<div>A few points to note:<br />
<br />
(1) Ganglia 3.1 and 3.0 are not compatible, so for versions before 3.1 use GangliaContext, and for Ganglia 3.1 use GangliaContext31<br />
(2) period is in seconds (s) and sets how often data is reported to Ganglia<br />
(3) servers specifies the host:port of the Ganglia gmetad machine; data is reported to the specified gmetad<br />
(4) the rpc and jvm related metrics can be configured in the same way (a sketch follows below)</div>
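<div>For note (4), the jvm and rpc contexts mirror the hbase context above; a sketch, assuming the same Ganglia receiver host:port as in the example:</div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "># Configuration of the "jvm" and "rpc" contexts for ganglia (sketch)<br />
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31<br />
jvm.period=10<br />
jvm.servers=10.235.6.156:8649<br />
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31<br />
rpc.period=10<br />
rpc.servers=10.235.6.156:8649</div>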
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div><img src ="http://www.blogjava.net/paulwong/aggbug/395107.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-04 15:08 <a href="http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE部署要点</title><link>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 04 Feb 2013 04:10:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395101.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395101.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395101.html</trackback:ping><description><![CDATA[<div>REGIONS SERVER和TASK TRACKER SERVER不要在同一台机器上，最好如果有MAPREDUCE JOB运行的话，应该分开两个CLUSTER，即两群不同的服务器上，这样MAPREDUCE 的线下负载不会影响到SCANER这些线上负载。</div>
<div><br />
</div>
<div>If the cluster is mainly used for MapReduce jobs, it is fine to put the RegionServer and TaskTracker on the same machine.</div>
<div><br />
</div>
<div><br />
</div>
<div><span style="background-color: yellow; color: red; ">原始集群模式</span></div>
<div><br />
</div>
10 nodes or fewer, no MapReduce jobs, used mainly for low-latency access. Per-node configuration: 4-6 core CPU, 24-32 GB RAM, 4 SATA disks. The Hadoop NameNode, JobTracker, HBase Master and ZooKeeper all run on the same node.
<div><br />
</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">小型集群模式（10-20台服务器）</span></div>
<div><br />
</div>
The HBase Master goes on a machine of its own, so a lower-spec machine can be used for it. ZooKeeper also gets its own machine, while the NameNode and JobTracker share one machine.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">中型集群模式（20-50台服务器）</span></div>
<div><br />
</div>
Since saving on hardware is less of a concern at this size, the HBase Master and ZooKeeper can share machines, with three instances each of ZooKeeper and HBase Master. The NameNode and JobTracker still share one machine.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">大型集群模式（&gt;50台服务器）</span></div>
<div><br />
</div>
Similar to the medium cluster, but run five instances each of ZooKeeper and HBase Master. The NameNode and Secondary NameNode need sufficiently large memory.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HADOOP MASTER节点</span></div>
<div><br />
</div>
NameNode and Secondary NameNode server requirements: for a small cluster, 8-core CPU, 16 GB RAM, 1 GbE NIC and SATA disks; add another 16 GB RAM for a medium cluster and another 32 GB for a large one.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HBASE MASTER节点</span></div>
<div><br />
</div>
Server requirements: 4-core CPU, 8-16 GB RAM, 1 GbE NIC and 2 SATA disks, one for the operating system and the other for the HBase Master logs.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HADOOP DATA NODES和HBASE REGION SERVER节点</span></div>
<div><br />
</div>
The DataNode and RegionServer should run on the same server, and should not share it with a TaskTracker. Server requirements: 8-12 core CPU, 24-32 GB RAM, 1 GbE NIC and 12 x 1 TB SATA disks, one for the operating system and the rest for HDFS data.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">ZOOPKEEPERS节点</span></div>
<div><br />
</div>
Server configuration is similar to the HBase Master. ZooKeeper can also be co-located with the HBase Master, but in that case add an extra disk dedicated to ZooKeeper.</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">安装各节点</span></div>
<div><br />
</div>
JVM settings:</div>
-Xmx8g: sets the maximum heap to 8 GB; going much beyond this (around 15 GB) is not recommended.<br />
-Xms8g: sets the minimum heap to 8 GB as well.<br />
-Xmn128m: sets the young generation to 128 MB; the default is too small.<br />
-XX:+UseParNewGC: selects the collector for the young generation. This collector pauses the Java process while it runs, but because the young generation is small the pause usually lasts only a few milliseconds, which is acceptable.<br />
-XX:+UseConcMarkSweepGC: selects the CMS collector for the old generation. A stop-the-world collector would pause the Java process for too long there; CMS instead collects concurrently while the Java process keeps running.<br />
-XX:CMSInitiatingOccupancyFraction: sets the old-generation occupancy percentage at which the CMS collector starts. A combined example is sketched below.<br />
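<div>Putting these flags together, a minimal sketch of how they might be set in conf/hbase-env.sh (the occupancy threshold of 70 is an illustrative value, not taken from the text above):</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;"># conf/hbase-env.sh (sketch): GC settings discussed above<br />
export HBASE_OPTS="-Xmx8g -Xms8g -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"</div>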
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div><img src ="http://www.blogjava.net/paulwong/aggbug/395101.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-04 12:10 <a href="http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE读书笔记</title><link>http://www.blogjava.net/paulwong/archive/2013/02/01/395020.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 01 Feb 2013 05:55:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/01/395020.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395020.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/01/395020.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395020.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395020.html</trackback:ping><description><![CDATA[<div>GET、PUT是ONLINE的操作，MAPREDUCE是OFFLINE的操作</div>
<div></div><br/><br/>
<div><span style="color: #0000ff; background-color: yellow;">HDFS写流程</span></div>
<div>When the client is asked to store a file, it splits the file into 64 MB blocks and sends the resulting block list to the NameNode. The NameNode works out which block should go to which DataNodes. The client then sends the first block to DataNode A and asks it to store it and to have DataNode D and DataNode B store copies as well; D stores its copy and passes one on to B, and once B has finished, completion is reported back to the client. The client then asks the NameNode where the next block should go, and the process repeats until every block has been stored. From the application's point of view all of this is transparent, as in the sketch below.</div>
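<div>The replication pipeline above is handled entirely by HDFS; from client code it is a plain stream write. A minimal sketch using the Hadoop Java API (the path and NameNode address are illustrative):</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;">import org.apache.hadoop.conf.Configuration;<br />
import org.apache.hadoop.fs.FSDataOutputStream;<br />
import org.apache.hadoop.fs.FileSystem;<br />
import org.apache.hadoop.fs.Path;<br />
<br />
public class HdfsWriteDemo {<br />
&nbsp;&nbsp;&nbsp;&nbsp;public static void main(String[] args) throws Exception {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Configuration conf = new Configuration();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// Illustrative NameNode address (hadoop-core 1.x style property)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("fs.default.name", "hdfs://ubuntu:9000");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileSystem fs = FileSystem.get(conf);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// The block splitting and replication pipeline happen behind this stream<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FSDataOutputStream out = fs.create(new Path("/demo/hello.txt"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;out.write("hello hdfs".getBytes("UTF-8"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;out.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;fs.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
}</div>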
<div></div><br/>
<div><span style="color: #0000ff; background-color: yellow;">HDFS读流程</span></div>
<div>To read a file, the client asks the NameNode, which returns the DataNode addresses and block IDs for all the blocks that make up the file. The client then requests those blocks from the DataNodes in parallel, each DataNode returns the requested block, and once all blocks have arrived the client assembles them into the complete file; a sketch of the client-side read follows.<br />
<br />
<br />
</div>
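<div>The corresponding read is again a single stream from the client's point of view; a minimal sketch (path and NameNode address are illustrative):</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;">import java.io.BufferedReader;<br />
import java.io.InputStreamReader;<br />
<br />
import org.apache.hadoop.conf.Configuration;<br />
import org.apache.hadoop.fs.FileSystem;<br />
import org.apache.hadoop.fs.Path;<br />
<br />
public class HdfsReadDemo {<br />
&nbsp;&nbsp;&nbsp;&nbsp;public static void main(String[] args) throws Exception {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Configuration conf = new Configuration();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("fs.default.name", "hdfs://ubuntu:9000"); // illustrative NameNode address<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileSystem fs = FileSystem.get(conf);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// fs.open hides the per-block requests to the DataNodes described above<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path("/demo/hello.txt")), "UTF-8"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String line;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;while ((line = in.readLine()) != null) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.out.println(line);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;in.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;fs.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
}</div>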
<div></div>
<div><span style="color: #0000ff; background-color: yellow;">MAPREDUCE流程</span></div>
<div>Input data -- picked up by multiple processes rather than multiple threads: the input is split into chunks and each process works on one chunk -- grouping -- the data is gathered, again by multiple processes -- output. A word-count sketch of this flow follows.</div>
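<div>As a concrete illustration of that flow, a minimal word-count mapper and reducer using the standard Hadoop MapReduce API (class names are illustrative):</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;">import java.io.IOException;<br />
<br />
import org.apache.hadoop.io.IntWritable;<br />
import org.apache.hadoop.io.LongWritable;<br />
import org.apache.hadoop.io.Text;<br />
import org.apache.hadoop.mapreduce.Mapper;<br />
import org.apache.hadoop.mapreduce.Reducer;<br />
<br />
public class WordCount {<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;// Map phase: each process handles one input split and emits (word, 1) pairs<br />
&nbsp;&nbsp;&nbsp;&nbsp;public static class TokenMapper extends Mapper&lt;LongWritable, Text, Text, IntWritable&gt; {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;private final static IntWritable ONE = new IntWritable(1);<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;protected void map(LongWritable key, Text value, Context context)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;throws IOException, InterruptedException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for (String token : value.toString().split("\\s+")) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if (token.length() &gt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;context.write(new Text(token), ONE);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;// Reduce phase: the framework groups by word, the reducer sums the counts<br />
&nbsp;&nbsp;&nbsp;&nbsp;public static class SumReducer extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;protected void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;throws IOException, InterruptedException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;int sum = 0;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for (IntWritable v : values) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;sum += v.get();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;context.write(key, new IntWritable(sum));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
}</div>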
<div><br />
<span style="color: #0000ff; background-color: yellow;">HBASE表结构</span></div>
<div>HBase splits a large table into smaller pieces called regions. The server that holds regions is called a RegionServer, and one RegionServer can host many regions. A RegionServer is usually co-located with a DataNode to reduce network I/O.</div>
<div></div>
<div>The location of the -ROOT- table is kept in ZooKeeper. -ROOT- records where the .META. regions are, and .META. records which RegionServer serves each region of each user table, so finding out how many regions a table has means looking it up in .META..</div>
<div></div>
<div>If a client wants to look up ROW009, it first asks ZooKeeper where -ROOT- is, then asks -ROOT- which .META. region knows about the row, then asks that .META. region which region holds it, and finally asks that region for ROW009, which returns the data. A sketch of the client-side call follows.<br />
</div>
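<div>All of those hops are hidden behind the client API; a minimal sketch of the lookup with the HBase 0.94-era Java client (table and column names are illustrative):</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;">import org.apache.hadoop.conf.Configuration;<br />
import org.apache.hadoop.hbase.HBaseConfiguration;<br />
import org.apache.hadoop.hbase.client.Get;<br />
import org.apache.hadoop.hbase.client.HTable;<br />
import org.apache.hadoop.hbase.client.Result;<br />
import org.apache.hadoop.hbase.util.Bytes;<br />
<br />
public class GetRowDemo {<br />
&nbsp;&nbsp;&nbsp;&nbsp;public static void main(String[] args) throws Exception {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Configuration conf = HBaseConfiguration.create();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("hbase.zookeeper.quorum", "ubuntu"); // illustrative quorum<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;HTable table = new HTable(conf, "demo_table"); // illustrative table name<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// The ZooKeeper / -ROOT- / .META. lookups described above happen inside this call<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Result result = table.get(new Get(Bytes.toBytes("ROW009")));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual1"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.out.println(value == null ? "null" : new String(value, "UTF-8"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;table.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
}</div>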
<br />
<br />
<div><span style="color: #0000ff; background-color: yellow;">HBASE MAPREDUCE</span></div>
<div>There is one map task per region, and the map method inside that task runs once for each record returned by the scan.</div>
<div>A reduce task writes data back to regions; which region a row goes to is determined by which region owns that key, so a reduce task may end up talking to every RegionServer.<br />
</div>
<br />
<br />
<div><span style="color: #0000ff; background-color: yellow;">在HBASE的MAPREDUCE JOB中使用JOIN</span></div>
<div>REDUCE-SIDE JOIN<br />
Uses the existing shuffle grouping mechanism and performs the join in the reduce phase; because the full data set has to flow through the map phase and the shuffle, this can be a performance problem.</div>
<div>MAP-SIDE JOIN</div>
<div>Load the smaller of the two tables into a shared file, then in the map method iterate over the other table's records and pull the matching rows out of that file. This avoids the shuffle and sort cost and needs no reduce task; a sketch follows.</div>
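<div>A minimal sketch of such a map-side join, assuming the smaller table has been shipped to each mapper (for example via the distributed cache) as a local file small_table.txt of tab-separated key/value lines; all names here are illustrative:</div>
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;">import java.io.BufferedReader;<br />
import java.io.FileReader;<br />
import java.io.IOException;<br />
import java.util.HashMap;<br />
import java.util.Map;<br />
<br />
import org.apache.hadoop.io.LongWritable;<br />
import org.apache.hadoop.io.Text;<br />
import org.apache.hadoop.mapreduce.Mapper;<br />
<br />
// Map-side join: no reduce task is needed (set the job's number of reduce tasks to 0)<br />
public class MapSideJoinMapper extends Mapper&lt;LongWritable, Text, Text, Text&gt; {<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;private final Map&lt;String, String&gt; smallTable = new HashMap&lt;String, String&gt;();<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;protected void setup(Context context) throws IOException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// Load the small table once per mapper; it has to fit in memory<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;BufferedReader reader = new BufferedReader(new FileReader("small_table.txt"));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String line;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;while ((line = reader.readLine()) != null) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String[] parts = line.split("\t", 2);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if (parts.length == 2) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;smallTable.put(parts[0], parts[1]);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;reader.close();<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;protected void map(LongWritable key, Text value, Context context)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;throws IOException, InterruptedException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// Each line of the large table is "key&lt;TAB&gt;payload"; join on the key<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String[] parts = value.toString().split("\t", 2);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if (parts.length == 2 &amp;&amp; smallTable.containsKey(parts[0])) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;context.write(new Text(parts[0]), new Text(parts[1] + "," + smallTable.get(parts[0])));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
}</div>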
<div></div>
<img src ="http://www.blogjava.net/paulwong/aggbug/395020.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-01 13:55 <a href="http://www.blogjava.net/paulwong/archive/2013/02/01/395020.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Cassandra VS. HBase 全文zz</title><link>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 29 Jan 2013 16:22:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394902.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394902.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394902.html</trackback:ping><description><![CDATA[<div id="content" mod-cs-content="" text-content=""  clearfix"="" style="zoom: 1; width: 758px; overflow: hidden; line-height: 1.5; margin: 7px 0px 10px; color: #454545; font-family: tahoma, helvetica, arial;"><p style="margin: 0px; padding: 0px;">摘取了一部分，全文请查看</p><p style="margin: 0px; padding: 0px;"><a href="http://blog.csdn.net/anghlq/article/details/6538115" target="_blank" style="color: #3fa7cb;"></a></p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong><a href="http://blog.sina.com.cn/s/blog_633f4ab20100r9nm.html" target="_blank" style="color: #3fa7cb;">http://blog.sina.com.cn/s/blog_633f4ab20100r9nm.html</a><br /></strong></p><p style="margin: 0px; padding: 0px;"><strong>背景</strong></p><p style="margin: 0px; padding: 0px;">&#8220;这是最好的时代，也是最坏的时代。&#8221;&nbsp;</p><p style="margin: 0px; padding: 0px;">每个时代的人都在这么形容自己所处的时代。在一次次IT浪潮下面，有人觉得当下乏味无聊，有人却能锐意进取，找到突破。数据存储这个话题自从有了计算机之后，就一直是一个有趣或者无聊的主题。上世纪七十年代，关系数据库理论的出现，造就了一批又一批传奇，并推动整个世界信息化到了一个新的高度。而进入新千年以来，随着SNS等应用的出现，传统的SQL数据库已经越来越不适应海量数据的处理了。于是，这几年NoSQL数据库的呼声也越来越高。</p><p style="margin: 0px; padding: 0px;">在NoSQL数据库当中，呼声最高的是HBase和Cassandra两个。虽然严格意义上来说，两者服务的目的有所不同，侧重点也不尽相同，但是作为当前开源NoSQL数据库的佼佼者，两者经常被用来做各种比较。</p><p style="margin: 0px; padding: 0px;">去年十月，Facebook推出了他的新的Message系统。Facebook宣布他们采用HBase作为后台存储系统。这引起了一片喧哗声。因为Cassandra恰恰是Facebook开发，并且于2008年开源。这让很多人惊呼，是否是Cassandra已经被Facebook放弃了？HBase在这场NoSQL数据库的角力当中取得了决定性的胜利？本文打算主要从技术角度分析，HBase和Cassandra的异同，并非要给出任何结论，只是共享自己研究的一些结果。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;"><strong>选手简介</strong></p><p style="margin: 0px; padding: 0px;"><strong>HBase</strong></p><p style="margin: 0px; padding: 0px;">HBase是一个开源的分布式存储系统。他可以看作是Google的Bigtable的开源实现。如同Google的Bigtable使用Google File System一样，HBase构建于和Google File System类似的Hadoop HDFS之上。</p><p style="margin: 0px; padding: 0px;"><strong>Cassandra</strong></p><p style="margin: 0px; padding: 0px;">Cassandra可以看作是Amazon Dynamo的开源实现。和Dynamo不同之处在于，Cassandra结合了Google Bigtable的ColumnFamily的数据模型。可以简单地认为，Cassandra是一个P2P的，高可靠性并具有丰富的数据模型的分布式文件系统。</p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong>分布式文件系统的指标</strong></p><p style="margin: 0px; padding: 0px;">根据UC Berkeley的教授Eric Brewer于2000年提出猜测- CAP定理，一个分布式计算机系统，不可能同时满足以下三个指标：</p>Consistency 所有节点在同一时刻保持同一状态Availability 某个节点失败，不会影响系统的正常运行Partition tolerance 系统可以因为网络故障等原因被分裂成小的子系统，而不影响系统的运行<p 
style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">Brewer教授推测，任何一个系统，同时只能满足以上两个指标。</p><p style="margin: 0px; padding: 0px;">在2002年，MIT的Seth Gilbert和Nancy Lynch发表正式论文论证了CAP定理。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">而HBase和Cassandra两者都属于分布式计算机系统。但是其设计的侧重点则有所不同。HBase继承于Bigtable的设计，侧重于CA。而Cassandra则继承于Dynamo的设计，侧重于AP。</p><p style="margin: 0px; padding: 0px;"></p>。。。。。。。。。。。。。。。。。。。<p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong>特性比较</strong></p><p style="margin: 0px; padding: 0px;">由于HBase和Cassandra的数据模型比较接近，所以这里就不再比较两者之间数据模型的异同了。接下来主要比较双方在数据一致性、多拷贝复制的特性。</p><p style="margin: 0px; padding: 0px;"><strong>HBase</strong></p><p style="margin: 0px; padding: 0px;">HBase保证写入的一致性。当一份数据被要求复制N份的时候，只有N份数据都被真正复制到N台服务器上之后，客户端才会成功返回。如果在复制过程中出现失败，所有的复制都将失败。连接上任何一台服务器的客户端都无法看到被复制的数据。HBase提供行锁，但是不提供多行锁和事务。HBase基于HDFS，因此数据的多份复制功能和可靠性将由HDFS提供。HBase和MapReduce天然集成。</p><p style="margin: 0px; padding: 0px;"><strong>Cassandra</strong></p><p style="margin: 0px; padding: 0px;">写入的时候，有多种模式可以选择。当一份数据模式被要求复制N份的时候，可以立即返回，可以成功复制到一个服务器之后返回，可以等到全部复制到N份服务器之后返回，还可以设定一个复制到quorum份服务器之后返回。Quorum后面会有具体解释。复制不会失败。最终所有节点数据都将被写入。而在未被完全写入的时间间隙，连接到不同服务器的客户端有可能读到不同的数据。在集群里面，所有的服务器都是等价的。不存在任何一个单点故障。节点和节点之间通过Gossip协议互相通信。写入顺序按照timestamp排序，不提供行锁。新版本的Cassandra已经集成了MapReduce了。</p><p style="margin: 0px; padding: 0px;">相对于配置Cassandra，配置HBase是一个艰辛、复杂充满陷阱的工作。Facebook关于为何采取HBase，里面有一句，大意是，Facebook长期以来一直关注HBase的开发并且有一只专门的经验丰富的HBase维护的team来负责HBase的安装和维护。可以想象，Facebook内部关于使用HBase和Cassandra有过激烈的斗争，最终人数更多的HBase&nbsp;team占据了上风。对于大公司来说，养一只相对庞大的类似DBA的team来维护HBase不算什么大的开销，但是对于小公司，这实在不是一个可以负担的起的开销。</p><p style="margin: 0px; padding: 0px;">另外HBase在高可靠性上有一个很大的缺陷，就是HBase依赖HDFS。HDFS是Google File&nbsp;System的复制品，NameNode是HDFS的单点故障点。而到目前为止，HDFS还没有加入NameNode的自我恢复功能。不过我相信，Facebook在内部一定有恢复NameNode的手段，只是没有开源出来而已。</p><p style="margin: 0px; padding: 0px;">相反，Cassandra的P2P和去中心化设计，没有可能出现单点故障。从设计上来看，Cassandra比HBase更加可靠。</p><p style="margin: 0px; padding: 0px;"><strong>关于数据一致性，实际上，Cassandra也可以以牺牲响应时间的代价来获得和HBase一样的一致性。而且，通过对Quorum的合适的设置，可以在响应时间和数据一致性得到一个很好的折衷值。</strong></p>Cassandra优缺点<p style="margin: 0px; padding: 0px;">主要表现在：</p><p style="margin: 0px; padding: 0px;">配置简单，不需要多模块协同操作。功能灵活性强，数据一致性和性能之间，可以根据应用不同而做不同的设置。&nbsp;可靠性更强，没有单点故障。</p><p style="margin: 0px; padding: 0px;">尽管如此，Cassandra就没有弱点吗？当然不是，Cassandra有一个致命的弱点。</p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;">这就是存储大文件。虽然说，Cassandra的设计初衷就不是存储大文件，但是Amazon的S3实际上就是基于Dynamo构建的，总是会让人想入非非地让Cassandra去存储超大文件。而和Cassandra不同，HBase基于HDFS，HDFS的设计初衷就是存储超大规模文件并且提供最大吞吐量和最可靠的可访问性。因此，从这一点来说，Cassandra由于背后不是一个类似HDFS的超大文件存储的文件系统，对于存储那种巨大的（几百T甚至P）的超大文件目前是无能为力的。而且就算由Client手工去分割，这实际上是非常不明智和消耗Client CPU的工作的。</p><p style="margin: 0px; padding: 0px;">因此，如果我们要构建一个类似Google的搜索引擎，最少，HDFS是我们所必不可少的。虽然目前HDFS的NameNode还是一个单点故障点，但是相应的Hack可以让NameNode变得更皮实。基于HDFS的HBase相应地，也更适合做搜索引擎的背后倒排索引数据库。事实上，Lucene和HBase的结合，远比Lucene结合Cassandra的项目Lucandra要顺畅和高效的多。（Lucandra要求Cassandra使用OrderPreservingPartitioner,这将可能导致Key的分布不均匀，而无法做负载均衡，产生访问热点机器）。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">所以我的结论是，在这个需求多样化的年代，没有赢者通吃的事情。而且我也越来越不相信在工程界存在一劳永逸和一成不变的解决方案。<strong>当你仅仅是存储海量增长的消息数据，存储海量增长的图片，小视频的时候，你要求数据不能丢失，你要求人工维护尽可能少，你要求能迅速通过添加机器扩充存储，那么毫无疑问，Cassandra现在是占据上风的。</strong></p><p style="margin: 0px; padding: 0px;">但是<strong>如果你希望构建一个超大规模的搜索引擎，产生超大规模的倒排索引文件（当然是逻辑上的文件，真实文件实际上被切分存储于不同的节点上），那么目前HDFS+HBase是你的首选。</strong></p><p style="margin: 0px; padding: 
0px;">就让这个看起来永远正确的结论结尾吧，上帝的归上帝，凯撒的归凯撒。大家都有自己的地盘，野百合也会有春天的！</p></div><img src ="http://www.blogjava.net/paulwong/aggbug/394902.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-30 00:22 <a href="http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>NOSQL之旅---HBase(转)</title><link>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 29 Jan 2013 15:50:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394901.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html#Feedback</comments><slash:comments>1</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394901.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394901.html</trackback:ping><description><![CDATA[<a href="http://www.jdon.com/38244" target="_blank">http://www.jdon.com/38244</a><br /><br />最近因为项目原因，研究了Cassandra,Hbase等几个NoSQL数据库，最终决定采用HBase。在这里，我就向大家分享一下自己对HBase的理解。<br /><br />在说HBase之前，我想再唠叨几句。做互联网应用的哥们儿应该都清楚，互联网应用这东西，你没办法预测你的系统什么时候会被多少人访问，你面临的用户到底有多少，说不定今天你的用户还少，明天系统用户就变多了，结果您的系统应付不过来了了，不干了，这岂不是咱哥几个的悲哀，说时髦点就叫&#8220;杯具啊&#8221;。<br /><br />其实说白了，这些就是事先没有认清楚互联网应用什么才是最重要的。从系统架构的角度来说，互联网应用更加看重系统性能以及伸缩性，而传统企业级应用都是比较看重数据完整性和数据安全性。那么我们就来说说互联网应用伸缩性这事儿.对于伸缩性这事儿，哥们儿我也写了几篇博文，想看的兄弟可以参考我以前的博文，对于web server,app server的伸缩性，我在这里先不说了，因为这部分的伸缩性相对来说比较容易一点，我主要来回顾一些一个慢慢变大的互联网应用如何应对数据库这一层的伸缩。<br /><br />首先刚开始，人不多，压力也不大,搞一台数据库服务器就搞定了，此时所有的东东都塞进一个Server里，包括web server,app server,db server,但是随着人越来越多，系统压力越来越多，这个时候可能你把web server,app server和db server分离了，好歹这样可以应付一阵子，但是随着用户量的不断增加，你会发现，数据库这哥们不行了，速度老慢了，有时候还会宕掉，所以这个时候，你得给数据库这哥们找几个伴，这个时候Master-Salve就出现了，这个时候有一个Master Server专门负责接收写操作，另外的几个Salve Server专门进行读取，这样Master这哥们终于不抱怨了，总算读写分离了，压力总算轻点了,这个时候其实主要是对读取操作进行了水平扩张，通过增加多个Salve来克服查询时CPU瓶颈。一般这样下来，你的系统可以应付一定的压力，但是随着用户数量的增多，压力的不断增加，你会发现Master server这哥们的写压力还是变的太大，没办法，这个时候怎么办呢？你就得切分啊，俗话说&#8220;只有切分了，才会有伸缩性嘛&#8221;，所以啊，这个时候只能分库了，这也是我们常说的数据库&#8220;垂直切分&#8221;，比如将一些不关联的数据存放到不同的库中，分开部署，这样终于可以带走一部分的读取和写入压力了，Master又可以轻松一点了，但是随着数据的不断增多，你的数据库表中的数据又变的非常的大，这样查询效率非常低，这个时候就需要进行&#8220;水平分区&#8221;了，比如通过将User表中的数据按照10W来划分，这样每张表不会超过10W了。<br /><br />综上所述，一般一个流行的web站点都会经历一个从单台DB，到主从复制，到垂直分区再到水平分区的痛苦的过程。其实数据库切分这事儿，看起来原理貌似很简单，如果真正做起来，我想凡是sharding过数据库的哥们儿都深受其苦啊。对于数据库伸缩的文章，哥们儿可以看看后面的参考资料介绍。<br /><br />好了，从上面的那一堆废话中，我们也发现数据库存储水平扩张scale out是多么痛苦的一件事情，不过幸好技术在进步，业界的其它弟兄也在努力，09年这一年出现了非常多的NoSQL数据库，更准确的应该说是No relation数据库，这些数据库多数都会对非结构化的数据提供透明的水平扩张能力，大大减轻了哥们儿设计时候的压力。下面我就拿Hbase这分布式列存储系统来说说。<br /><br />一 Hbase是个啥东东？ <br />在说Hase是个啥家伙之前，首先我们来看看两个概念，面向行存储和面向列存储。面向行存储，我相信大伙儿应该都清楚，我们熟悉的RDBMS就是此种类型的，面向行存储的数据库主要适合于事务性要求严格场合，或者说面向行存储的存储系统适合OLTP，但是根据CAP理论，传统的RDBMS，为了实现强一致性，通过严格的ACID事务来进行同步，这就造成了系统的可用性和伸缩性方面大大折扣，而目前的很多NoSQL产品，包括Hbase，它们都是一种最终一致性的系统，它们为了高的可用性牺牲了一部分的一致性。好像，我上面说了面向列存储，那么到底什么是面向列存储呢？Hbase,Casandra,Bigtable都属于面向列存储的分布式存储系统。看到这里，如果您不明白Hbase是个啥东东，不要紧，我再总结一下下：<br /><br />Hbase是一个面向列存储的分布式存储系统，它的优点在于可以实现高性能的并发读写操作，同时Hbase还会对数据进行透明的切分，这样就使得存储本身具有了水平伸缩性。<br /><br /><br />二 Hbase数据模型 <br />HBase,Cassandra的数据模型非常类似，他们的思想都是来源于Google的Bigtable，因此这三者的数据模型非常类似，唯一不同的就是Cassandra具有Super cloumn family的概念，而Hbase目前我没发现。好了，废话少说，我们来看看Hbase的数据模型到底是个啥东东。<br /><br />在Hbase里面有以下两个主要的概念，Row key,Column 
Family，我们首先来看看Column family,Column family中文又名&#8220;列族&#8221;，Column family是在系统启动之前预先定义好的，每一个Column Family都可以根据&#8220;限定符&#8221;有多个column.下面我们来举个例子就会非常的清晰了。<br /><br />假如系统中有一个User表，如果按照传统的RDBMS的话，User表中的列是固定的，比如schema 定义了name,age,sex等属性，User的属性是不能动态增加的。但是如果采用列存储系统，比如Hbase，那么我们可以定义User表，然后定义info 列族，User的数据可以分为：info:name = zhangsan,info:age=30,info:sex=male等，如果后来你又想增加另外的属性，这样很方便只需要info:newProperty就可以了。<br /><br />也许前面的这个例子还不够清晰，我们再举个例子来解释一下，熟悉SNS的朋友，应该都知道有好友Feed，一般设计Feed，我们都是按照&#8220;某人在某时做了标题为某某的事情&#8221;，但是同时一般我们也会预留一下关键字，比如有时候feed也许需要url，feed需要image属性等，这样来说，feed本身的属性是不确定的，因此如果采用传统的关系数据库将非常麻烦，况且关系数据库会造成一些为null的单元浪费，而列存储就不会出现这个问题，在Hbase里，如果每一个column 单元没有值，那么是占用空间的。下面我们通过两张图来形象的表示这种关系：<br /><br /><br /><br /><br />上图是传统的RDBMS设计的Feed表，我们可以看出feed有多少列是固定的，不能增加，并且为null的列浪费了空间。但是我们再看看下图，下图为Hbase，Cassandra,Bigtable的数据模型图，从下图可以看出，Feed表的列可以动态的增加，并且为空的列是不存储的，这就大大节约了空间，关键是Feed这东西随着系统的运行，各种各样的Feed会出现，我们事先没办法预测有多少种Feed，那么我们也就没有办法确定Feed表有多少列，因此Hbase,Cassandra,Bigtable的基于列存储的数据模型就非常适合此场景。说到这里，采用Hbase的这种方式，还有一个非常重要的好处就是Feed会自动切分，当Feed表中的数据超过某一个阀值以后，Hbase会自动为我们切分数据，这样的话，查询就具有了伸缩性，而再加上Hbase的弱事务性的特性，对Hbase的写入操作也将变得非常快。<br /><br /><br /><br />上面说了Column family，那么我之前说的Row key是啥东东，其实你可以理解row key为RDBMS中的某一个行的主键，但是因为Hbase不支持条件查询以及Order by等查询，因此Row key的设计就要根据你系统的查询需求来设计了额。我还拿刚才那个Feed的列子来说，我们一般是查询某个人最新的一些Feed，因此我们Feed的Row key可以有以下三个部分构成&lt;userId&gt;&lt;timestamp&gt;&lt;feedId&gt;，这样以来当我们要查询某个人的最进的Feed就可以指定Start Rowkey为&lt;userId&gt;&lt;0&gt;&lt;0&gt;，End Rowkey为&lt;userId&gt;&lt;Long.MAX_VALUE&gt;&lt;Long.MAX_VALUE&gt;来查询了，同时因为Hbase中的记录是按照rowkey来排序的，这样就使得查询变得非常快。<br /><br /><br />三 Hbase的优缺点 <br />1 列的可以动态增加，并且列为空就不存储数据,节省存储空间.<br /><br />2 Hbase自动切分数据，使得数据存储自动具有水平scalability.<br /><br />3 Hbase可以提供高并发读写操作的支持<br /><br />Hbase的缺点：<br /><br />1 不能支持条件查询，只支持按照Row key来查询.<br /><br />2 暂时不能支持Master server的故障切换,当Master宕机后,整个存储系统就会挂掉.<br /><br /><br /><br />关于数据库伸缩性的一点资料：<br /><a href="http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/" target="_blank">http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/</a><br /><br /><a href="http://adam.blog.heroku.com/past/2009/7/6/sql_databases_dont_scale/" target="_blank">http://adam.blog.heroku.com/past/2009/7/6/sql_databases_dont_scale/</a><img src ="http://www.blogjava.net/paulwong/aggbug/394901.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-29 23:50 <a href="http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Windows环境下用ECLIPSE提交MAPREDUCE JOB至远程HBASE中运行</title><link>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 28 Jan 2013 16:19:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394851.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394851.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394851.html</trackback:ping><description><![CDATA[<ol>
<li>Assume the remote Hadoop host is named ubuntu; add the line "192.168.58.130&nbsp;&nbsp;&nbsp;ubuntu" to the hosts file.<br />
     <br /><br />
     </li>
<li>Create a new Maven project and add the corresponding configuration.<br />
pom.xml<br />
     <div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">project&nbsp;</span><span style="color: #FF0000; ">xmlns</span><span style="color: #0000FF; ">="http://maven.apache.org/POM/4.0.0"</span><span style="color: #FF0000; ">&nbsp;xmlns:xsi</span><span style="color: #0000FF; ">="http://www.w3.org/2001/XMLSchema-instance"</span><span style="color: #FF0000; "><br />
     &nbsp;&nbsp;xsi:schemaLocation</span><span style="color: #0000FF; ">="http://maven.apache.org/POM/4.0.0&nbsp;http://maven.apache.org/xsd/maven-4.0.0.xsd"</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">modelVersion</span><span style="color: #0000FF; ">&gt;</span>4.0.0<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">modelVersion</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>com.cloudputing<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>bigdata<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>1.0<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">packaging</span><span style="color: #0000FF; ">&gt;</span>jar<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">packaging</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>bigdata<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">url</span><span style="color: #0000FF; ">&gt;</span>http://maven.apache.org<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">url</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">properties</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">project</span><span style="color: #FF0000; ">.build.sourceEncoding</span><span style="color: #0000FF; ">&gt;</span>UTF-8<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">project.build.sourceEncoding</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">properties</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependencies</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>junit<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>junit<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>3.8.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">scope</span><span style="color: #0000FF; ">&gt;</span>test<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">scope</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.springframework.data<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>spring-data-hadoop<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>0.9.0.RELEASE<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.apache.hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>0.94.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;&lt;dependency&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;groupId&gt;org.apache.hbase&lt;/groupId&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;artifactId&gt;hbase&lt;/artifactId&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;version&gt;0.90.2&lt;/version&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;/dependency&gt;&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.apache.hadoop<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>hadoop-core<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>1.0.3<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.springframework<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>spring-test<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>3.0.5.RELEASE<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependencies</span><span style="color: #0000FF; ">&gt;</span><br />
     <span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">project</span><span style="color: #0000FF; ">&gt;</span></div>
     </li>
     <br /><br />
     <li>
     <div>hbase-site.xml<br />
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
     <span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
     <span style="color: #008000; ">&lt;!--</span><span style="color: #008000; "><br />
     /**<br />
     &nbsp;*&nbsp;Copyright&nbsp;2010&nbsp;The&nbsp;Apache&nbsp;Software&nbsp;Foundation<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;Licensed&nbsp;to&nbsp;the&nbsp;Apache&nbsp;Software&nbsp;Foundation&nbsp;(ASF)&nbsp;under&nbsp;one<br />
     &nbsp;*&nbsp;or&nbsp;more&nbsp;contributor&nbsp;license&nbsp;agreements.&nbsp;&nbsp;See&nbsp;the&nbsp;NOTICE&nbsp;file<br />
     &nbsp;*&nbsp;distributed&nbsp;with&nbsp;this&nbsp;work&nbsp;for&nbsp;additional&nbsp;information<br />
     &nbsp;*&nbsp;regarding&nbsp;copyright&nbsp;ownership.&nbsp;&nbsp;The&nbsp;ASF&nbsp;licenses&nbsp;this&nbsp;file<br />
     &nbsp;*&nbsp;to&nbsp;you&nbsp;under&nbsp;the&nbsp;Apache&nbsp;License,&nbsp;Version&nbsp;2.0&nbsp;(the<br />
     &nbsp;*&nbsp;"License");&nbsp;you&nbsp;may&nbsp;not&nbsp;use&nbsp;this&nbsp;file&nbsp;except&nbsp;in&nbsp;compliance<br />
     &nbsp;*&nbsp;with&nbsp;the&nbsp;License.&nbsp;&nbsp;You&nbsp;may&nbsp;obtain&nbsp;a&nbsp;copy&nbsp;of&nbsp;the&nbsp;License&nbsp;at<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;http://www.apache.org/licenses/LICENSE-2.0<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;Unless&nbsp;required&nbsp;by&nbsp;applicable&nbsp;law&nbsp;or&nbsp;agreed&nbsp;to&nbsp;in&nbsp;writing,&nbsp;software<br />
     &nbsp;*&nbsp;distributed&nbsp;under&nbsp;the&nbsp;License&nbsp;is&nbsp;distributed&nbsp;on&nbsp;an&nbsp;"AS&nbsp;IS"&nbsp;BASIS,<br />
     &nbsp;*&nbsp;WITHOUT&nbsp;WARRANTIES&nbsp;OR&nbsp;CONDITIONS&nbsp;OF&nbsp;ANY&nbsp;KIND,&nbsp;either&nbsp;express&nbsp;or&nbsp;implied.<br />
     &nbsp;*&nbsp;See&nbsp;the&nbsp;License&nbsp;for&nbsp;the&nbsp;specific&nbsp;language&nbsp;governing&nbsp;permissions&nbsp;and<br />
     &nbsp;*&nbsp;limitations&nbsp;under&nbsp;the&nbsp;License.<br />
     &nbsp;*/<br />
     </span><span style="color: #008000; ">--&gt;</span><br />
     <span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.rootdir<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>hdfs://ubuntu:9000/hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;在构造JOB时，会新建一文件夹来准备所需文件。<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;如果这一段没写，则默认本地环境为LINUX，将用LINUX命令去实施，在WINDOWS环境下会出错&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.job.tracker<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>ubuntu:9001<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.cluster.distributed<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>true<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;此处会向ZOOKEEPER咨询JOB&nbsp;TRACKER的可用IP&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.zookeeper.quorum<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>ubuntu<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property&nbsp;</span><span style="color: #FF0000; ">skipInDoc</span><span style="color: #0000FF; ">="true"</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.defaults.for.version<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>0.94.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     <span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span></div>
     </div>
     </li>
     <br /><br />
<li>Test file: MapreduceTest.java<br />
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">package</span>&nbsp;com.cloudputing.mapreduce;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;java.io.IOException;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;junit.framework.TestCase;<br />
     <br />
     <span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MapreduceTest&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;TestCase{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;testReadJob()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException,&nbsp;InterruptedException,&nbsp;ClassNotFoundException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.read();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     <br />
     }</div>
     </li>
<br /><br />
     <li>
     <div>MapreduceRead.java</div>
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">package</span>&nbsp;com.cloudputing.mapreduce;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;java.io.IOException;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.conf.Configuration;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.fs.FileSystem;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.fs.Path;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.HBaseConfiguration;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.client.Result;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.client.Scan;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.io.ImmutableBytesWritable;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.mapreduce.TableMapper;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.util.Bytes;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.io.Text;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapreduce.Job;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;<br />
     <br />
     <span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MapreduceRead&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;read()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException,&nbsp;InterruptedException,&nbsp;ClassNotFoundException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;Add&nbsp;these&nbsp;statements.&nbsp;XXX<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File&nbsp;jarFile&nbsp;=&nbsp;EJob.createTempJar("target/classes");<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EJob.addClasspath("D:/PAUL/WORK/WORK-SPACES/TEST1/cloudputing/src/main/resources");<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ClassLoader&nbsp;classLoader&nbsp;=&nbsp;EJob.getClassLoader();<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Thread.currentThread().setContextClassLoader(classLoader);</span><span style="color: #008000; "><br />
     </span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Configuration&nbsp;config&nbsp;=&nbsp;HBaseConfiguration.create();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;addTmpJar("file:/D:/PAUL/WORK/WORK-SPACES/TEST1/cloudputing/target/bigdata-1.0.jar",config);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Job&nbsp;job&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Job(config,&nbsp;"ExampleRead");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;And&nbsp;add&nbsp;this&nbsp;statement.&nbsp;XXX<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;((JobConf)&nbsp;job.getConfiguration()).setJar(jarFile.toString());<br />
     <br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.addDependencyJars(job);<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.addDependencyJars(job.getConfiguration(),<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.class,MyMapper.class);</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job.setJarByClass(MapreduceRead.<span style="color: #0000FF; ">class</span>);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;class&nbsp;that&nbsp;contains&nbsp;mapper</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Scan&nbsp;scan&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Scan();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan.setCaching(500);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;1&nbsp;is&nbsp;the&nbsp;default&nbsp;in&nbsp;Scan,&nbsp;which&nbsp;will&nbsp;be&nbsp;bad&nbsp;for&nbsp;MapReduce&nbsp;jobs</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan.setCacheBlocks(<span style="color: #0000FF; ">false</span>);&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;don't&nbsp;set&nbsp;to&nbsp;true&nbsp;for&nbsp;MR&nbsp;jobs<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;set&nbsp;other&nbsp;scan&nbsp;attrs</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.initTableMapperJob(<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"wiki",&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;input&nbsp;HBase&nbsp;table&nbsp;name</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;Scan&nbsp;instance&nbsp;to&nbsp;control&nbsp;CF&nbsp;and&nbsp;attribute&nbsp;selection</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.MyMapper.<span style="color: #0000FF; ">class</span>,&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">null</span>,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper&nbsp;output&nbsp;key&nbsp;</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">null</span>,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper&nbsp;output&nbsp;value</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job.setOutputFormatClass(NullOutputFormat.<span style="color: #0000FF; ">class</span>);&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;because&nbsp;we&nbsp;aren't&nbsp;emitting&nbsp;anything&nbsp;from&nbsp;mapper<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;DistributedCache.addFileToClassPath(new&nbsp;Path("hdfs:</span><span style="color: #008000; ">//</span><span style="color: #008000; ">node.tracker1:9000/user/root/lib/stat-analysis-mapred-1.0-SNAPSHOT.jar"),job.getConfiguration());</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">boolean</span>&nbsp;b&nbsp;=&nbsp;job.waitForCompletion(<span style="color: #0000FF; ">true</span>);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">if</span>&nbsp;(!b)&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">throw</span>&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;IOException("error&nbsp;with&nbsp;job!");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">/**</span><span style="color: #008000; "><br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;Adds a third-party jar to the MapReduce job's classpath<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@param</span><span style="color: #008000; ">&nbsp;jarPath<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e.g. D:/Java/new_java_workspace/scm/lib/guava-r08.jar<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@param</span><span style="color: #008000; ">&nbsp;conf<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@throws</span><span style="color: #008000; ">&nbsp;IOException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #008000; ">*/</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;addTmpJar(String&nbsp;jarPath,&nbsp;Configuration&nbsp;conf)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.setProperty("path.separator",&nbsp;":");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileSystem&nbsp;fs&nbsp;=&nbsp;FileSystem.getLocal(conf);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;newJarPath&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Path(jarPath).makeQualified(fs).toString();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;tmpjars&nbsp;=&nbsp;conf.get("tmpjars");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">if</span>&nbsp;(tmpjars&nbsp;==&nbsp;<span style="color: #0000FF; ">null</span>&nbsp;||&nbsp;tmpjars.length()&nbsp;==&nbsp;0)&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("tmpjars",&nbsp;newJarPath);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;<span style="color: #0000FF; ">else</span>&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("tmpjars",&nbsp;tmpjars&nbsp;+&nbsp;":"&nbsp;+&nbsp;newJarPath);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MyMapper&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;TableMapper&lt;Text,&nbsp;Text&gt;&nbsp;{<br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;map(ImmutableBytesWritable&nbsp;row,&nbsp;Result&nbsp;value,<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Context&nbsp;context)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;InterruptedException,&nbsp;IOException&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;val1&nbsp;=&nbsp;getValue(value.getValue(Bytes.toBytes("text"),&nbsp;Bytes.toBytes("qual1")));<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;val2&nbsp;=&nbsp;getValue(value.getValue(Bytes.toBytes("text"),&nbsp;Bytes.toBytes("qual2")));<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.out.println(val1&nbsp;+&nbsp;"&nbsp;--&nbsp;"&nbsp;+&nbsp;val2);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">private</span>&nbsp;String&nbsp;getValue(<span style="color: #0000FF; ">byte</span>&nbsp;[]&nbsp;value)<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;value&nbsp;==&nbsp;<span style="color: #0000FF; ">null</span>?&nbsp;"null"&nbsp;:&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;String(value);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;<br />
     <br />
     }</div>
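<br />A hypothetical driver sketch (not from the original post; the jar path, table name and the MyJobDriver class are placeholders) showing how addTmpJar() might be called before submitting the job:<br />
<pre>
Configuration conf = HBaseConfiguration.create();
addTmpJar("/path/to/lib/guava-r08.jar", conf);   // registered under "tmpjars" and shipped to the cluster

Job job = new Job(conf, "scan-table-example");
job.setJarByClass(MyJobDriver.class);            // MyJobDriver is a placeholder
TableMapReduceUtil.initTableMapperJob("tablename", new Scan(), MyMapper.class,
        Text.class, Text.class, job);
job.setOutputFormatClass(NullOutputFormat.class);
job.waitForCompletion(true);
</pre>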
     </li>
</ol><img src ="http://www.blogjava.net/paulwong/aggbug/394851.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-29 00:19 <a href="http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>某hadoop视频教程内容</title><link>http://www.blogjava.net/paulwong/archive/2013/01/05/393807.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 05 Jan 2013 04:59:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/05/393807.html</guid><description><![CDATA[@import url(http://www.blogjava.net/CuteSoft_Client/CuteEditor/Load.ashx?type=style&file=SyntaxHighlighter.css);@import url(/css/cuteeditor.css);
Chapter 1: <br />
&gt; Hadoop background <br />
&gt; HDFS design goals <br />
&gt; Scenarios HDFS is not suited for <br />
&gt; A detailed analysis of the HDFS architecture <br />
&gt; Basic principles of MapReduce <br />
<br />
Chapter 2 <br />
&gt; Overview of Hadoop versions <br />
&gt; Installing single-node Hadoop <br />
&gt; Installing a Hadoop cluster <br />
<br />
Chapter 3 <br />
&gt; Basic HDFS command-line operations <br />
&gt; How the Namenode works <br />
&gt; Basic HDFS configuration management <br />
<br />
Chapter 4 <br />
&gt; HDFS in practice: an image server (1) - system design <br />
&gt; Setting up the application environment: php + bootstrap + java <br />
&gt; Writing files to HDFS with the Hadoop Java API <br />
<br />
Chapter 5 <br />
&gt; HDFS in practice: an image server (2) <br />
&gt; Reading files from HDFS with the Hadoop Java API <br />
&gt; Listing an HDFS directory with the Hadoop Java API <br />
&gt; Deleting files from HDFS with the Hadoop Java API <br />
<br />
Chapter 6 <br />
&gt; Basic principles of MapReduce <br />
&gt; The MapReduce execution flow <br />
&gt; Setting up a Java development environment for MapReduce <br />
&gt; Implementing WordCount with the MapReduce Java API <br />
<br />
Chapter 7 <br />
&gt; Analysis of how WordCount executes <br />
&gt; The MapReduce combiner <br />
&gt; Data deduplication with MapReduce <br />
&gt; Data sorting with MapReduce <br />
&gt; Computing average scores with MapReduce <br />
<br />
Chapter 8 <br />
&gt; HBase in detail <br />
&gt; The HBase system architecture <br />
&gt; HBase table structure: RowKey, column families and timestamps <br />
&gt; Master, Region and Region Server in HBase <br />
<br />
Chapter 9 <br />
&gt; Building a microblog application on HBase (1) <br />
&gt; Designing user registration, login and logout <br />
&gt; Setting up the environment: struts2 + jsp + bootstrap + jquery + HBase Java API <br />
&gt; Designing the user-related HBase table structures <br />
&gt; Implementing user registration <br />
<br />
Chapter 10 <br />
&gt; Building a microblog application on HBase (2) <br />
&gt; Implementing user login and logout with sessions <br />
&gt; Designing the "follow" feature <br />
&gt; Table structure design for the "follow" feature <br />
&gt; Implementing the "follow" feature <br />
<br />
Chapter 11 <br />
&gt; Building a microblog application on HBase (3) <br />
&gt; Designing the "post" feature <br />
&gt; Table structure design for the "post" feature <br />
&gt; Implementing the "post" feature <br />
&gt; Walking through the finished application <br />
<br />
Chapter 12 <br />
&gt; HBase and MapReduce <br />
&gt; How HBase uses MapReduce <br />
<br />
Chapter 13 <br />
&gt; HBase in practice: call-record query and statistics (1) <br />
&gt; Overall application design <br />
&gt; Setting up the development environment <br />
&gt; Table structure design <br />
<br />
Chapter 14 <br />
&gt; HBase in practice: call-record query and statistics (2) <br />
&gt; Design and implementation of call-record loading <br />
&gt; Design and implementation of call-record queries <br />
<br />
Chapter 15 <br />
&gt; HBase in practice: call-record query and statistics (3) <br />
&gt; Designing the statistics feature <br />
&gt; Implementing the statistics feature <br />
<br />
Chapter 16 <br />
&gt; MapReduce in depth (1) <br />
&gt; How splits are implemented <br />
&gt; Implementing a custom input format <br />
&gt; Worked example <br />
<br />
Chapter 17 <br />
&gt; MapReduce in depth (2) <br />
&gt; The Reduce-side partition <br />
&gt; Worked example <br />
<br />
Chapter 18 <br />
&gt; Getting started with Hive <br />
&gt; Installing Hive <br />
&gt; Loading structured data into HDFS with Hive <br />
&gt; Basic Hive usage <br />
<br />
Chapter 19 <br />
&gt; Using MySql as the Hive metastore <br />
&gt; Combining Hive with MapReduce <br />
<br />
Chapter 20 <br />
&gt; Hive in practice: data statistics (1) <br />
&gt; Application design and table structure design <br />
<br />
Chapter 21 <br />
&gt; Hive in practice: data statistics (2) <br />
&gt; Implementing data loading and statistics&nbsp;<img src ="http://www.blogjava.net/paulwong/aggbug/393807.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-05 12:59 <a href="http://www.blogjava.net/paulwong/archive/2013/01/05/393807.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase的一些应用设计tip</title><link>http://www.blogjava.net/paulwong/archive/2013/01/02/393701.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 02 Jan 2013 15:09:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/02/393701.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/393701.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/02/393701.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/393701.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/393701.html</trackback:ping><description><![CDATA[1. When designing storage in HBase, keep its storage structure in mind: rowkey + columnFamily:columnQualifier + timestamp(version) + value = a KeyValue in HBase, and KeyValues are ordered by rowkey, then column key, then timestamp. A rowkey plus the column information locates one logical cell of an HBase table row.
<div><a href="http://www.blogjava.net/images/blogjava_net/changedi/Windows-Live-Writer/HBasetip_10C32/0XJJ%7B2%25~G~%5BG%5DJBPMW%7DYE~A_2.jpg"><img title="0XJJ{2%~G~[G]JBPMW}YE~A" border="0" alt="0XJJ{2%~G~[G]JBPMW}YE~A" src="http://www.blogjava.net/images/blogjava_net/changedi/Windows-Live-Writer/HBasetip_10C32/0XJJ%7B2%25~G~%5BG%5DJBPMW%7DYE~A_2.jpg" width="600" height="327" /></a>&nbsp;
<div><br />
</div>
<div>2. Going from the logical storage structure to the actual physical layout involves a fold step: everything under a column family is merged in sorted order, because HBase stores each column family in its own StoreFile(s).
<div><br />
</div>
<div>
3. If you think of an HBase query as a layered filtering process, then when designing storage the closer the design gets to a single KeyValue lookup, the better the performance. If complex business logic forces a query to pin down rowkey, column and timestamp, or, worse, uses HBase Filters to process values on the server side, overall performance will be very poor.&nbsp;</div>
<div><br />
</div>
<div>4. For table schema design HBase has two patterns, tall-narrow and flat-wide: the former has many rows and few columns, giving a tall, narrow table; the latter has few rows and many columns, giving a flat, wide table. Since HBase can only split at row boundaries, a flat-wide design means that when an unusually large row exceeds the file or region limit, the resulting compaction has to read the whole row into memory. The tall-narrow pattern is therefore strongly recommended: the structure is closer to plain KeyValue access and performs better.&nbsp;</div>
<div><br />
</div>
<div>5. An elegant row design is the partial row scan. Rowkeys are typically designed as &lt;key1&gt;-&lt;key2&gt;-&lt;key3&gt;&#8230;, where each key is a query condition and the keys are separated by some delimiter. To fetch everything for a given key1 without using a filter (which performs better), give that key a start and a stop value, e.g. key1 as the start row and key1+1 as the stop row; the scan then returns all values under key1. The same applies recursively, so every sub-key can be designed into the rowkey this way.&nbsp;</div>
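A minimal sketch of that partial row scan, using the classic HTable client API; the table name, family name, key values and the pre-existing conf object are placeholders, not anything prescribed by the post:<br />
<pre>
HTable table = new HTable(conf, "orders");          // placeholder table
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("20121130"));        // key1
scan.setStopRow(Bytes.toBytes("20121131"));         // "key1 + 1", exclusive upper bound
scan.addFamily(Bytes.toBytes("cf"));
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result r : scanner) {
        // every r has a rowkey starting with "20121130"
    }
} finally {
    scanner.close();
}
</pre>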
<div><br />
</div>
<div>6. For paged queries, the recommended design again avoids filters: emulate RDBMS-style paging with an offset and a limit on the scan. The flow is simply: position the start row, skip offset rows, read limit rows, then close the scan (see the sketch below).&nbsp;</div>
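A rough sketch of that offset/limit paging, assuming table, startRow, offset and limit already exist in the caller:<br />
<pre>
Scan scan = new Scan();
scan.setStartRow(startRow);
ResultScanner rs = table.getScanner(scan);
try {
    int skipped = 0, returned = 0;
    for (Result r : rs) {
        if (skipped < offset) { skipped++; continue; }   // skip the first "offset" rows
        // hand r back to the caller
        if (++returned >= limit) break;                   // stop after "limit" rows
    }
} finally {
    rs.close();
}
</pre>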
<div><br />
</div>
<div>7. For queries with a time range there are two designs. One places the time in one of the key positions; the drawback is that you must already know which dimension you want to query a time range for, and cannot query all dimensions directly by time. The other places the timestamp at the front of the key and scatters it with a hashcode or MD5; for real-time time-series data this scattering automatically spreads writes across regions and gives better write concurrency.&nbsp;</div>
<div><br />
</div>
<div>8. For balancing reads and writes, the figure below illustrates the key-design options: salting is equivalent to hashing, promoted means folding another dimension into the key, and random is an MD5-style key.</div>
<div>&nbsp;
<a href="http://www.blogjava.net/images/blogjava_net/changedi/Windows-Live-Writer/HBasetip_10C32/VN%7BYX%60@%5B2P9AQ%5B@(2U8N9%7B0_2.jpg" style="color: #006b95; text-decoration: none; "><img title="VN{YX`@[2P9AQ[@(2U8N9{0" border="0" alt="VN{YX`@[2P9AQ[@(2U8N9{0" src="http://www.blogjava.net/images/blogjava_net/changedi/Windows-Live-Writer/HBasetip_10C32/VN%7BYX%60@%5B2P9AQ%5B@(2U8N9%7B0_2.jpg" width="500" height="308" style="border-width: 0px; padding: 0px 0px 1px; background-image: none; display: inline;" /></a>
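A tiny sketch of the salting option (the bucket count and method name are made up): prefix the key with hash(key) % N so that monotonically increasing keys, such as timestamps, spread over N regions instead of hammering one; reads then have to fan out one scan per salt prefix.<br />
<pre>
import java.util.Arrays;

public class Salter {
    private static final int SALT_BUCKETS = 16;     // assumed bucket count

    public static byte[] salt(byte[] originalKey) {
        int bucket = (Arrays.hashCode(originalKey) & Integer.MAX_VALUE) % SALT_BUCKETS;
        byte[] salted = new byte[originalKey.length + 1];
        salted[0] = (byte) bucket;                   // 1-byte salt prefix
        System.arraycopy(originalKey, 0, salted, 1, originalKey.length);
        return salted;
    }
}
</pre>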
<div><br />
</div>
9. A more advanced design uses columns as something like an RDBMS secondary index. Once the rowkeys have grown to a certain scale, the ordered columns can be used to build an index-like structure. For example, one column family called data holds the data itself, its ColumnQualifier being an MD5-style index id and its value being the actual data; another column family called index stores that MD5, its ColumnQualifier being the real index field (a name, or any table field, so several are allowed) and its value being the MD5 of that index field. A query first finds the index id in the index family (different query conditions pick different index fields) and then uses it to locate the data in the data family; the two lookups together implement a truly complex conditional business query.</div>
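One possible reading of that layout on the write side, as a hedged sketch only; the data/index family names follow the text, while rowkey, name, dataBytes and md5Hex() are placeholders:<br />
<pre>
Put put = new Put(rowkey);
String indexId = md5Hex(name);                                                   // placeholder hash helper
put.add(Bytes.toBytes("data"), Bytes.toBytes(indexId), dataBytes);               // data CF: qualifier = index id
put.add(Bytes.toBytes("index"), Bytes.toBytes("name"), Bytes.toBytes(indexId));  // index CF: qualifier = indexed field
table.put(put);
// On read, first fetch index:name to obtain indexId, then fetch data:indexId.
</pre>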
<div><br />
</div>
<div>
10. There are other ways to implement secondary indexes,</div>
<div>for example:</div>
<div>1. client-side control, i.e. read everything back in one pass and do all the filtering in the client; the obvious advantage is the degree of control, the drawbacks are performance and the consistency guarantees;</div>
<div>2. Indexed-Transactional HBase, an open-source project that extends HBase with client-side and server-side additions implementing transactions and secondary indexes;</div>
<div>3. Indexed-HBase;</div>
<div>4. Coprocessor.&nbsp;</div>
<div><br />
</div>
<div>11. HBase can be integrated with search in several ways: 1. client-side control, as above; 2. Lucene; 3. HBasene; 4. Coprocessor.&nbsp;</div>
<div><br />
</div>
<div>12. Ways to give HBase transactions: 1. ITHBase; 2. ZooKeeper, using distributed locks.&nbsp;</div>
<div><br />
</div>
<div>13. Although it is called timestamp, it can hold any content at all, which makes it possible to store user-defined version information.
</div>
</div>
</div><img src ="http://www.blogjava.net/paulwong/aggbug/393701.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-02 23:09 <a href="http://www.blogjava.net/paulwong/archive/2013/01/02/393701.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBase性能优化方法总结</title><link>http://www.blogjava.net/paulwong/archive/2012/11/29/392232.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 29 Nov 2012 13:43:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/11/29/392232.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/392232.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/11/29/392232.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/392232.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/392232.html</trackback:ping><description><![CDATA[<br />
This post summarizes several common performance optimization techniques from the angle of HBase application design and development. System-configuration-level tuning of HBase is mostly out of scope here; for that, see Taobao engineer Ken Wu's blog.<br />
<br />
1. Table design<br />
1.1 Pre-Creating Regions<br />
By default, creating an HBase table creates a single region, so during a data import every HBase client writes to that one region until it is big enough to split. One way to speed up bulk writes is to pre-create a number of empty regions, so that incoming data is load-balanced across the cluster according to the region boundaries.<br />
<br />
For details on pre-splitting, see "Table Creation: Pre-Creating Regions". An example:<br />
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; ">
public static boolean createTable(HBaseAdmin admin, HTableDescriptor table, byte[][] splits)
        throws IOException {
    try {
        admin.createTable(table, splits);
        return true;
    } catch (TableExistsException e) {
        logger.info("table " + table.getNameAsString() + " already exists");
        // the table already exists
        return false;
    }
}

public static byte[][] getHexSplits(String startKey, String endKey, int numRegions) {
    byte[][] splits = new byte[numRegions - 1][];
    BigInteger lowestKey = new BigInteger(startKey, 16);
    BigInteger highestKey = new BigInteger(endKey, 16);
    BigInteger range = highestKey.subtract(lowestKey);
    BigInteger regionIncrement = range.divide(BigInteger.valueOf(numRegions));
    lowestKey = lowestKey.add(regionIncrement);
    for (int i = 0; i < numRegions - 1; i++) {
        BigInteger key = lowestKey.add(regionIncrement.multiply(BigInteger.valueOf(i)));
        byte[] b = String.format("%016x", key).getBytes();
        splits[i] = b;
    }
    return splits;
}</div>
<br />
1.2 Row Key<br />
In HBase the row key is used to retrieve records from a table; three access patterns are supported:<br />
<br />
access by a single row key, i.e. a get for one exact key;<br />
a scan over a row key range, i.e. setting startRowKey and endRowKey and scanning everything in between;<br />
a full table scan, i.e. scanning every row in the table.<br />
A row key can be any string (maximum length 64KB; in practice usually 10~100 bytes), is stored as a byte[] array, and is generally designed with a fixed length.<br />
<br />
Row keys are stored in lexicographic order, so make the most of that ordering when designing them: store data that is read together next to each other, and keep data that is likely to be accessed soon in the same neighborhood.<br />
<br />
For example, if the rows written most recently are the ones most likely to be read, consider making the timestamp part of the row key; because of the lexicographic ordering, use Long.MAX_VALUE - timestamp so that newly written data can be found quickly on read (a short illustration follows).<br />
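A minimal illustration of that reverse-timestamp trick, assuming a rowkey of the form userId_reverseTimestamp (the "user123" prefix is a placeholder):<br />
<pre>
long reverseTs = Long.MAX_VALUE - System.currentTimeMillis();
// zero-pad so that lexicographic order matches numeric order
byte[] rowkey = Bytes.toBytes(String.format("user123_%019d", reverseTs));
Put put = new Put(rowkey);
</pre>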
<br />
1.3 Column Family<br />
Do not define too many column families in one table. At the moment HBase does not handle tables with more than two or three column families well, because flushing one column family also triggers flushes of its neighboring families through the association effect, which in the end causes more I/O. If you are interested, run the test on your own HBase cluster and check this claim against the numbers you get.<br />
<br />
1.4 In Memory<br />
When creating a table, HColumnDescriptor.setInMemory(true) places the column family in the RegionServer cache so that reads hit the cache.<br />
<br />
1.5 Max Version<br />
When creating a table, HColumnDescriptor.setMaxVersions(int maxVersions) sets the maximum number of versions kept for the data; if only the latest version is ever needed, set setMaxVersions(1).<br />
<br />
1.6 Time To Live<br />
When creating a table, HColumnDescriptor.setTimeToLive(int timeToLive) sets the retention period of the data; expired data is deleted automatically. For example, to keep only the last two days of data, set setTimeToLive(2 * 24 * 60 * 60).<br />
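Putting 1.4-1.6 together, one possible table-creation sketch with the old HBaseAdmin API (table and family names, and the existing conf object, are placeholders):<br />
<pre>
HTableDescriptor desc = new HTableDescriptor("user_log");
HColumnDescriptor cf = new HColumnDescriptor("info");
cf.setInMemory(true);                  // 1.4: keep this family in the RegionServer cache
cf.setMaxVersions(1);                  // 1.5: keep only the newest version
cf.setTimeToLive(2 * 24 * 60 * 60);    // 1.6: expire data after two days (seconds)
desc.addFamily(cf);
new HBaseAdmin(conf).createTable(desc);
</pre>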
<br />
1.7 Compact &amp; Split<br />
In HBase, an update is first written to the WAL (the HLog) and to memory (the MemStore). Data in the MemStore is sorted; when the MemStore reaches a threshold, a new MemStore is created and the old one is appended to the flush queue, where a separate thread flushes it to disk as a StoreFile. At the same time the system records a redo point in ZooKeeper to indicate that all changes before this moment have been persisted (minor compact).<br />
<br />
A StoreFile is read-only and never modified once created, so updates in HBase are really a stream of appends. When the number of StoreFiles in a Store reaches a threshold they are merged (major compact), folding all changes to the same key into one large StoreFile; when a StoreFile grows past a size threshold it is split into two StoreFiles.<br />
<br />
Since updates keep being appended, serving a read means visiting all StoreFiles and the MemStore of the Store and merging them by row key; because both are sorted and StoreFiles carry in-memory indexes, the merge is usually reasonably fast.<br />
<br />
In practice, consider triggering a major compact manually when necessary so that changes to the same row key are merged into one large StoreFile. The StoreFile size can also be raised to make splits less frequent.<br />
<br />
2. Writing to tables<br />
2.1 Concurrent writes through multiple HTables<br />
Create several HTable client instances for writing to increase write throughput. An example:<br />
<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; ">
static final Configuration conf = HBaseConfiguration.create();
static final String table_log_name = "user_log";
wTableLog = new HTable[tableN];
for (int i = 0; i < tableN; i++) {
    wTableLog[i] = new HTable(conf, table_log_name);
    wTableLog[i].setWriteBufferSize(5 * 1024 * 1024); // 5MB
    wTableLog[i].setAutoFlush(false);
}</div>
2.2 HTable parameter settings<br />
2.2.1 Auto Flush<br />
Calling HTable.setAutoFlush(false) turns off the write client's auto flush, so that data is written to HBase in batches instead of one update per put: the actual write request is only sent to the HBase server once the client write buffer fills up. Auto flush is enabled by default.<br />
<br />
2.2.2 Write Buffer<br />
Calling HTable.setWriteBufferSize(writeBufferSize) sets the size of the client write buffer; if the new size is smaller than the data currently in the buffer, the buffer is flushed to the server first. writeBufferSize is in bytes and should be chosen according to how much data is actually being written.<br />
<br />
2.2.3 WAL Flag<br />
In HBase, when a client submits data to a RegionServer (Put/Delete), it first writes the WAL (Write Ahead Log, i.e. the HLog; all regions on a RegionServer share one HLog). Only after the WAL write succeeds is the MemStore written and the client told that the submission succeeded; if the WAL write fails, the client is told that the submission failed. The benefit is that data can be recovered after a RegionServer crash.<br />
<br />
So for data that is relatively unimportant, you can call Put.setWriteToWAL(false) or Delete.setWriteToWAL(false) to skip the WAL write and gain write performance.<br />
<br />
Note: think carefully before disabling the WAL, because if a RegionServer goes down, Put/Delete data written without the WAL cannot be recovered from it.<br />
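A small sketch of 2.2.3 (rowkey, family, qualifier and the table object are placeholders):<br />
<pre>
Put put = new Put(Bytes.toBytes("rowkey"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
put.setWriteToWAL(false);   // faster, but lost forever if the RegionServer crashes before the flush
table.put(put);
</pre>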
<br />
2.3 Batch writes<br />
HTable.put(Put) writes one record identified by its row key; HBase also provides HTable.put(List&lt;Put&gt;), which writes a whole list of rows in one batch. The benefit is that the batch needs only one network round trip, which can give a clear performance win when data has to arrive with low latency and the network round-trip time is high (see the sketch below).<br />
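A short batch-put sketch (all names are placeholders):<br />
<pre>
List<Put> puts = new ArrayList<Put>();
for (int i = 0; i < 1000; i++) {
    Put p = new Put(Bytes.toBytes("row-" + i));
    p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
    puts.add(p);
}
table.put(puts);   // one network round trip for the whole batch
</pre>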
<br />
2.4 Concurrent writes with multiple threads<br />
Open several HTable write threads on the client, each responsible for flushing one HTable object. Combined with a timed flush and the write buffer (writeBufferSize), this guarantees that when traffic is light the data is still flushed within a short time (e.g. within 1 second), and that when traffic is heavy the buffer is flushed as soon as it fills up. A concrete example:<br />
<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; ">
for (int i = 0; i < threadN; i++) {
    final int idx = i;   // the loop variable must be final to be used inside the anonymous class
    Thread th = new Thread() {
        public void run() {
            while (true) {
                try {
                    sleep(1000); // 1 second
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (wTableLog[idx]) {
                    try {
                        wTableLog[idx].flushCommits();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    };
    th.setDaemon(true);
    th.start();
}</div>
3. Reading from tables<br />
3.1 Concurrent reads through multiple HTables<br />
Create several HTable client instances for reading to increase read throughput. An example:<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; ">
static final Configuration conf = HBaseConfiguration.create();
static final String table_log_name = "user_log";
rTableLog = new HTable[tableN];
for (int i = 0; i < tableN; i++) {
    rTableLog[i] = new HTable(conf, table_log_name);
    rTableLog[i].setScannerCaching(50);
}</div>
<br />
3.2 HTable parameter settings<br />
3.2.1 Scanner Caching<br />
HTable.setScannerCaching(int scannerCaching) sets how many rows an HBase scanner fetches from the server at a time; the default is one row per trip. Setting this to a sensible value reduces the time spent in next() during a scan, at the cost of the client memory needed to hold the cached rows.<br />
<br />
3.2.2 Scan Attribute Selection<br />
Specify the column families you need on the scan to cut the amount of data sent over the network; otherwise a scan returns every column family of each row by default.<br />
<br />
3.2.3 Close ResultScanner<br />
After fetching data with a scan, remember to close the ResultScanner, otherwise the RegionServer may run into problems (the corresponding server-side resources cannot be released). The sketch below combines 3.2.1-3.2.3.<br />
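A combined sketch of the three settings above (the table object and family name are placeholders):<br />
<pre>
Scan scan = new Scan();
scan.setCaching(50);                        // 3.2.1: rows fetched per RPC instead of 1
scan.addFamily(Bytes.toBytes("cf"));        // 3.2.2: only return the family we need
ResultScanner rs = table.getScanner(scan);
try {
    for (Result r : rs) {
        // process r
    }
} finally {
    rs.close();                             // 3.2.3: always release the scanner
}
</pre>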
<br />
3.3 Batch reads<br />
HTable.get(Get) fetches one record by its row key; HBase also provides HTable.get(List&lt;Get&gt;), which fetches a whole list of rows in one batch with a single network round trip. This can give a clear performance win when data has to be read with low latency and the network round-trip time is high.<br />
<br />
3.4 Concurrent reads with multiple threads<br />
Open several HTable read threads on the client, each doing gets through its own HTable object. Below is a multi-threaded concurrent read example that fetches a shop's per-minute PV values over one day:<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; ">
public class DataReaderServer {
    // Fields such as fp (column family), cp (qualifier), tableMinutePV, tableN,
    // rand and logger are assumed to be declared elsewhere in the class.

    // Entry point: fetch a shop's per-minute PV values for one day
    public static ConcurrentHashMap<String, String> getUnitMinutePV(long uid, long startStamp, long endStamp) {
        long min = startStamp;
        int count = (int) ((endStamp - startStamp) / (60 * 1000));
        List<String> lst = new ArrayList<String>();
        for (int i = 0; i <= count; i++) {
            min = startStamp + i * 60 * 1000;
            lst.add(uid + "_" + min);
        }
        return parallelBatchMinutePV(lst);
    }

    // Query minute PV values with several concurrent threads
    private static ConcurrentHashMap<String, String> parallelBatchMinutePV(List<String> lstKeys) {
        ConcurrentHashMap<String, String> hashRet = new ConcurrentHashMap<String, String>();
        int parallel = 3;
        List<List<String>> lstBatchKeys = null;
        if (lstKeys.size() < parallel) {
            lstBatchKeys = new ArrayList<List<String>>(1);
            lstBatchKeys.add(lstKeys);
        } else {
            lstBatchKeys = new ArrayList<List<String>>(parallel);
            for (int i = 0; i < parallel; i++) {
                lstBatchKeys.add(new ArrayList<String>());
            }
            for (int i = 0; i < lstKeys.size(); i++) {
                lstBatchKeys.get(i % parallel).add(lstKeys.get(i));
            }
        }

        List<Future<ConcurrentHashMap<String, String>>> futures =
                new ArrayList<Future<ConcurrentHashMap<String, String>>>(5);

        ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
        builder.setNameFormat("ParallelBatchQuery");
        ThreadFactory factory = builder.build();
        ThreadPoolExecutor executor =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(lstBatchKeys.size(), factory);

        for (List<String> keys : lstBatchKeys) {
            Callable<ConcurrentHashMap<String, String>> callable = new BatchMinutePVCallable(keys);
            FutureTask<ConcurrentHashMap<String, String>> future =
                    (FutureTask<ConcurrentHashMap<String, String>>) executor.submit(callable);
            futures.add(future);
        }
        executor.shutdown();

        // Wait for all the tasks to finish
        try {
            boolean stillRunning = !executor.awaitTermination(5000000, TimeUnit.MILLISECONDS);
            if (stillRunning) {
                try {
                    executor.shutdownNow();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        } catch (InterruptedException e) {
            try {
                Thread.currentThread().interrupt();
            } catch (Exception e1) {
                e1.printStackTrace();
            }
        }

        // Look for any exception
        for (Future<ConcurrentHashMap<String, String>> f : futures) {
            try {
                if (f.get() != null) {
                    hashRet.putAll(f.get());
                }
            } catch (InterruptedException e) {
                try {
                    Thread.currentThread().interrupt();
                } catch (Exception e1) {
                    e1.printStackTrace();
                }
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }

        return hashRet;
    }

    // One thread: batch-get minute PV values
    protected static ConcurrentHashMap<String, String> getBatchMinutePV(List<String> lstKeys) {
        ConcurrentHashMap<String, String> hashRet = null;
        List<Get> lstGet = new ArrayList<Get>();
        String[] splitValue = null;
        for (String s : lstKeys) {
            splitValue = s.split("_");
            long uid = Long.parseLong(splitValue[0]);
            long min = Long.parseLong(splitValue[1]);
            byte[] key = new byte[16];
            Bytes.putLong(key, 0, uid);
            Bytes.putLong(key, 8, min);
            Get g = new Get(key);
            g.addFamily(fp);
            lstGet.add(g);
        }
        Result[] res = null;
        try {
            res = tableMinutePV[rand.nextInt(tableN)].get(lstGet);
        } catch (IOException e1) {
            logger.error("tableMinutePV exception, e=" + e1.getStackTrace());
        }

        if (res != null && res.length > 0) {
            hashRet = new ConcurrentHashMap<String, String>(res.length);
            for (Result re : res) {
                if (re != null && !re.isEmpty()) {
                    try {
                        byte[] key = re.getRow();
                        byte[] value = re.getValue(fp, cp);
                        if (key != null && value != null) {
                            hashRet.put(String.valueOf(Bytes.toLong(key, Bytes.SIZEOF_LONG)),
                                    String.valueOf(Bytes.toLong(value)));
                        }
                    } catch (Exception e2) {
                        logger.error(e2.getStackTrace());
                    }
                }
            }
        }

        return hashRet;
    }
}

// Callable used by the thread pool above
class BatchMinutePVCallable implements Callable<ConcurrentHashMap<String, String>> {
    private List<String> keys;

    public BatchMinutePVCallable(List<String> lstKeys) {
        this.keys = lstKeys;
    }

    public ConcurrentHashMap<String, String> call() throws Exception {
        return DataReaderServer.getBatchMinutePV(keys);
    }
}</div>
<br />
3.5 Caching query results<br />
For workloads that query HBase frequently, consider adding a cache in the application: when a new query arrives, look it up in the cache first and return directly on a hit; only on a miss query HBase, then cache the result in the application. For the replacement policy, a common choice such as LRU works well (a minimal sketch follows).<br />
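A minimal application-side LRU cache sketch based on LinkedHashMap's access-order mode; the capacity and the key/value types are assumptions, not anything the post prescribes:<br />
<pre>
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public QueryCache(int capacity) {
        super(16, 0.75f, true);             // true = access order, i.e. LRU behaviour
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;           // evict the least recently used entry
    }
}
</pre>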
<br />
3.6 Blockcache<br />
A RegionServer's memory is split into two parts: the Memstore, used mainly for writing, and the BlockCache, used mainly for reading.<br />
<br />
Writes go to the Memstore first; the RegionServer gives every region its own Memstore, and when a Memstore reaches 64MB it is flushed to disk. When the total size of all Memstores exceeds the limit (heapsize * hbase.regionserver.global.memstore.upperLimit * 0.9), a flush is forced, starting from the largest Memstore, until the total drops back below the limit.<br />
<br />
Reads check the Memstore first, then the BlockCache, and finally disk, and what is read from disk is placed into the BlockCache. Because the BlockCache uses an LRU policy, once it reaches its cap (heapsize * hfile.block.cache.size * 0.85) eviction starts and the oldest batch of data is dropped.<br />
<br />
A RegionServer has one BlockCache and N Memstores, and their combined share must not reach heapsize * 0.8, otherwise HBase will not start. The defaults are 0.2 for the BlockCache and 0.4 for the Memstore. For systems that care about read response time, the BlockCache can be made larger, for example BlockCache=0.4 and Memstore=0.39, to raise the cache hit rate (an hbase-site.xml sketch follows).<br />
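Expressed as an hbase-site.xml fragment, a read-heavy split like the one just described might look as follows; the values are illustrative only, not recommendations:<br />
<pre>
&lt;property&gt;
    &lt;name&gt;hfile.block.cache.size&lt;/name&gt;
    &lt;value&gt;0.4&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
    &lt;name&gt;hbase.regionserver.global.memstore.upperLimit&lt;/name&gt;
    &lt;value&gt;0.39&lt;/value&gt;
&lt;/property&gt;
</pre>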
<br />
For more on the BlockCache mechanism, see: "HBase's Block cache", "HBase's blockcache mechanism", and the notes on how caches are computed and used in HBase.<br />
<br />
4. Data computation<br />
4.1 Server-side computation<br />
Coprocessors run inside the HBase RegionServer; each region holds references to the coprocessor implementation classes associated with it, and those classes can be loaded either from local jars on the RegionServer's classpath or through the HDFS classloader.<br />
<br />
Several kinds of coprocessor are currently provided:<br />
<br />
Coprocessor: hooks into region management, e.g. region open/close/split/flush/compact;<br />
RegionObserver: hooks for observing table operations issued by clients, e.g. get/put/scan/delete;<br />
Endpoint: a command trigger that can run arbitrary functions on a region; one use case is column aggregation on the RegionServer side, for which sample code exists.<br />
The above is only a basic introduction to coprocessors. I have no hands-on experience with them, so I cannot vouch for their usability or performance. Interested readers are welcome to try them out and discuss.<br />
<br />
4.2 Write-side computation<br />
4.2.1 Counting<br />
HBase itself can be seen as a horizontally scalable key-value store, but its built-in computation ability is limited (coprocessors can provide some server-side computation), so in practice the computation is often done on the write side or the read side, and only the final result is returned to the caller. Two simple examples:<br />
<br />
PV counting: accumulate counters in memory on the write side to keep the PV values up to date, and for durability sync the PV results to HBase periodically (e.g. every second); queries then see PV results that lag by at most about one second.<br />
Per-minute PV counting: combined with the method above, every minute write the current accumulated PV value to HBase under a new rowkey built as rowkey + minute; on the query side, scan the accumulated PV values up to each minute of the day, then subtract each pair of consecutive minutes to get the PV within each minute, which finally yields the per-minute PV values for the whole day.<br />
<br />
4.2.2 Deduplication<br />
UV computation is an example of deduplication. There are two cases:<br />
<br />
If memory can hold them, keep a hash table of all UV identifiers seen so far; when a new identifier arrives, a quick hash lookup decides whether it is a new UV, and if so the UV count is incremented by 1, otherwise it stays unchanged. For durability, or to serve a query interface, the UV result can again be synced to HBase periodically (e.g. every second).<br />
If memory cannot hold them, consider a Bloom Filter to keep memory usage as low as possible. Besides UV computation, checking whether a URL has been seen before is another typical application.<br />
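A sketch of the Bloom-filter variant, using Guava's BloomFilter as one possible implementation (Guava, the sizing numbers and visitorId are assumptions, not something the post names):<br />
<pre>
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.Charset;

BloomFilter<CharSequence> seen =
        BloomFilter.create(Funnels.stringFunnel(Charset.forName("UTF-8")), 10000000, 0.01);
long uv = 0;
// for each incoming visitor id:
if (!seen.mightContain(visitorId)) {   // definitely new: a Bloom filter has no false negatives
    seen.put(visitorId);
    uv++;                              // may undercount slightly because of false positives
}
</pre>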
<br />
4.3 Read-side computation<br />
If the response-time requirement is strict (say a single HTTP request must return within milliseconds), I would not put much complex computation logic on the read side; keep its job as simple as possible: read the data from the HBase RegionServer (via scan or get), assemble it according to the data format, and return it straight to the front end. Of course, if the latency requirement is looser, or the business calls for it, some computation logic on the read side is fine too.<br />
<br />
5. Summary<br />
As a key-value store, HBase is not a cure-all; it has its own particular characteristics. So when building applications on it we usually have to optimize and improve on several fronts (table design, reads, writes, data computation, and so on), sometimes also tune HBase at the system-configuration level, and occasionally even optimize HBase itself; these belong to different layers.<br />
<br />
In short, when optimizing a system, first locate the bottleneck that actually limits your program's performance, then optimize that spot specifically. If the result meets your expectations, stop; otherwise look for the next bottleneck and start the next round of optimization, until the performance requirement is met.<br />
<br />
以上就是从项目开发中总结的一点经验，如有不对之处，欢迎大家不吝赐教。&nbsp;<img src ="http://www.blogjava.net/paulwong/aggbug/392232.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-11-29 21:43 <a href="http://www.blogjava.net/paulwong/archive/2012/11/29/392232.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Submitting a Hadoop MapReduce job to a remote JobTracker</title><link>http://www.blogjava.net/paulwong/archive/2012/10/03/388988.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 03 Oct 2012 07:06:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/10/03/388988.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/388988.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/10/03/388988.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/388988.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/388988.html</trackback:ping><description><![CDATA[
<div class="entry-meta"><span class="meta-prep meta-prep-author"><br />Posted on</span> <a title="10:03 pm" href="http://pcbje.com/2012/08/submitting-hadoop-mapreduce-jobs-to-a-remote-jobtracker/" rel="bookmark"><span class="entry-date">August 31, 2012</span></a> <span class="meta-sep">by</span> <span class="author vcard"><a class="url fn n" title="View all posts by pcbje" href="http://pcbje.com/author/pcbje/">pcbje</a><br /></span></div><div class="entry-content">While messing around with MapReduce code, I&#8217;ve found it to be a bit tedious having to generate the jar file, copy it to the machine running the JobTracker, and then run the job every time it has been altered. I should be able to run my jobs directly from my development environment, as illustrated in the figure below. This post explains how I&#8217;ve &#8220;solved&#8221; this problem. It may also help when integrating Hadoop with other applications. I by no means claim that this is the proper way to do it, but it does the trick for me.<br /><div style="width: 555px;" id="attachment_134" class="wp-caption aligncenter"><a href="http://pcbje.com/wp-content/uploads/2012/08/hadoop-setup.png"><img class="size-full wp-image-134" title="hadoop-setup" border="0" alt="" src="http://pcbje.com/wp-content/uploads/2012/08/hadoop-setup.png" width="545" height="155" /></a><p class="wp-caption-text">My Hadoop infrastructure</p></div><br />I assume that you have a (single-node) Hadoop 1.0.3 cluster properly installed on a dedicated or virtual machine. In this example, the JobTracker and HDFS reside on IP address 192.168.102.131. Let&#8217;s start out with a simple job that does nothing except start up and terminate:<br />
<pre>
package com.pcbje.hadoopjobs;

import java.io.IOException;
import java.util.Date;
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapred.Reducer;

public class MyFirstJob {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();

        JobConf job = new JobConf(config);
        job.setJarByClass(MyFirstJob.class);
        job.setJobName("My first job");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(MyFirstJob.MyFirstMapper.class);
        job.setReducerClass(MyFirstJob.MyFirstReducer.class);

        JobClient.runJob(job);
    }

    private static class MyFirstMapper extends MapReduceBase implements Mapper {
        public void map(LongWritable key, Text value, OutputCollector output, Reporter reporter) throws IOException {
        }
    }

    private static class MyFirstReducer extends MapReduceBase implements Reducer {
        public void reduce(Text key, Iterator values, OutputCollector output, Reporter reporter) throws IOException {
        }
    }
}
</pre>
Now, most of the examples you find online typically show a local-mode setup where all the components of Hadoop (HDFS, JobTracker, etc.) run on the same machine. A typical mapred-site.xml configuration might look like:<br />
<pre>
&lt;configuration&gt;
    &lt;property&gt;
        &lt;name&gt;mapred.job.tracker&lt;/name&gt;
        &lt;value&gt;localhost:9001&lt;/value&gt;
    &lt;/property&gt;
&lt;/configuration&gt;
</pre>
As far as I can tell, such a configuration requires that jobs are submitted from the same node as the JobTracker. This is what I want to avoid. The first thing to do is to change the fs.default.name attribute to the IP address of my NameNode:<br />
<pre>
Configuration conf = new Configuration();
conf.set("fs.default.name", "192.168.102.131:9000");
</pre>
And in core-site.xml:<br />
<pre>
&lt;configuration&gt;
    &lt;property&gt;
        &lt;name&gt;fs.default.name&lt;/name&gt;
        &lt;value&gt;192.168.102.131:9000&lt;/value&gt;
    &lt;/property&gt;
&lt;/configuration&gt;
</pre>
This tells the job to connect to the HDFS residing on a different machine. Running the job with this configuration will read from and write to the remote HDFS correctly, but the JobTracker at 192.168.102.131:9001 will not notice it. This means that the admin panel at 192.168.102.131:50030 won&#8217;t list the job either. So the next thing to do is to tell the job configuration to submit the job to the appropriate JobTracker like this:<br />
<pre>
config.set("mapred.job.tracker", "192.168.102.131:9001");
</pre>
You also need to change mapred-site.xml to allow external connections; this can be done by replacing &#8220;localhost&#8221; with the JobTracker&#8217;s IP address:<br />
<pre>
&lt;configuration&gt;
    &lt;property&gt;
        &lt;name&gt;mapred.job.tracker&lt;/name&gt;
        &lt;value&gt;192.168.102.131:9001&lt;/value&gt;
    &lt;/property&gt;
&lt;/configuration&gt;
</pre>
Restart Hadoop. Upon trying to run your job, you may get an exception like this:<pre>SEVERE: PriviledgedActionException as:[user] cause:org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied: user=[user], access=WRITE, inode="mapred":root:supergroup:rwxr-xr-x
</pre>If you do, this may be solved by adding the following to mapred-site.xml:<br /><pre>&lt;configuration&gt;
    &lt;property&gt;
        &lt;name&gt;mapreduce.jobtracker.staging.root.dir&lt;/name&gt;
        &lt;value&gt;/user&lt;/value&gt;
    &lt;/property&gt;
&lt;/configuration&gt;
</pre><br />And then execute the following commands:<pre>stop-mapred.sh
start-mapred.sh
</pre>When you now submit your job, it should be picked up by the admin page over at :50030. However, it will most probably fail, and the log will tell you something like:<pre>java.lang.ClassNotFoundException: com.pcbje.hadoopjobs.MyFirstJob$MyFirstMapper
</pre>In order to fix this, you have to ensure that all dependencies of the submitted job are available to the JobTracker. This can be achieved by exporting the project as a runnable jar and then executing something like:&nbsp; <pre>java -jar myfirstjob-jar-with-dependencies.jar /input/path /output/path
</pre>If your user has the appropriate permissions to the input and output directories on HDFS, the job should now run successfully. This can be verified in the console and on the administration panel.<br /><br />Manually exporting runnable jars requires a lot of clicks in IDEs such as Eclipse. If you are using Maven, you can tell it to build the jar with its dependencies (see <a title="How can I create an executable jar with dependencies using Maven?" href="http://stackoverflow.com/questions/574594/how-can-i-create-an-executable-jar-with-dependencies-using-maven" target="_blank">this answer</a> for details), which makes the process a whole lot easier. Finally, to make it even easier, place a tiny bash script in the same folder as pom.xml for building the Maven project and executing the jar:<pre>#!/bin/sh
mvn assembly:assembly
java -jar $1 $2 $3
</pre>After making the script executable, you can build and submit the job with the following command:<pre>./build-and-run-job target/myfirstjob-jar-with-dependencies.jar /input/path </pre></div> 
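<br />For reference, the settings above can be pulled together in a single driver class. This is only a minimal sketch, not the original post's code; the class name, job name and argument handling are illustrative assumptions:<br /><pre>import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote HDFS and JobTracker instead of localhost.
        conf.set("fs.default.name", "192.168.102.131:9000");
        conf.set("mapred.job.tracker", "192.168.102.131:9001");

        Job job = new Job(conf, "my-first-job");
        // Ship the jar containing the job classes so the remote TaskTrackers can load them;
        // this is what avoids the ClassNotFoundException described above.
        job.setJarByClass(RemoteJobDriver.class);
        // job.setMapperClass(...); job.setReducerClass(...);  // wire in your own classes here

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</pre>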
 
 
<img src ="http://www.blogjava.net/paulwong/aggbug/388988.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-10-03 15:06 <a href="http://www.blogjava.net/paulwong/archive/2012/10/03/388988.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE的MAPREDUCE任务运行异常解决办法，无需CYGWIN，纯WINDOWS环境</title><link>http://www.blogjava.net/paulwong/archive/2012/10/03/388977.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 02 Oct 2012 18:18:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/10/03/388977.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/388977.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/10/03/388977.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/388977.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/388977.html</trackback:ping><description><![CDATA[ 
If you run an HBase MapReduce job from Eclipse on Windows, it will fail with an exception. By default the MapReduce task runs locally, and because files are created and given permissions the UNIX way, the job reports an error like:<br /><br /><pre>java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "ls": CreateProcess error=2, 
</pre><br /><br />The solution is to send the job to a remote host, usually a Linux machine, by adding the following to hbase-site.xml:<br /><br /><pre>&lt;property&gt;
    &lt;name&gt;mapred.job.tracker&lt;/name&gt;
    &lt;value&gt;master:9001&lt;/value&gt;
&lt;/property&gt;
</pre><br />At the same time, HDFS permission checking has to be switched off:<br /><br /><pre>&lt;property&gt;
    &lt;name&gt;dfs.permissions&lt;/name&gt;
    &lt;value&gt;false&lt;/value&gt;
&lt;/property&gt;
</pre><br /><br />Also, because the task runs on a remote host, custom classes such as the Mapper/Reducer have to be packaged into a jar file and uploaded. For the details see:<br />Hadoop作业提交分析（五）<a href="http://www.cnblogs.com/spork/archive/2010/04/21/1717592.html" target="_blank">http://www.cnblogs.com/spork/archive/2010/04/21/1717592.html</a><br /> 
<br /><br />After several days of digging it became clear that the Configuration is simply the job's configuration: the remote JobTracker builds the job from it and executes it. Since the remote host does not have the custom MapReduce classes, they have to be packed into a jar and shipped to it, but this does not have to be done by hand every time; it can be set in code:<br /><br /><pre>conf.set("tmpjars", "d:/aaa.jar");
</pre><br /><br />Note also that on Windows the path separator is &#8220;;&#8221;, so the generated jar list is joined with &#8220;;&#8221;, which the remote Linux host cannot parse. Change it with:<br /><br /><pre>System.setProperty("path.separator", ":");
</pre><br /><br />Reference articles:<br /><a href="http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html" target="_blank">http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html</a><br /><br /><br />使用hadoop eclipse plugin提交Job并添加多个第三方jar（完美版）<br /><a href="http://heipark.iteye.com/blog/1171923" target="_blank">http://heipark.iteye.com/blog/1171923</a>&nbsp;
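<br />Putting these pieces together, a rough sketch (not from the original post) of what the client code might look like when submitting from a Windows machine; the class name is made up and the jar path is the same illustrative one used above:<br /><pre>import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WindowsSubmitExample {
    public static void main(String[] args) throws Exception {
        // On Windows the JVM joins classpath-style lists with ";", which the
        // remote Linux JobTracker cannot parse, so switch to ":" first.
        System.setProperty("path.separator", ":");

        Configuration conf = HBaseConfiguration.create();
        conf.set("mapred.job.tracker", "master:9001"); // send the job to the remote JobTracker
        conf.set("tmpjars", "d:/aaa.jar");             // ship the jar holding the custom Mapper/Reducer classes

        // ... build and submit the HBase MapReduce job with this conf ...
    }
}
</pre>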
 
 <img src ="http://www.blogjava.net/paulwong/aggbug/388977.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-10-03 02:18 <a href="http://www.blogjava.net/paulwong/archive/2012/10/03/388977.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HADOOP1.0.3+HBASE0.94.1伪单机环境配置实录</title><link>http://www.blogjava.net/paulwong/archive/2012/10/01/388930.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 01 Oct 2012 14:15:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/10/01/388930.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/388930.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/10/01/388930.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/388930.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/388930.html</trackback:ping><description><![CDATA[1.在host中加入master 127.0.0.1&nbsp;
<div><br />
</div>
<div>2. Set up passwordless SSH login&nbsp;</div>
<div><br />
</div>
<div>3. Hadoop configuration files&nbsp;</div>
<div><br />
</div>
<div>core-site.xml&nbsp;</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->&nbsp; &nbsp;<span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
<span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
<br />
<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;Put&nbsp;site-specific&nbsp;property&nbsp;overrides&nbsp;in&nbsp;this&nbsp;file.&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hadoop.tmp.dir<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>A&nbsp;base&nbsp;for&nbsp;other&nbsp;temporary&nbsp;directories.<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>fs.default.name<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>hdfs://master:9000<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>The&nbsp;name&nbsp;of&nbsp;the&nbsp;default&nbsp;file&nbsp;system.&nbsp;&nbsp;A&nbsp;URI&nbsp;whose<br />
&nbsp;&nbsp;scheme&nbsp;and&nbsp;authority&nbsp;determine&nbsp;the&nbsp;FileSystem&nbsp;implementation.&nbsp;&nbsp;The<br />
&nbsp;&nbsp;uri's&nbsp;scheme&nbsp;determines&nbsp;the&nbsp;config&nbsp;property&nbsp;(fs.SCHEME.impl)&nbsp;naming<br />
&nbsp;&nbsp;the&nbsp;FileSystem&nbsp;implementation&nbsp;class.&nbsp;&nbsp;The&nbsp;uri's&nbsp;authority&nbsp;is&nbsp;used&nbsp;to<br />
&nbsp;&nbsp;determine&nbsp;the&nbsp;host,&nbsp;port,&nbsp;etc.&nbsp;for&nbsp;a&nbsp;filesystem.<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span></div>
</div>
<div><br />
</div>
<div>hdfs-site.xml&nbsp;</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
--><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
<span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
<br />
<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;Put&nbsp;site-specific&nbsp;property&nbsp;overrides&nbsp;in&nbsp;this&nbsp;file.&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>dfs.replication<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>Default&nbsp;block&nbsp;replication.<br />
&nbsp;&nbsp;The&nbsp;actual&nbsp;number&nbsp;of&nbsp;replications&nbsp;can&nbsp;be&nbsp;specified&nbsp;when&nbsp;the&nbsp;file&nbsp;is&nbsp;created.<br />
&nbsp;&nbsp;The&nbsp;default&nbsp;is&nbsp;used&nbsp;if&nbsp;replication&nbsp;is&nbsp;not&nbsp;specified&nbsp;in&nbsp;create&nbsp;time.<br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; "><br />
&nbsp;&nbsp;&lt;property&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;name&gt;dfs.name.dir&lt;/name&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;value&gt;/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data/hdfs-data-name&lt;/value&gt;<br />
&nbsp;&nbsp;&lt;/property&gt;<br />
<br />
&nbsp;&nbsp;&lt;property&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;name&gt;dfs.data.dir&lt;/name&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;value&gt;/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data/hdfs-data&lt;/value&gt;<br />
&nbsp;&nbsp;&lt;/property&gt;<br />
</span><span style="color: #008000; ">--&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
</div>
</div>
<div><br />
</div>
<div>mapred-site.xml&nbsp;</div>
<div><br />
</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
--><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
<span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
<br />
<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;Put&nbsp;site-specific&nbsp;property&nbsp;overrides&nbsp;in&nbsp;this&nbsp;file.&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.job.tracker<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>master:9001<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>The&nbsp;host&nbsp;and&nbsp;port&nbsp;that&nbsp;the&nbsp;MapReduce&nbsp;job&nbsp;tracker&nbsp;runs<br />
&nbsp;&nbsp;at.&nbsp;If&nbsp;"local",&nbsp;then&nbsp;jobs&nbsp;are&nbsp;run&nbsp;in-process&nbsp;as&nbsp;a&nbsp;single&nbsp;map<br />
&nbsp;&nbsp;and&nbsp;reduce&nbsp;task.<br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.tasktracker.tasks.maximum<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>8<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>The&nbsp;maximum&nbsp;number&nbsp;of&nbsp;tasks&nbsp;that&nbsp;will&nbsp;be&nbsp;run&nbsp;simultaneously&nbsp;by&nbsp;a<br />
task&nbsp;tracker<br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span></div>
</div>
<div><br />
</div>
<div>masters/slaves&nbsp;</div>
<div>master&nbsp;</div>
<div><br />
</div>
<div>4. Format the namenode&nbsp;</div>
<div><br />
</div>
<div>5. Start Hadoop&nbsp;</div>
<div><br />
</div>
<div>6. HBase configuration file&nbsp;</div>
<div><br />
</div>
<div>hbase-site.xml&nbsp;</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
--><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
<span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; "><br />
/**<br />
&nbsp;*&nbsp;Copyright&nbsp;2010&nbsp;The&nbsp;Apache&nbsp;Software&nbsp;Foundation<br />
&nbsp;*<br />
&nbsp;*&nbsp;Licensed&nbsp;to&nbsp;the&nbsp;Apache&nbsp;Software&nbsp;Foundation&nbsp;(ASF)&nbsp;under&nbsp;one<br />
&nbsp;*&nbsp;or&nbsp;more&nbsp;contributor&nbsp;license&nbsp;agreements.&nbsp;&nbsp;See&nbsp;the&nbsp;NOTICE&nbsp;file<br />
&nbsp;*&nbsp;distributed&nbsp;with&nbsp;this&nbsp;work&nbsp;for&nbsp;additional&nbsp;information<br />
&nbsp;*&nbsp;regarding&nbsp;copyright&nbsp;ownership.&nbsp;&nbsp;The&nbsp;ASF&nbsp;licenses&nbsp;this&nbsp;file<br />
&nbsp;*&nbsp;to&nbsp;you&nbsp;under&nbsp;the&nbsp;Apache&nbsp;License,&nbsp;Version&nbsp;2.0&nbsp;(the<br />
&nbsp;*&nbsp;"License");&nbsp;you&nbsp;may&nbsp;not&nbsp;use&nbsp;this&nbsp;file&nbsp;except&nbsp;in&nbsp;compliance<br />
&nbsp;*&nbsp;with&nbsp;the&nbsp;License.&nbsp;&nbsp;You&nbsp;may&nbsp;obtain&nbsp;a&nbsp;copy&nbsp;of&nbsp;the&nbsp;License&nbsp;at<br />
&nbsp;*<br />
&nbsp;*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;http://www.apache.org/licenses/LICENSE-2.0<br />
&nbsp;*<br />
&nbsp;*&nbsp;Unless&nbsp;required&nbsp;by&nbsp;applicable&nbsp;law&nbsp;or&nbsp;agreed&nbsp;to&nbsp;in&nbsp;writing,&nbsp;software<br />
&nbsp;*&nbsp;distributed&nbsp;under&nbsp;the&nbsp;License&nbsp;is&nbsp;distributed&nbsp;on&nbsp;an&nbsp;"AS&nbsp;IS"&nbsp;BASIS,<br />
&nbsp;*&nbsp;WITHOUT&nbsp;WARRANTIES&nbsp;OR&nbsp;CONDITIONS&nbsp;OF&nbsp;ANY&nbsp;KIND,&nbsp;either&nbsp;express&nbsp;or&nbsp;implied.<br />
&nbsp;*&nbsp;See&nbsp;the&nbsp;License&nbsp;for&nbsp;the&nbsp;specific&nbsp;language&nbsp;governing&nbsp;permissions&nbsp;and<br />
&nbsp;*&nbsp;limitations&nbsp;under&nbsp;the&nbsp;License.<br />
&nbsp;*/<br />
<br />
</span><span style="color: #008000; ">--&gt;</span><br />
<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.rootdir<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>hdfs://master:9000/hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.cluster.distributed<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>true<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.zookeeper.quorum<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>localhost<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">单机配这个</span><span style="color: #008000; ">--&gt;</span><br />
&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
<br />
<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
</div>
</div>
<div><br />
</div>
<div>
7. Start HBase
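<br /><br />Once HBase is up, a quick sanity check can be run from Java. This is a minimal sketch, not part of the original notes; the table name and column family are illustrative assumptions:<br /><pre>import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Matches the hbase-site.xml above: ZooKeeper runs on this machine.
        conf.set("hbase.zookeeper.quorum", "localhost");

        HBaseAdmin admin = new HBaseAdmin(conf);
        if (!admin.tableExists("smoke_test")) {
            HTableDescriptor desc = new HTableDescriptor("smoke_test");
            desc.addFamily(new HColumnDescriptor("cf"));
            admin.createTable(desc);
        }
        System.out.println("Tables: " + admin.listTables().length);
        admin.close();
    }
}
</pre>If the quorum setting matches the configuration above, the program should print the table count without errors.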
</div><img src ="http://www.blogjava.net/paulwong/aggbug/388930.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-10-01 22:15 <a href="http://www.blogjava.net/paulwong/archive/2012/10/01/388930.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Hadoop集群配置</title><link>http://www.blogjava.net/paulwong/archive/2012/09/21/388299.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 21 Sep 2012 14:45:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/09/21/388299.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/388299.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/09/21/388299.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/388299.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/388299.html</trackback:ping><description><![CDATA[
step1: Install the JDK <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.1 sudo sh jdk-6u10-linux-i586.bin<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.2 sudo gedit /etc/environment<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JAVA_HOME=/home/linkin/Java/jdk1.6.0_23 <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JRE_HOME=/home/linkin/Java/jdk1.6.0_23/jre <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1.3 sudo gedit /etc/profile<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Add the following lines before "umask 022":<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JAVA_HOME=/home/linkin/Java/jdk1.6.0_23 <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JRE_HOME=/home/linkin/Java/jdk1.6.0_23/jre <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin <br /> <br />Change the time zone:<br />cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime<br />Install NTP:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;yum install ntp<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;After installing, run<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ntpdate cn.pool.ntp.org<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;to synchronise the clock.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;To synchronise automatically at boot, add the following at the bottom of /etc/rc.d/rc.local:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ntpdate cn.pool.ntp.org<br /><br />Disable IPv6<br />Append the following to /etc/sysctl.conf:<br />net.ipv6.conf.all.disable_ipv6 = 1<br /><span id="line-89" class="anchor"></span>net.ipv6.conf.default.disable_ipv6 = 1<br />Reboot the server.<br /><br />Remove any IPv6 DNS servers.<br /><br />step2: Passwordless SSH login<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.1 On the master host: linkin@master:~$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.2 linkin@master:~$ cat ~/.ssh/id_dsa.pub &gt;&gt; ~/.ssh/authorized_keys (append id_dsa.pub to authorized_keys)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.3 linkin@master:~/.ssh$ scp id_dsa.pub linkin@192.168.149.2:/home/linkin<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.4 Log in to the linkin host and run: $ cat id_dsa.pub &gt;&gt; .ssh/authorized_keys<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong style="color: rgb(255, 102, 0);">authorized_keys must have permission 600.</strong> chmod 600 .ssh/authorized_keys<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.5 Repeat the same steps on the DataNode so the machines can log in to each other without a password.<br /> <br />step3: Install Hadoop<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.1 Set hadoop-env.sh<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JAVA_HOME=/home/linkin/jdk1.6.0_10<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.2 Configure core-site.xml<br /><pre>&lt;property&gt;
    &lt;name&gt;hadoop.tmp.dir&lt;/name&gt;
    &lt;value&gt;/home/linkin/hadoop-0.20.2/tmp&lt;/value&gt;
    &lt;description&gt;A base for other temporary directories.&lt;/description&gt;
&lt;/property&gt;

&lt;property&gt;
    &lt;name&gt;fs.default.name&lt;/name&gt;
    &lt;value&gt;hdfs://master:9000&lt;/value&gt;  &lt;!-- use the hostname here --&gt;
&lt;/property&gt;
</pre><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.3 Configure hdfs-site.xml<br /><pre>&lt;property&gt;
    &lt;name&gt;dfs.replication&lt;/name&gt;
    &lt;value&gt;1&lt;/value&gt;
&lt;/property&gt;
</pre><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.4 Configure mapred-site.xml<br /><pre>&lt;property&gt;
    &lt;name&gt;mapred.job.tracker&lt;/name&gt;
    &lt;value&gt;master:9001&lt;/value&gt;  &lt;!-- use the hostname here --&gt;
&lt;/property&gt;
</pre><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.5 Configure masters and slaves<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; masters: master (hostname); slaves: linkin (hostname). These two files do not need to be copied to the other machines; keeping them on the master is enough.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.6 Configure the hosts file<br />&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: red;">127.0.0.1 localhost (do not put anything else here, such as a machine name, or the HBase master name will become localhost)</span><br />&nbsp;&nbsp;&nbsp;&nbsp; 192.168.149.7 master<br />&nbsp;&nbsp;&nbsp;&nbsp; 192.168.149.2 linkin<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.7 Configure profile: append the following at the end and run source /etc/profile to make it take effect<br />&nbsp;&nbsp;&nbsp; export JAVA_HOME=/home/linkin/jdk1.6.0_10<br />&nbsp;&nbsp;&nbsp; export JRE_HOME=/home/linkin/jdk1.6.0_10/jre<br />&nbsp;&nbsp;&nbsp; export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH<br />&nbsp;&nbsp;&nbsp; export PATH=$JAVA_HOME/bin:$PATH<br />&nbsp;&nbsp;&nbsp; Hadoop settings:<br />&nbsp;&nbsp;&nbsp; export HADOOP_HOME=/home/linkin/hadoop-0.20.2<br />&nbsp;&nbsp;&nbsp; export PATH=$HADOOP_HOME/bin:$PATH<br />&nbsp;&nbsp;&nbsp; //export PATH=$PATH:$HIVE_HOME/bin<br />&nbsp;&nbsp;&nbsp;&nbsp; 3.8 Copy hadoop-0.20.2 to the corresponding directory on the other machines. Also copy /etc/profile and /etc/hosts to them; profile has to be sourced again to take effect.<br />step4: Format HDFS<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; bin/hadoop namenode -format<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; bin/hadoop dfs -ls<br />step5: Start Hadoop<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; bin/start-all.sh<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Check HDFS at http://192.168.149.7:50070<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Check job status at http://192.168.149.7:50030/jobtracker.jsp<br /><br />Reference:<br /><a href="http://wiki.ubuntu.org.cn/%E5%88%A9%E7%94%A8Cloudera%E5%AE%9E%E7%8E%B0Hadoop" target="_blank">http://wiki.ubuntu.org.cn/%E5%88%A9%E7%94%A8Cloudera%E5%AE%9E%E7%8E%B0Hadoop</a> 
 
<img src ="http://www.blogjava.net/paulwong/aggbug/388299.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-09-21 22:45 <a href="http://www.blogjava.net/paulwong/archive/2012/09/21/388299.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE基本概念</title><link>http://www.blogjava.net/paulwong/archive/2012/09/09/387318.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 08 Sep 2012 16:38:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2012/09/09/387318.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/387318.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2012/09/09/387318.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/387318.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/387318.html</trackback:ping><description><![CDATA[象博客这些普通数据，有标题，内容，作者等信息，要进行保存时，如果是关系数据库，数据的属性，如标题、内容等只保存一次，建表时，实际数据则每增加一篇博客，就增加一条数据，这种方式进行查询时必须通过SQL语句，还需要知道栏位名称。<br />
<br />
With a database such as HBASE, the attributes are stored together with the data: every time a blog post is added, the data and its attribute names go into the database together as KEY-VALUE pairs, so one record consists of a number of key-value pairs. At query time all the data is loaded and filtered with a MAPREDUCE-style computation, so no SQL statement is needed; this is why it is also called a NO-SQL database.<br />
<br />
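To make the key-value idea concrete, here is a small Java sketch (not from the original post); the table name "blog", the column family "info" and the row key are illustrative assumptions:<br /><pre>import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class BlogKeyValueExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "blog"); // assumes a table "blog" with family "info"

        // Each cell is stored as a key-value pair of (rowkey, family:qualifier, value).
        Put put = new Put(Bytes.toBytes("post-001"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("title"), Bytes.toBytes("Hello HBase"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("author"), Bytes.toBytes("paulwong"));
        table.put(put);

        // Reading the row back returns the same key-value pairs; no SQL is involved.
        Result result = table.get(new Get(Bytes.toBytes("post-001")));
        System.out.println(Bytes.toString(
                result.getValue(Bytes.toBytes("info"), Bytes.toBytes("title"))));
        table.close();
    }
}
</pre><br />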
HBase 官方文档中文版<br />
<a href="http://abloz.com/hbase/book.html" target="_blank">http://abloz.com/hbase/book.html</a><br />
<br />
HBase MapReduce实例分析<br />
<a href="http://www.taobaotesting.com/blogs/qa?bid=13914" target="_blank">http://www.taobaotesting.com/blogs/qa?bid=13914</a><br />
<br />
业务开发测试HBase之旅一：HTable基本概念<br />
<a href="http://www.taobaotesting.com/blogs/qa?bid=13850" target="_blank">http://www.taobaotesting.com/blogs/qa?bid=13850</a><br />
<br />
业务开发测试HBase之旅五：HBase MapReduce测试实战<br />
<a href="http://www.taobaotesting.com/blogs/qa?bid=13939" target="_blank">http://www.taobaotesting.com/blogs/qa?bid=13939</a> <br />
<br />
HBase 线上问题分析小记<br />
<a href="http://www.taobaotesting.com/blogs/2158" target="_blank">http://www.taobaotesting.com/blogs/2158</a><br />
<br />
Hadoop HBase 单机环境简单配置教程<br />
<a href="http://blog.nosqlfan.com/html/311.html" target="_blank">http://blog.nosqlfan.com/html/311.html</a><br />
<br />
hadoop和hbase分布式配置及整合eclipse开发<br />
<a href="http://wenku.baidu.com/view/8712a661caaedd3383c4d392.html">http://wenku.baidu.com/view/8712a661caaedd3383c4d392.html</a> <br />
<br />
Java操作Hbase进行建表、删表以及对数据进行增删改查，条件查询<br />
<a href="http://javacrazyer.iteye.com/blog/1186881">http://javacrazyer.iteye.com/blog/1186881</a><img src ="http://www.blogjava.net/paulwong/aggbug/387318.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2012-09-09 00:38 <a href="http://www.blogjava.net/paulwong/archive/2012/09/09/387318.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item></channel></rss>