﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-paulwong-随笔分类-云计算</title><link>http://www.blogjava.net/paulwong/category/50970.html</link><description /><language>zh-cn</language><lastBuildDate>Sat, 05 Jul 2014 18:25:31 GMT</lastBuildDate><pubDate>Sat, 05 Jul 2014 18:25:31 GMT</pubDate><ttl>60</ttl><item><title>【转载】经典漫画讲解HDFS原理 </title><link>http://www.blogjava.net/paulwong/archive/2013/10/26/405663.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 26 Oct 2013 01:15:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/10/26/405663.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/405663.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/10/26/405663.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/405663.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/405663.html</trackback:ping><description><![CDATA[<span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">分布式文件系统比较出名的有HDFS&nbsp;&nbsp;和 GFS，其中HDFS比较简单一点。本文是一篇描述非常简洁易懂的漫画形式讲解HDFS的原理。比一般PPT要通俗易懂很多。不难得的学习资料。<br /></span><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">1、三个部分: 客户端、nameserver（可理解为主控和文件索引,类似linux的inode）、datanode（存放实际数据）</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="225" src="http://my.csdn.net/uploads/201208/11/1344691496_8076.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">在这里，client的形式我所了解的有两种，通过hadoop提供的api所编写的程序可以和hdfs进行交互，另外一种就是安装了hadoop的datanode其也可以通过命令行与hdfs系统进行交互，如在datanode上上传则使用如下命令行：bin/hadoop fs -put example1 user/chunk/<br /><br /></span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">2、如何写数据过程</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="476" src="http://my.csdn.net/uploads/201208/11/1344691715_8066.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="480" src="http://my.csdn.net/uploads/201208/11/1344692755_3243.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="217" src="http://my.csdn.net/uploads/201208/12/1344702703_2919.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">3、读取数据过程</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="439" src="http://my.csdn.net/uploads/201208/11/1344693039_4501.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">4、容错：第一部分：故障类型及其检测方法（nodeserver 故障，和网络故障，和脏数据问题）</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="471" src="http://my.csdn.net/uploads/201208/11/1344693728_5407.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="442" src="http://my.csdn.net/uploads/201208/11/1344693685_4529.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">5、容错第二部分：读写容错</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="429" src="http://my.csdn.net/uploads/201208/11/1344693811_7697.png" border="0" alt="" id="img_0.5895301518030465" initialized="true" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">6、容错第三部分：dataNode 失效</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="421" src="http://my.csdn.net/uploads/201208/11/1344694035_2660.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">7、备份规则</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="450" src="http://my.csdn.net/uploads/201208/11/1344694119_7534.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><br /><br /><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><span style="font-family: 微软雅黑, 宋体; background-color: #dfdfdf;">8、结束语</span><br style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf;" /><img width="600" height="235" src="http://my.csdn.net/uploads/201208/11/1344694185_4387.png" border="0" alt="" style="word-wrap: break-word; font-family: 微软雅黑, 宋体; background-color: #dfdfdf; cursor: pointer;" /><img src ="http://www.blogjava.net/paulwong/aggbug/405663.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-10-26 09:15 <a href="http://www.blogjava.net/paulwong/archive/2013/10/26/405663.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HIVE资源</title><link>http://www.blogjava.net/paulwong/archive/2013/09/01/403532.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sun, 01 Sep 2013 04:41:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/09/01/403532.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403532.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/09/01/403532.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403532.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403532.html</trackback:ping><description><![CDATA[<p>
 
Hive是建立在Hadoop上的数据仓库基础构架。它提供了一系列的工具，可以用来进行数据提取转化加载（ETL），这是一种可以存储、查询和分析存储在 Hadoop 中的大规模数据的机制。Hive 定义了简单的类 SQL 查询语言，称为 HQL，它允许熟悉 SQL 的用户查询数据。同时，这个语言也允许熟悉 MapReduce 开发者的开发自定义的 mapper 和 reducer 来处理内建的 mapper 和 reducer 无法完成的复杂的分析工作。<br /><br /><br />Hive 没有专门的数据格式。 Hive 可以很好的工作在 Thrift 之上，控制分隔符，也允许用户指定数据格式<br /><br /><br />hive与关系数据库的区别：<br /><br />数据存储不同：hive基于hadoop的HDFS，关系数据库则基于本地文件系统<br /><br />计算模型不同：hive基于hadoop的mapreduce，关系数据库则基于索引的内存计算模型<br /><br />应用场景不同：hive是OLAP数据仓库系统提供海量数据查询的，实时性很差;关系数据库是OLTP事务系统，为实时查询业务服务<br /><br />扩展性不同：hive基于hadoop很容易通过分布式增加存储能力和计算能力，关系数据库水平扩展很难，要不断增加单机的性能<br /><br /><br />Hive安装及使用攻略<br /><a href="http://blog.fens.me/hadoop-hive-intro/" target="_blank">http://blog.fens.me/hadoop-hive-intro/</a><br /><br /><br />R利剑NoSQL系列文章 之 Hive<br /><a href="http://cos.name/2013/07/r-nosql-hive/" target="_blank">http://cos.name/2013/07/r-nosql-hive/</a><br /><br /><br /><br /><br /><br /><br /><br /><br /></p><img src ="http://www.blogjava.net/paulwong/aggbug/403532.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-09-01 12:41 <a href="http://www.blogjava.net/paulwong/archive/2013/09/01/403532.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>分布式搜索资源</title><link>http://www.blogjava.net/paulwong/archive/2013/08/31/403522.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 31 Aug 2013 07:52:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/31/403522.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403522.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/31/403522.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403522.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403522.html</trackback:ping><description><![CDATA[云端分布式搜索技术<br /><a href="http://www.searchtech.pro" target="_blank">http://www.searchtech.pro</a><br /><br /><br />ELASTICSEARCH中文社区<br /><a href="http://es-bbs.medcl.net/categories/%E6%9C%80%E6%96%B0%E5%8A%A8%E6%80%81" target="_blank">http://es-bbs.medcl.net/categories/%E6%9C%80%E6%96%B0%E5%8A%A8%E6%80%81</a><br /><br /><br /><a href="http://wangwei3.iteye.com/blog/1818599" target="_blank">http://wangwei3.iteye.com/blog/1818599</a><br /><br /><br />Welcome to the Apache Nutch Wiki<br /><a href="https://wiki.apache.org/nutch/FrontPage" target="_blank">https://wiki.apache.org/nutch/FrontPage</a><br /><br /><br />elasticsearch客户端大全<br /><a href="http://www.searchtech.pro/elasticsearch-clients" target="_blank">http://www.searchtech.pro/elasticsearch-clients</a><br /><br /><br />客户端<br /><a href="http://es-cn.medcl.net/guide/concepts/scaling-lucene/" target="_blank">http://es-cn.medcl.net/guide/concepts/scaling-lucene/</a><br /><a href="https://github.com/aglover/elasticsearch_article/blob/master/src/main/java/com/b50/usat/load/MusicReviewSearch.java" target="_blank">https://github.com/aglover/elasticsearch_article/blob/master/src/main/java/com/b50/usat/load/MusicReviewSearch.java</a><br /><br /><br />&nbsp;<img src ="http://www.blogjava.net/paulwong/aggbug/403522.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-31 15:52 <a href="http://www.blogjava.net/paulwong/archive/2013/08/31/403522.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Install hadoop+hbase+nutch+elasticsearch</title><link>http://www.blogjava.net/paulwong/archive/2013/08/31/403513.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 30 Aug 2013 17:17:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/31/403513.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403513.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/31/403513.html#Feedback</comments><slash:comments>3</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403513.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403513.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; 摘要: This document is for Anyela Chavarro.Only these version of each framework work togetherCode highlighting produced by Actipro CodeHighlighter (freeware)http://www.CodeHighlighter.com/-->H...&nbsp;&nbsp;<a href='http://www.blogjava.net/paulwong/archive/2013/08/31/403513.html'>阅读全文</a><img src ="http://www.blogjava.net/paulwong/aggbug/403513.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-31 01:17 <a href="http://www.blogjava.net/paulwong/archive/2013/08/31/403513.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Implementation for CombineFileInputFormat Hadoop 0.20.205</title><link>http://www.blogjava.net/paulwong/archive/2013/08/29/403442.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 29 Aug 2013 08:08:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/29/403442.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403442.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/29/403442.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403442.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403442.html</trackback:ping><description><![CDATA[运行MAPREDUCE JOB时，如果输入的文件比较小而多时，默认情况下会生成很多的MAP JOB，即一个文件一个MAP JOB，因此需要优化，使多个文件能合成一个MAP JOB的输入。<br /><br />具体的原理是下述三步: <br /><br />1.根据输入目录下的每个文件,如果其长度超过mapred.max.split.size,以block为单位分成多个split(一个split是一个map的输入),每个split的长度都大于mapred.max.split.size, 因为以block为单位, 因此也会大于blockSize, 此文件剩下的长度如果大于mapred.min.split.size.per.node, 则生成一个split, 否则先暂时保留.<br /><br />2. 现在剩下的都是一些长度效短的碎片,把每个rack下碎片合并, 只要长度超过mapred.max.split.size就合并成一个split, 最后如果剩下的碎片比mapred.min.split.size.per.rack大, 就合并成一个split, 否则暂时保留.<br /><br />3. 把不同rack下的碎片合并, 只要长度超过mapred.max.split.size就合并成一个split, 剩下的碎片无论长度, 合并成一个split.<br />举例: mapred.max.split.size=1000<br />      mapred.min.split.size.per.node=300<br />      mapred.min.split.size.per.rack=100<br />输入目录下五个文件,rack1下三个文件,长度为2050,1499,10, rack2下两个文件,长度为1010,80. 另外blockSize为500.<br />经过第一步, 生成五个split: 1000,1000,1000,499,1000. 剩下的碎片为rack1下:50,10; rack2下10:80<br />由于两个rack下的碎片和都不超过100, 所以经过第二步, split和碎片都没有变化.<br />第三步,合并四个碎片成一个split, 长度为150.<br /><br />如果要减少map数量, 可以调大mapred.max.split.size, 否则调小即可.<br /><br />其特点是: 一个块至多作为一个map的输入，一个文件可能有多个块，一个文件可能因为块多分给做为不同map的输入， 一个map可能处理多个块，可能处理多个文件。<br /><br />注：CombineFileInputFormat是一个抽象类，需要编写一个继承类。<br /><br /><br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />--><span style="color: #0000FF; ">import</span>&nbsp;java.io.IOException;<br /><br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.conf.Configuration;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.io.LongWritable;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.io.Text;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.FileSplit;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.InputSplit;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.JobConf;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.LineRecordReader;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.RecordReader;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.Reporter;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.lib.CombineFileInputFormat;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.lib.CombineFileRecordReader;<br /><span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapred.lib.CombineFileSplit;<br /><br />@SuppressWarnings("deprecation")<br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;CombinedInputFormat&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;CombineFileInputFormat&lt;LongWritable,&nbsp;Text&gt;&nbsp;{<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;@SuppressWarnings({&nbsp;"unchecked",&nbsp;"rawtypes"&nbsp;})<br />&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;RecordReader&lt;LongWritable,&nbsp;Text&gt;&nbsp;getRecordReader(InputSplit&nbsp;split,&nbsp;JobConf&nbsp;conf,&nbsp;Reporter&nbsp;reporter)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;CombineFileRecordReader(conf,&nbsp;(CombineFileSplit)&nbsp;split,&nbsp;reporter,&nbsp;(Class)&nbsp;myCombineFileRecordReader.<span style="color: #0000FF; ">class</span>);<br />&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;myCombineFileRecordReader&nbsp;<span style="color: #0000FF; ">implements</span>&nbsp;RecordReader&lt;LongWritable,&nbsp;Text&gt;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">private</span>&nbsp;<span style="color: #0000FF; ">final</span>&nbsp;LineRecordReader&nbsp;linerecord;<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;myCombineFileRecordReader(CombineFileSplit&nbsp;split,&nbsp;Configuration&nbsp;conf,&nbsp;Reporter&nbsp;reporter,&nbsp;Integer&nbsp;index)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileSplit&nbsp;filesplit&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;FileSplit(split.getPath(index),&nbsp;split.getOffset(index),&nbsp;split.getLength(index),&nbsp;split.getLocations());<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;linerecord&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;LineRecordReader(conf,&nbsp;filesplit);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;close()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;linerecord.close();<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;LongWritable&nbsp;createKey()&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;TODO&nbsp;Auto-generated&nbsp;method&nbsp;stub</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;linerecord.createKey();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;Text&nbsp;createValue()&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;TODO&nbsp;Auto-generated&nbsp;method&nbsp;stub</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;linerecord.createValue();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">long</span>&nbsp;getPos()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;TODO&nbsp;Auto-generated&nbsp;method&nbsp;stub</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;linerecord.getPos();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">float</span>&nbsp;getProgress()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;TODO&nbsp;Auto-generated&nbsp;method&nbsp;stub</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;linerecord.getProgress();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;@Override<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">boolean</span>&nbsp;next(LongWritable&nbsp;key,&nbsp;Text&nbsp;value)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;TODO&nbsp;Auto-generated&nbsp;method&nbsp;stub</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;linerecord.next(key,&nbsp;value);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><br />&nbsp;&nbsp;&nbsp;&nbsp;}<br />}</div><br /><br />在运行时这样设置：<br /><br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />--><span style="color: #0000FF; ">if</span>&nbsp;(argument&nbsp;!=&nbsp;<span style="color: #0000FF; ">null</span>)&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("mapred.max.split.size",&nbsp;argument);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;<span style="color: #0000FF; ">else</span>&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("mapred.max.split.size",&nbsp;"134217728");&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;128&nbsp;MB</span><span style="color: #008000; "><br /></span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /><span style="color: #008000; ">//</span><span style="color: #008000; "><img src="http://www.blogjava.net/Images/dot.gif"  alt="" /></span><span style="color: #008000; "><br /></span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.setInputFormat(CombinedInputFormat.<span style="color: #0000FF; ">class</span>);</div><br /><br /><img src ="http://www.blogjava.net/paulwong/aggbug/403442.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-29 16:08 <a href="http://www.blogjava.net/paulwong/archive/2013/08/29/403442.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>大数据平台架构设计资源</title><link>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sun, 18 Aug 2013 10:27:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/403001.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/403001.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/403001.html</trackback:ping><description><![CDATA[!!!基于Hadoop的大数据平台实施记&#8212;&#8212;整体架构设计<br /><a href="http://blog.csdn.net/jacktan/article/details/9200979" target="_blank">http://blog.csdn.net/jacktan/article/details/9200979</a><br /><br /><br /><br /><br /><br /><br /><br /><img src ="http://www.blogjava.net/paulwong/aggbug/403001.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-18 18:27 <a href="http://www.blogjava.net/paulwong/archive/2013/08/18/403001.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>How to install Hadoop cluster(2 node cluster) and Hbase on Vmware Workstation. It also includes installing Pig and Hive in the appendix</title><link>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 17 Aug 2013 14:23:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402982.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402982.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402982.html</trackback:ping><description><![CDATA[By Tzu-Cheng Chuang 1-28-2011<br /><br /><br />Requires: Ubuntu10.04, hadoop0.20.2, zookeeper 3.3.2 HBase0.90.0<br />1. Download Ubuntu 10.04 desktop 32 bit from Ubuntu website.<br /><br />2. Install Ubuntu 10.04 with username: hadoop, password: password,&nbsp; disk size: 20GB, memory: 2048MB, 1 processor, 2 cores<br /><br />3. Install build-essential (for GNU C, C++ compiler)&nbsp;&nbsp;&nbsp; $ sudo apt-get install build-essential <br /><br />4. Install sun-jave-6-jdk<br />&nbsp;&nbsp;&nbsp; (1) Add the Canonical Partner Repository to your apt repositories<br />&nbsp;&nbsp;&nbsp; $ sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Update the source list<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get update<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Install sun-java-6-jdk and make sure Sun&#8217;s java is the default jvm<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get install sun-java6-jdk<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Set environment variable by modifying ~/.bashrc file, put the following two lines in the end of the file<br />&nbsp;&nbsp;&nbsp; export JAVA_HOME=/usr/lib/jvm/java-6-sun<br />&nbsp; &nbsp; export PATH=$PATH:$JAVA_HOME/bin&nbsp;<br /><br /> 5. Configure SSH server so that ssh to localhost doesn&#8217;t need a passphrase <br />&nbsp;&nbsp;&nbsp; (1) Install openssh server<br />&nbsp;&nbsp;&nbsp; $ sudo apt-get install openssh-server<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Generate RSA pair key<br />&nbsp;&nbsp;&nbsp; $ ssh-keygen &#8211;t ras &#8211;P ""<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Enable SSH access to local machine<br />&nbsp;&nbsp;&nbsp; $ cat ~/.ssh/id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys <br /><br />6. Disable IPv6 by&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; modifying&nbsp; /etc/sysctl.conf file, put the following two lines in the end of the file<br /> #disable <br />ipv6 net.ipv6.conf.all.disable_ipv6 = 1 <br />net.ipv6.conf.default.disable_ipv6 = 1 <br />net.ipv6.conf.lo.disable_ipv6 = 1 <br /><br />7. Install hadoop<br />&nbsp;&nbsp;&nbsp; (1) Download hadoop-0.20.2.tar.gz(stable release on 1/25/2011)&nbsp; from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract hadoop archive file to /usr/local/&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (3) Make symbolic link&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (4) Modify /usr/local/hadoop/conf/hadoop-env.sh&nbsp;&nbsp;&nbsp; <br />Change from # The java implementation to use. Required. # export JAVA_HOME=/usr/lib/j2sdk1.5-sun To # The java implementation to use. Required. export JAVA_HOME=/usr/lib/jvm/java-6-sun<br />&nbsp;&nbsp;&nbsp;&nbsp; (5)Create /usr/local/hadoop-datastore folder&nbsp;&nbsp;&nbsp; <br />$ sudo mkdir /usr/local/hadoop-datastore<br /> $ sudo chown hadoop:hadoop /usr/local/hadoop-datastore<br /> $ sudo chmod 750 /usr/local/hadoop-datastore<br />&nbsp;&nbsp;&nbsp;&nbsp; (6)Put the following code in /usr/local/hadoop/conf/core-site.xml&nbsp;&nbsp;&nbsp; <br />hadoop.tmp.dir/usr/local/hadoop/tmp/dir/hadoop-${user.name}A base for other temporary directories.fs.default.namehdfs://master:54310The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.<br />&nbsp;&nbsp;&nbsp; (7) Put the following code in /usr/local/hadoop/conf/mapred-site.xml&nbsp;&nbsp;&nbsp; <br />mapred.job.trackermaster:54311The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.<br />&nbsp;&nbsp;&nbsp;&nbsp; (8) Put the following code in /usr/local/hadoop/conf/hdfs-site.xml&nbsp;&nbsp;&nbsp; <br />dfs.replication1Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.<br />&nbsp;&nbsp;&nbsp;&nbsp; (9) Add hadoop to environment variable by modifying ~/.bashrc&nbsp;&nbsp;&nbsp; <br />export HADOOP_HOME=/usr/local/hadoop export PATH=$HADOOP_HOME/bin:$PATH <br /><br />8. Restart Ubuntu Linux<br /><br />9. Copy this virtual machine to another folder. At least we have 2 copies of Ubuntu linux<br /><br />10. Modify /etc/hosts on both Linux Virtual Image machines, add in the following lines in the file. The IP address depends on each machine. We can use (ifconfig) to find out IP address.<br /> # /etc/hosts (for master AND slave) 192.168.0.1 master 192.168.0.2 slave&nbsp;&nbsp;&nbsp;&nbsp; Modify the following line, because it might cause Hbase to find out wrong ip.&nbsp;&nbsp;&nbsp; <br />192.168.0.1 ubuntu <br /><br />11. Check hadoop user access on both machines.<br />The hadoop user on the master (aka hadoop@master) must be able to connect a) to its own user account on the master &#8211; i.e. ssh master in this context and not necessarily ssh localhost &#8211; and b) to the hadoop user account on the slave (aka hadoop@slave)&nbsp; via a password-less SSH login. On both machines, make sure each one can connect to master, slave without typing passwords.<br /><br />12. Cluster configuration <br />&nbsp;&nbsp;&nbsp; (1) Modify /usr/local/hadoop/conf/masters<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; master<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Modify /usr/local/hadoop/conf/slaves<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; master slave<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Change &#8220;localhost&#8221; to &#8220;master&#8221; in /usr/local/conf/hadoop/conf/core-site.xml and /usr/local/hadoop/conf/mapred-site.xml<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (4) Change dfs.replication to &#8220;1&#8221; in /usr/local/conf/hadoop/conf/hdfs-site.xml<br />&nbsp;&nbsp;&nbsp; only on master machine&nbsp;&nbsp;&nbsp; <br /><br />13. Format the namenode only once and only on master machine <br />$ /usr/local/hadoop/bin/hadoop namenode &#8211;format <br /><br />14. Later on, start the multi-node cluster by typing following code only on master. So far, please don&#8217;t start hadoop yet. <br />$ /usr/local/hadoop/bin/start-dfs.sh $ /usr/local/hadoop/bin/start-mapred.sh <br /><br />15. Install zookeeper only on master node <br />&nbsp;&nbsp;&nbsp; (1) download zookeeper-3.3.2.tar.gz from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract&nbsp; zookeeper-3.3.2.tar.gz&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf zookeeper-3-3.2.tar.gz<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Move folder zookeeper-3.3.2 to /home/hadoop/ and create a symbloink link<br />&nbsp;&nbsp;&nbsp; $ mv zookeeper-3.3.2 /home/hadoop/ ; ln &#8211;s /home/hadoop/zookeeper-3.3.2 /home/hadoop/zookeeper<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) copy conf/zoo_sample.cfg to conf/zoo.cfg<br />&nbsp;&nbsp;&nbsp; $ cp conf/zoo_sample.cfg confg/zoo.cfg<br />&nbsp;&nbsp;&nbsp;&nbsp; (5) Modify conf/zoo.cfg&nbsp;&nbsp;&nbsp; dataDir=/home/hadoop/zookeeper/snapshot <br /><br />16. Install Hbase on both master and slave nodes, configure it as fully-distributed <br />&nbsp;&nbsp;&nbsp; (1) Download hbase-0.90.0.tar.gz from Apache hadoop website&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (2) Extract&nbsp; hbase-0.90.0.tar.gz&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf hbase-0.90.0.tar.gz<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Move folder hbase-0.90.0 to /home/hadoop/ and create a symbloink link&nbsp;&nbsp;&nbsp; $ mv hbase-0.90.0 /home/hadoop/ ; ln &#8211;s /home/hadoop/hbase-0.90.0 /home/hadoop/hbase<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Edit /home/hadoop/hbase/conf/hbase-site.xml, put the following in between and hbase.rootdirhdfs://master:54310/hbase The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR hbase.cluster.distributedtrueThe mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh) hbase.zookeeper.quorummasterComma separated list of servers in the ZooKeeper Quorum. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.<br />&nbsp;&nbsp;&nbsp;&nbsp; (5) modify environment variables in /home/hadoop/hbase/conf/hbase-env.sh<br />&nbsp;&nbsp;&nbsp; export JAVA_HOME=/usr/lib/jvm/java-6-sun/<br /> export HBASE_IDENT_STRING=$HOSTNAME<br /> export HBASE_MANAGES_ZK=false<br />&nbsp;&nbsp;&nbsp;&nbsp; (6)Overwrite /home/hadoop/hbase/conf/regionservers<br />&nbsp; on both machines&nbsp;&nbsp;&nbsp; master slave<br />&nbsp;&nbsp;&nbsp;&nbsp; (7)copy /usr/local/hadoop-0.20.2/haoop-0.20.2-core.jar to /home/hadoop/hbase/lib/&nbsp; on both machines.<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; This is very important to fix version difference issue. Pay attention to its ownership and mode(755).&nbsp;&nbsp;&nbsp; <br /><br />17. Start zookeeper. It seems the zookeeper bundled with Hbase is not set up correctly. <br />$ /home/hadoop/zookeeper/bin/zkServer.sh start&nbsp;&nbsp;&nbsp;&nbsp; (Optional)We can test if zookeeper is running correctly by&nbsp; typing&nbsp;&nbsp;&nbsp;&nbsp; $ /home/hadoop/zookeeper/bin/zkCli.sh &#8211;server 127.0.0.1:2181 <br /><br />18. Start hadoop cluster <br />$ /usr/local/hadoop/bin/start-dfs.sh $ /usr/local/hadoop/bin/start-mapred.sh <br /><br />19. Start Hbase<br /> $ /home/hadoop/hbase/bin/start-hbase.sh <br /><br />20. Use Hbase shell<br /> $ /home/hadoop/hbase/bin/hbase shell&nbsp;&nbsp;&nbsp;&nbsp; Check if hbase is running smoothly<br />&nbsp;&nbsp;&nbsp; Open your browser, and type in the following.<br />&nbsp;&nbsp;&nbsp; http://localhost:60010&nbsp;&nbsp;&nbsp; <br /><br /><br />21. Later on, stop the multi-node cluster by typing following code only on master <br />&nbsp;&nbsp;&nbsp; (1) Stop Hbase&nbsp;&nbsp;&nbsp; $ /home/hadoop/hbase/bin/stop-hbase.sh<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Stop hadoop file system (HDFS)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />$ /usr/local/hadoop/bin/stop-mapred.sh <br />$ /usr/local/hadoop/bin/stop-dfs.sh<br />&nbsp;&nbsp;&nbsp;&nbsp; (3) Stop zookeeper&nbsp;&nbsp;&nbsp;&nbsp; <br />$ /home/hadoop/zookeeper/bin/zkServer.sh stop <br /><br />Reference<br />http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/<br />http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/<br />http://wiki.apache.org/hadoop/Hbase/10Minutes<br />http://hbase.apache.org/book/quickstart.html<br />http://alans.se/blog/2010/hadoop-hbase-cygwin-windows-7-x64/<br /><br />Author<br />Tzu-Cheng Chuang <br /><br /><br />Appendix- Install Pig and Hive<br />1. Install Pig 0.8.0 on this cluster <br />&nbsp;&nbsp;&nbsp; (1) Download pig-0.8.0.tar.gz from Apache pig project website.&nbsp; Then extract the file and move it to /home/hadoop/&nbsp;&nbsp;&nbsp; <br />$ tar &#8211;xzf pig-0.8.0.tar.gz ; mv pig-0.8.0 /home/hadoop/<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Make symbolink link under pig-0.8.0/conf/&nbsp;&nbsp;&nbsp; <br />$ ln -s /usr/local/hadoop/conf/core-site.xml /home/hadoop/pig-0.8.0/conf/core-site.xml <br />$ ln -s /usr/local/hadoop/conf/mapred-site.xml /home/hadoop/pig-0.8.0/conf/mapred-site.xml <br />$ ln -s /usr/local/hadoop/conf/hdfs-site.xml /home/hadoop/pig-0.8.0/conf/hdfs-site.xml<br />&nbsp;&nbsp;&nbsp;&nbsp; 3) Start pig in map-reduce mode: $ /home/hadoop/pig-0.8.0/bin/pig<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) Exit pig from grunt&gt;&nbsp;&nbsp;&nbsp; quit <br /><br />2. Install Hive on this cluster <br />&nbsp;&nbsp;&nbsp; (1) Download hive-0.6.0.tar.gz from Apache hive project website, and then extract the file and move it to /home/hadoop/&nbsp;&nbsp;&nbsp; $ tar &#8211;xzf hive-0.6.0.tar.gz ; mv hive-0.6.0 ~/<br />&nbsp;&nbsp;&nbsp;&nbsp; (2) Modify java heap size in hive-0.6.0/bin/ext/execHiveCmd.sh&nbsp; Change 4096 to 1024&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; (3) Create /tmp and /user/hive/warehouse and set them chmod g+w in HDFS before a table can be created in Hive&nbsp;&nbsp;&nbsp; $ hadoop fs &#8211;mkdir /tmp $ hadoop fs &#8211;mkdir /user/hive/warehouse $ hadoop fs &#8211;chmod g+w /tmp $ hadoop fs &#8211;chmod g+w /user/hive/warehouse<br />&nbsp;&nbsp;&nbsp;&nbsp; (4) start Hive&nbsp;&nbsp;&nbsp;&nbsp; $ /home/hadoop/hive-0.6.0/bin/hive <br /><br />&nbsp;&nbsp;&nbsp;&nbsp; 3. (Optional)Load data by using Hive <br />&nbsp;&nbsp;&nbsp; Create a file /home/hadoop/customer.txt&nbsp;&nbsp;&nbsp; 1, Kevin 2, David 3, Brian 4, Jane 5, Alice&nbsp;&nbsp;&nbsp;&nbsp; After hive shell is started, type in&nbsp;&nbsp;&nbsp; &gt; CREATE TABLE IF NOT EXISTS customer(id INT, name STRING) &gt; ROW FORMAT delimited fields terminated by ',' &gt; STORED AS TEXTFILE; &gt;LOAD DATA INPATH '/home/hadoop/customer.txt' OVERWRITE INTO TABLE customer; &gt;SELECT customer.id, customer.name from customer;<br /><br /><a href="http://chuangtc.info/ParallelComputing/SetUpHadoopClusterOnVmwareWorkstation.htm" target="_blank">http://chuangtc.info/ParallelComputing/SetUpHadoopClusterOnVmwareWorkstation.htm</a><img src ="http://www.blogjava.net/paulwong/aggbug/402982.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-17 22:23 <a href="http://www.blogjava.net/paulwong/archive/2013/08/17/402982.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE界面工具</title><link>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 14 Aug 2013 01:51:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402775.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402775.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402775.html</trackback:ping><description><![CDATA[
hbaseexplorer<br />下载此0.6的WAR包时，要将lib下的jasper-runtime-5.5.23.jar和jasper-compiler-5.5.23.jar删掉，否则会报错<br /><a href="http://sourceforge.net/projects/hbaseexplorer/?source=dlp" target="_blank">http://sourceforge.net/projects/hbaseexplorer/?source=dlp</a><br /><br />HBaseXplorer<br /><a href="https://github.com/bit-ware/HBaseXplorer/downloads" target="_blank">https://github.com/bit-ware/HBaseXplorer/downloads</a><br /><br />HBase Manager<br /><a href="http://sourceforge.net/projects/hbasemanagergui/" target="_blank">http://sourceforge.net/projects/hbasemanagergui/</a> 
<img src ="http://www.blogjava.net/paulwong/aggbug/402775.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-14 09:51 <a href="http://www.blogjava.net/paulwong/archive/2013/08/14/402775.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Kettle - HADOOP数据转换工具</title><link>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 01 Aug 2013 09:21:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/402269.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/402269.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/402269.html</trackback:ping><description><![CDATA[ETL（Extract-Transform-Load的缩写，即数据抽取、转换、装载的过程），对于企业或行业应用来说，我们经常会遇到各种数据的处理，转换，迁移，所以了解并掌握一种etl工具的使用，必不可少，这里我介绍一个我在工作中使用了3年左右的ETL工具Kettle,本着好东西不独享的想法，跟大家分享碰撞交流一下！在使用中我感觉这个工具真的很强大，支持图形化的GUI设计界面，然后可以以工作流的形式流转，在做一些简单或复杂的数据抽取、质量检测、数据清洗、数据转换、数据过滤等方面有着比较稳定的表现，其中最主要的我们通过熟练的应用它，减少了非常多的研发工作量，提高了我们的工作效率，不过对于我这个.net研发者来说唯一的遗憾就是这个工具是Java编写的。<br /><br /><a href="http://www.cnblogs.com/limengqiang/archive/2013/01/16/KettleApply1.html" target="_blank">http://www.cnblogs.com/limengqiang/archive/2013/01/16/KettleApply1.html</a><img src ="http://www.blogjava.net/paulwong/aggbug/402269.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-08-01 17:21 <a href="http://www.blogjava.net/paulwong/archive/2013/08/01/402269.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>使用Sqoop实现HDFS与Mysql互转</title><link>http://www.blogjava.net/paulwong/archive/2013/05/11/399153.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 11 May 2013 13:27:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/05/11/399153.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/399153.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/05/11/399153.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/399153.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/399153.html</trackback:ping><description><![CDATA[<br />
简介<br />
Sqoop是一个用来将Hadoop和关系型数据库中的数据相互转移的工具，可以将一个关系型数据库（例如 ： MySQL ,Oracle ,Postgres等）中的数据导入到Hadoop的HDFS中，也可以将HDFS的数据导入到关系型数据库中。<br />
<br />
http://sqoop.apache.org/<br />
<br />
环境<br />
当调试过程出现IncompatibleClassChangeError一般都是版本兼容问题。<br />
<br />
为了保证hadoop和sqoop版本的兼容性，使用Cloudera，<br />
<br />
Cloudera简介：<br />
<br />
Cloudera为了让Hadoop的配置标准化，可以帮助企业安装，配置，运行hadoop以达到大规模企业数据的处理和分析。<br />
<br />
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDHTarballs/3.25.2013/CDH4-Downloadable-Tarballs/CDH4-Downloadable-Tarballs.html<br />
<br />
下载安装hadoop-0.20.2-cdh3u6，sqoop-1.3.0-cdh3u6。<br />
<br />
安装<br />
安装比较简单，直接解压即可<br />
<br />
唯一需要做的就是将mysql的jdbc适配包mysql-connector-java-5.0.7-bin.jar copy到$SQOOP_HOME/lib下。<br />
<br />
配置好环境变量：/etc/profile<br />
<br />
export SQOOP_HOME=/home/hadoop/sqoop-1.3.0-cdh3u6/<br />
<br />
export PATH=$SQOOP_HOME/bin:$PATH<br />
<br />
MYSQL转HDFS-示例<br />
./sqoop import --connect jdbc:mysql://10.8.210.166:3306/recsys --username root --password root --table shop -m 1 --target-dir /user/recsys/input/shop/$today<br />
<br />
<br />
HDFS转MYSQ-示例<br />
./sqoop export --connect jdbc:mysql://10.8.210.166:3306/recsys --username root --password root --table shopassoc  --fields-terminated-by ',' --export-dir /user/recsys/output/shop/$today<br />
<br />
示例参数说明<br />
(其他参数我未使用，故不作解释，未使用，就没有发言权，详见命令help)<br />
<br />
<br />
参数类型<br />
<br />
参数名<br />
<br />
解释<br />
<br />
公共<br />
<br />
connect<br />
<br />
Jdbc-url<br />
<br />
公共<br />
<br />
username<br />
<br />
---<br />
<br />
公共<br />
<br />
password<br />
<br />
---<br />
<br />
公共<br />
<br />
table<br />
<br />
表名<br />
<br />
Import<br />
<br />
target-dir<br />
<br />
制定输出hdfs目录，默认输出到/user/$loginName/<br />
<br />
export<br />
<br />
fields-terminated-by<br />
<br />
Hdfs文件中的字段分割符，默认是&#8220;\t&#8221;<br />
<br />
export<br />
<br />
export-dir<br />
<br />
hdfs文件的路径<img src ="http://www.blogjava.net/paulwong/aggbug/399153.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-05-11 21:27 <a href="http://www.blogjava.net/paulwong/archive/2013/05/11/399153.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>一网打尽13款开源Java大数据工具</title><link>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 03 May 2013 01:05:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/398700.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/398700.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/398700.html</trackback:ping><description><![CDATA[<p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>下面将介绍大数据领域支持Java的主流开源工具</strong>：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce391277b5.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce391277b5.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>1.	HDFS</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">HDFS是Hadoop应用程序中主要的分布式储存系统， HDFS集群包含了一个NameNode（主节点），这个节点负责管理所有文件系统的元数据及存储了真实数据的DataNode（数据节点，可以有很多）。HDFS针对海量数据所设计，所以相比传统文件系统在大批量小文件上的优化，HDFS优化的则是对小批量大型文件的访问和存储。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"></a><a href="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce3c49ded6.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>2.	MapReduce</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Hadoop MapReduce是一个软件框架，用以轻松编写处理海量（TB级）数据的并行应用程序，以可靠和容错的方式连接<span style="line-height: 1.45em;">大型集群中</span><span style="line-height: 1.45em;">上万个节点（商用硬件）。</span></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce3ee64519.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce3ee64519.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>3.	HBase</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache HBase是Hadoop数据库，一个分布式、可扩展的大数据存储。它提供了大数据集上随机和实时的读/写访问，并针对了商用服务器集群上的大型表格做出优化&#8212;&#8212;上百亿行，上千万列。其核心是Google Bigtable论文的开源实现，分布式列式存储。就像Bigtable利用GFS（Google File System）提供的分布式数据存储一样，它是Apache Hadoop在HDFS基础上提供的一个类Bigatable。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce413366c7.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce413366c7.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>4.	Cassandra</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Cassandra是一个高性能、可线性扩展、高有效性数据库，可以运行在商用硬件或云基础设施上打造完美的任务关键性数据平台。在横跨数据中心的复制中，Cassandra同类最佳，为用户提供更低的延时以及更可靠的灾难备份。通过log-structured update、反规范化和物化视图的强支持以及强大的内置缓存，Cassandra的数据模型提供了方便的二级索引（column indexe）。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4611885c.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4611885c.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>5.	Hive</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Hive是Hadoop的一个数据仓库系统，促进了数据的综述（将结构化的数据文件映射为一张数据库表）、即席查询以及存储在Hadoop兼容系统中的大型数据集分析。Hive提供完整的SQL查询功能&#8212;&#8212;HiveQL语言，同时当使用这个语言表达一个<span style="line-height: 1.45em;">逻辑</span><span style="line-height: 1.45em;">变得低效和繁琐</span><span style="line-height: 1.45em;">时，HiveQL还允许传统的Map/Reduce程序员使用自己定制的Mapper和Reducer。</span></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce470085ed.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce470085ed.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>6.	Pig</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Pig是一个用于大型数据集分析的平台，它包含了一个用于数据分析应用的高级语言以及评估这些应用的基础设施。Pig应用的闪光特性在于它们的结构经得起大量的并行，也就是说让它们支撑起非常大的数据集。Pig的基础设施层包含了产生Map-Reduce任务的编译器。Pig的语言层当前包含了一个原生语言&#8212;&#8212;Pig Latin，开发的初衷是易于编程和保证可扩展性。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce47b8e077.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce47b8e077.jpg" border="0" alt="" style="vertical-align: middle; border: none; width: 99px; height: 99px; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>7.	Chukwa</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Chukwa是个开源的数据收集系统，用以监视大型分布系统。建立于HDFS和Map/Reduce框架之上，继承了Hadoop的可扩展性和稳定性。Chukwa同样包含了一个灵活和强大的工具包，用以显示、监视和分析结果，以保证数据的使用达到最佳效果。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4870b072.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4870b072.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>8.	Ambari</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Ambari是一个基于web的工具，用于配置、管理和监视Apache Hadoop集群，支持Hadoop HDFS,、Hadoop MapReduce、Hive、HCatalog,、HBase、ZooKeeper、Oozie、Pig和Sqoop。Ambari同样还提供了集群状况仪表盘，比如heatmaps和查看MapReduce、Pig、Hive应用程序的能力，以友好的用户界面对它们的性能特性进行诊断。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce49282930.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce49282930.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>9.	ZooKeeper</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache ZooKeeper是一个针对大型分布式系统的可靠协调系统，提供的功能包括：配置维护、命名服务、分布式同步、组服务等。ZooKeeper的目标就是封装好复杂易出错的关键服务，将简单易用的接口和性能高效、功能稳定的系统提供给用户。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce49e31e19.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce49e31e19.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>10.	Sqoop</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Sqoop是一个用来将Hadoop和关系型数据库中的数据相互转移的工具，可以将一个关系型数据库中数据导入Hadoop的HDFS中，也可以将HDFS中数据导入关系型数据库中。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4b0d3c61.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4b0d3c61.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>11.	Oozie</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Oozie是一个可扩展、可靠及可扩充的工作流调度系统，用以管理Hadoop作业。Oozie Workflow作业是活动的Directed Acyclical Graphs（DAGs）。Oozie Coordinator作业是由周期性的Oozie Workflow作业触发，周期一般决定于时间（频率）和数据可用性。Oozie与余下的Hadoop堆栈结合使用，开箱即用的支持多种类型Hadoop作业（比如：Java map-reduce、Streaming map-reduce、Pig、 Hive、Sqoop和Distcp）以及其它系统作业（比如Java程序和Shell脚本）。</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4bdedb23.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4bdedb23.jpg" border="0" alt="" style="vertical-align: middle; border: none; width: 100px; height: 100px; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>12.	Mahout</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache Mahout是个可扩展的机器学习和数据挖掘库，当前Mahout支持主要的4个用例：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"></p><ul style="margin: 0px 0px 1em 20px; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">推荐挖掘：搜集用户动作并以此给用户推荐可能喜欢的事物。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">聚集：收集文件并进行相关文件分组。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">分类：从现有的分类文档中学习，寻找文档中的相似特征，并为无标签的文档进行正确的归类。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">频繁项集挖掘：将一组项分组，并识别哪些个别项会经常一起出现。</span></li></ul><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><a href="http://cms.csdnimg.cn/article/201304/28/517ce4cf93346.jpg" target="_blank" style="cursor: pointer; color: #0066cc; text-decoration: none;"><img src="http://cms.csdnimg.cn/article/201304/28/517ce4cf93346.jpg" border="0" alt="" style="vertical-align: middle; border: none; float: right; margin: 0px 0px 10px 10px;" /></a></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><strong>13.	HCatalog</strong></p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;">Apache HCatalog是Hadoop建立数据的映射表和存储管理服务，它包括：</p><p style="margin: 0px 0px 1.5em; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"></p><ul style="margin: 0px 0px 1em 20px; padding: 0px; list-style: none; color: #333333; font-family: Helvetica, Tahoma, Arial, sans-serif; line-height: 24px; background-color: #ffffff;"><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">提供一个共享模式和数据类型机制。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">提供一个抽象表，这样用户就不需要关注数据存储的方式和地址。</span></li><li style="margin: 0px; padding: 0px; list-style: disc;"><span style="line-height: 1.45em;">为类似Pig、MapReduce及Hive这些数据处理工具提供互操作性。</span></li></ul><img src ="http://www.blogjava.net/paulwong/aggbug/398700.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-05-03 09:05 <a href="http://www.blogjava.net/paulwong/archive/2013/05/03/398700.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>一个PIG脚本例子分析</title><link>http://www.blogjava.net/paulwong/archive/2013/04/13/397791.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 13 Apr 2013 07:21:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/04/13/397791.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/397791.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/04/13/397791.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/397791.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/397791.html</trackback:ping><description><![CDATA[执行脚本：<br />
<div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->PIGGYBANK_PATH=$PIG_HOME/contrib/piggybank/java/piggybank.jar<br />
INPUT=pig/input/test-pig-full.txt<br />
OUTPUT=pig/output/test-pig-output-$(date&nbsp;&nbsp;+%Y%m%d%H%M%S)<br />
PIGSCRIPT=analyst_status_logs.pig<br />
<br />
<span style="color: #008000; ">#</span><span style="color: #008000; ">analyst_500_404_month.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_500_404_day.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_404_percentage.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_500_percentage.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_unique_path.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_user_logs.pig</span><span style="color: #008000; "><br />
#</span><span style="color: #008000; ">analyst_status_logs.pig</span><span style="color: #008000; "><br />
</span><br />
<br />
pig&nbsp;-p&nbsp;PIGGYBANK_PATH=$PIGGYBANK_PATH&nbsp;-p&nbsp;INPUT=$INPUT&nbsp;-p&nbsp;OUTPUT=$OUTPUT&nbsp;$PIGSCRIPT</div><br /><br />要分析的数据源，LOG 文件<br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />-->46.20.45.18&nbsp;-&nbsp;-&nbsp;[25/Dec/2012:23:00:25&nbsp;+0100]&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">GET&nbsp;/&nbsp;HTTP/1.0</span><span style="color: #800000; ">"</span>&nbsp;302&nbsp;-&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;46.20.45.18&nbsp;<span style="color: #800000; ">""</span>&nbsp;11011AEC9542DB0983093A100E8733F8&nbsp;0<br />46.20.45.18&nbsp;-&nbsp;-&nbsp;[25/Dec/2012:23:00:25&nbsp;+0100]&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">GET&nbsp;/sign-in.jspx&nbsp;HTTP/1.0</span><span style="color: #800000; ">"</span>&nbsp;200&nbsp;3926&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;46.20.45.18&nbsp;<span style="color: #800000; ">""</span>&nbsp;11011AEC9542DB0983093A100E8733F8&nbsp;0<br />69.59.28.19&nbsp;-&nbsp;-&nbsp;[25/Dec/2012:23:01:25&nbsp;+0100]&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">GET&nbsp;/&nbsp;HTTP/1.0</span><span style="color: #800000; ">"</span>&nbsp;302&nbsp;-&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;<span style="color: #800000; ">"</span><span style="color: #800000; ">-</span><span style="color: #800000; ">"</span>&nbsp;69.59.28.19&nbsp;<span style="color: #800000; ">""</span>&nbsp;36D80DE7FE52A2D89A8F53A012307B0A&nbsp;15</div><br /><br />PIG脚本：<br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />-->--注册JAR包，因为要用到DateExtractor<br />register&nbsp;<span style="color: #800000; ">'</span><span style="color: #800000; ">$PIGGYBANK_PATH</span><span style="color: #800000; ">'</span>;<br /><br />--声明一个短函数名<br />DEFINE&nbsp;DATE_EXTRACT_MM&nbsp;<br />org.apache.pig.piggybank.evaluation.util.apachelogparser.DateExtractor(<span style="color: #800000; ">'</span><span style="color: #800000; ">yyyy-MM</span><span style="color: #800000; ">'</span>);<br /><br />DEFINE&nbsp;DATE_EXTRACT_DD&nbsp;<br />org.apache.pig.piggybank.evaluation.util.apachelogparser.DateExtractor(<span style="color: #800000; ">'</span><span style="color: #800000; ">yyyy-MM-dd</span><span style="color: #800000; ">'</span>);<br /><br />--&nbsp;pig/input/test-pig-full.txt<br />--把数据从变量所指的文件加载到PIG中，并定义数据列名，此时的数据集为数组(a,b,c)<br />raw_logs&nbsp;=&nbsp;load&nbsp;<span style="color: #800000; ">'</span><span style="color: #800000; ">$INPUT</span><span style="color: #800000; ">'</span>&nbsp;USING&nbsp;org.apache.pig.piggybank.storage.MyRegExLoader(<span style="color: #800000; ">'</span><span style="color: #800000; ">^(\\S+)&nbsp;(\\S+)&nbsp;(\\S+)&nbsp;\\[([\\w:/]+\\s[+\\-]\\d{4})\\]&nbsp;"(\\S+)&nbsp;(\\S+)&nbsp;(HTTP[^"]+)"&nbsp;(\\S+)&nbsp;(\\S+)&nbsp;"([^"]*)"&nbsp;"([^"]*)"&nbsp;"(\\S+)"&nbsp;"(\\S+)"&nbsp;(\\S+)&nbsp;"(.*)"&nbsp;(\\S+)&nbsp;(\\S+)</span><span style="color: #800000; ">'</span>)<br />as&nbsp;(remoteAddr:&nbsp;chararray,&nbsp;<br />n2:&nbsp;chararray,&nbsp;<br />n3:&nbsp;chararray,&nbsp;<br />time:&nbsp;chararray,&nbsp;<br />method:&nbsp;chararray,<br />path:chararray,<br />protocol:chararray,<br />status:&nbsp;int,&nbsp;<br />bytes_string:&nbsp;chararray,&nbsp;<br />referrer:&nbsp;chararray,&nbsp;<br />browser:&nbsp;chararray,&nbsp;<br />n10:chararray,<br />remoteLogname:&nbsp;chararray,&nbsp;<br />remoteAddr12:&nbsp;chararray,&nbsp;<br />path2:&nbsp;chararray,&nbsp;<br />sessionid:&nbsp;chararray,&nbsp;<br />n15:&nbsp;chararray<br />);<br /><br />--过滤数据<br />filter_logs&nbsp;=&nbsp;FILTER&nbsp;raw_logs&nbsp;BY&nbsp;<span style="color: #0000FF; ">not</span>&nbsp;(browser&nbsp;matches&nbsp;<span style="color: #800000; ">'</span><span style="color: #800000; ">.*pingdom.*</span><span style="color: #800000; ">'</span>);<br />--item_logs&nbsp;=&nbsp;FOREACH&nbsp;raw_logs&nbsp;GENERATE&nbsp;browser;<br /><br />--percent&nbsp;500&nbsp;logs<br />--重定义数据项，数据集只取2项status,month<br />reitem_percent_500_logs&nbsp;=&nbsp;FOREACH&nbsp;filter_logs&nbsp;GENERATE&nbsp;status,DATE_EXTRACT_MM(time)&nbsp;as&nbsp;month;<br />--分组数据集，此时的数据结构为MAP(a{(aa,bb,cc),(dd,ee,ff)},b{(bb,cc,dd),(ff,gg,hh)})<br />group_month_percent_500_logs&nbsp;=&nbsp;GROUP&nbsp;reitem_percent_500_logs&nbsp;BY&nbsp;(month);<br />--重定义分组数据集数据项，进行分组统计，此时要联合分组数据集和原数据集统计<br />final_month_500_logs&nbsp;=&nbsp;FOREACH&nbsp;group_month_percent_500_logs&nbsp;<br />{<br />&nbsp;&nbsp;&nbsp;&nbsp;--对原数据集做count，因为是在foreachj里做count的，即使是对原数据集，也会自动会加month==group的条件<br />&nbsp;&nbsp;&nbsp;&nbsp;--从这里可以看出对于group里的数据集，完全没用到<br />&nbsp;&nbsp;&nbsp;&nbsp;--这时是以每一行为单位的，统计MAP中的KEY-a对应的数组在原数据集中的个数<br />&nbsp;&nbsp;&nbsp;&nbsp;total&nbsp;=&nbsp;COUNT(reitem_percent_500_logs);<br />&nbsp;&nbsp;&nbsp;&nbsp;--对原数据集做filter，因为是在foreachj里做count的，即使是对原数据集，也会自动会加month==group的条件<br />&nbsp;&nbsp;&nbsp;&nbsp;--重新过滤一下原数据集，得到status==500,month==group的数据集<br />&nbsp;&nbsp;&nbsp;&nbsp;t&nbsp;=&nbsp;filter&nbsp;reitem_percent_500_logs&nbsp;by&nbsp;status==&nbsp;500;&nbsp;--create&nbsp;a&nbsp;bag&nbsp;which&nbsp;contains&nbsp;only&nbsp;T&nbsp;values<br />&nbsp;&nbsp;&nbsp;&nbsp;--重定义数据项，取group，统计结果<br />&nbsp;&nbsp;&nbsp;&nbsp;generate&nbsp;flatten(group)&nbsp;as&nbsp;col1,&nbsp;100*(double)COUNT(t)/(double)total;<br />}<br />STORE&nbsp;final_month_500_logs&nbsp;into&nbsp;<span style="color: #800000; ">'</span><span style="color: #800000; ">$OUTPUT</span><span style="color: #800000; ">'</span>&nbsp;using&nbsp;PigStorage(<span style="color: #800000; ">'</span><span style="color: #800000; ">,</span><span style="color: #800000; ">'</span>);</div><br /><img src ="http://www.blogjava.net/paulwong/aggbug/397791.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-04-13 15:21 <a href="http://www.blogjava.net/paulwong/archive/2013/04/13/397791.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>把命令行中的值传进PIG中</title><link>http://www.blogjava.net/paulwong/archive/2013/04/10/397645.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 10 Apr 2013 07:32:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/04/10/397645.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/397645.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/04/10/397645.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/397645.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/397645.html</trackback:ping><description><![CDATA[<a href="http://wiki.apache.org/pig/ParameterSubstitution" target="_blank">http://wiki.apache.org/pig/ParameterSubstitution<br />
<br />
<br />
</a>
<div>
<div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->%pig&nbsp;-param&nbsp;input=/user/paul/sample.txt&nbsp;-param&nbsp;output=/user/paul/output/</div>
</div><br /><br />PIG中获取<br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />-->records&nbsp;=&nbsp;LOAD&nbsp;<span style="color: #800080; ">$input</span>;</div><img src ="http://www.blogjava.net/paulwong/aggbug/397645.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-04-10 15:32 <a href="http://www.blogjava.net/paulwong/archive/2013/04/10/397645.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>PIG中的分组统计百分比</title><link>http://www.blogjava.net/paulwong/archive/2013/04/10/397642.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 10 Apr 2013 06:13:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/04/10/397642.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/397642.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/04/10/397642.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/397642.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/397642.html</trackback:ping><description><![CDATA[<a href="http://stackoverflow.com/questions/15318785/pig-calculating-percentage-of-total-for-a-field" target="_blank">http://stackoverflow.com/questions/15318785/pig-calculating-percentage-of-total-for-a-field<br /><br /></a><a href="http://stackoverflow.com/questions/13476642/calculating-percentage-in-a-pig-query" target="_blank">http://stackoverflow.com/questions/13476642/calculating-percentage-in-a-pig-query</a><img src ="http://www.blogjava.net/paulwong/aggbug/397642.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-04-10 14:13 <a href="http://www.blogjava.net/paulwong/archive/2013/04/10/397642.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>PIG小议</title><link>http://www.blogjava.net/paulwong/archive/2013/04/05/397411.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 05 Apr 2013 13:33:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/04/05/397411.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/397411.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/04/05/397411.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/397411.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/397411.html</trackback:ping><description><![CDATA[<div><strong>什么是PIG</strong></div><div>是一种设计语言，通过设计数据怎么流动，然后由相应的引擎将此变成MAPREDUCE JOB去HADOOP中运行。</div><div></div><div></div><div><strong>PIG与SQL</strong></div><div>两者有相同之处，执行一个或多个语句，然后出来一些结果。</div><div>但不同的是，SQL要先把数据导到表中才能执行，SQL不关心中间如何做，即发一个SQL语句过去，就有结果出来。</div><div>PIG，无须导数据到表中，但要设计直到出结果的中间过程，步骤如何等等。</div><img src ="http://www.blogjava.net/paulwong/aggbug/397411.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-04-05 21:33 <a href="http://www.blogjava.net/paulwong/archive/2013/04/05/397411.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>PIG资源</title><link>http://www.blogjava.net/paulwong/archive/2013/04/05/397406.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Fri, 05 Apr 2013 10:19:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/04/05/397406.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/397406.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/04/05/397406.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/397406.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/397406.html</trackback:ping><description><![CDATA[Hadoop Pig学习笔记(一) 各种SQL在PIG中实现<br />
<a href="http://guoyunsky.iteye.com/blog/1317084" target="_blank">http://guoyunsky.iteye.com/blog/1317084<br />
<br />
</a><a href="http://guoyunsky.iteye.com/category/196632" target="_blank">http://guoyunsky.iteye.com/category/196632<br />
<br />
</a>Hadoop学习笔记(9) Pig简介<br />
<a href="http://www.distream.org/?p=385" target="_blank">http://www.distream.org/?p=385</a><br />
<br />
<br />
[hadoop系列]Pig的安装和简单示例<br />
<a href="http://blog.csdn.net/inkfish/article/details/5205999" target="_blank">http://blog.csdn.net/inkfish/article/details/5205999</a><br />
<br />
<br />
Hadoop and Pig for Large-Scale Web Log Analysis<br />
<a href="http://www.devx.com/Java/Article/48063" target="_blank">http://www.devx.com/Java/Article/48063</a>
<br />
<br />
<br />
Pig实战<br />
<a href="http://www.cnblogs.com/xuqiang/archive/2011/06/06/2073601.html" target="_blank">http://www.cnblogs.com/xuqiang/archive/2011/06/06/2073601.html</a><br />
<br />
<br />
[原创]Apache Pig中文教程（进阶）<br />
<a href="http://www.codelast.com/?p=4249" target="_blank">http://www.codelast.com/?p=4249</a><br />
<br />
<br />
基于hadoop平台的pig语言对apache日志系统的分析<br />
<a href="http://goodluck-wgw.iteye.com/blog/1107503" target="_blank">http://goodluck-wgw.iteye.com/blog/1107503</a><br />
<br />
<br />
!!Pig语言<br />
<a href="http://hi.baidu.com/cpuramdisk/item/a2980b78caacfa3d71442318" target="_blank">http://hi.baidu.com/cpuramdisk/item/a2980b78caacfa3d71442318</a><br />
<br />
<br />
Embedding Pig In Java Programs<br />
<a href="http://wiki.apache.org/pig/EmbeddedPig" target="_blank">http://wiki.apache.org/pig/EmbeddedPig</a><br />
<br />
<br />
一个pig事例(REGEX_EXTRACT_ALL, DBStorage，结果存进数据库)<br />
<a href="http://www.myexception.cn/database/1256233.html" target="_blank">http://www.myexception.cn/database/1256233.html</a><br />
<br />
<br />
Programming Pig<br />
<a href="http://ofps.oreilly.com/titles/9781449302641/index.html" target="_blank">http://ofps.oreilly.com/titles/9781449302641/index.html</a><br />
<br />
<br />
[原创]Apache Pig的一些基础概念及用法总结（1）<br />
<a href="http://www.codelast.com/?p=3621" target="_blank">http://www.codelast.com/?p=3621<br />
<br /></a><br />
!PIG手册<br /><a href="http://pig.apache.org/docs/r0.11.1/func.html#built-in-functions" target="_blank">http://pig.apache.org/docs/r0.11.1/func.html#built-in-functions</a><img src ="http://www.blogjava.net/paulwong/aggbug/397406.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-04-05 18:19 <a href="http://www.blogjava.net/paulwong/archive/2013/04/05/397406.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>hadoop集群中添加节点步骤</title><link>http://www.blogjava.net/paulwong/archive/2013/03/16/396544.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Sat, 16 Mar 2013 15:04:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/03/16/396544.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/396544.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/03/16/396544.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/396544.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/396544.html</trackback:ping><description><![CDATA[在新节点安装好hadoop<br /><br /><br />把namenode的有关配置文件复制到该节点<br /><br /><br />修改masters和slaves文件,增加该节点<br /><br /><br />设置ssh免密码进出该节点<br /><br /><br />单独启动该节点上的datanode和tasktracker(hadoop-daemon.sh start  datanode/tasktracker)<br /><br /><br />运行start-balancer.sh进行数据负载均衡<br />  <br /><br />负载均衡:作用:当节点出现故障,或新增加节点时,数据块分布可能不均匀,负载均衡可以重新平衡各个datanode上数据块的分布<img src ="http://www.blogjava.net/paulwong/aggbug/396544.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-03-16 23:04 <a href="http://www.blogjava.net/paulwong/archive/2013/03/16/396544.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Phoenix: HBase终于有SQL接口了～</title><link>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 19 Feb 2013 15:15:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395432.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395432.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395432.html</trackback:ping><description><![CDATA[这项利器是由CRM领域的领导Saleforce发布的。相当于HBase的JDBC。<br /><br />具体详见：<a href="https://github.com/forcedotcom/phoenix" target="_blank">https://github.com/forcedotcom/phoenix</a><br /><br />支持select，from，where，groupby，having，orderby和建表操作，未来将支持二级索引，join操作，动态列簇等功能。<br /><br />是建立在原生HBASE API基础上的，响应时间10M级别的数据是毫秒，100M级别是秒。<br /><br /><br /><div><a href="http://www.infoq.com/cn/news/2013/02/Phoenix-HBase-SQL" target="_blank">http://www.infoq.com/cn/news/2013/02/Phoenix-HBase-SQL</a></div><img src ="http://www.blogjava.net/paulwong/aggbug/395432.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-19 23:15 <a href="http://www.blogjava.net/paulwong/archive/2013/02/19/395432.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE读书笔记-基础功能</title><link>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Wed, 06 Feb 2013 01:53:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395168.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395168.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395168.html</trackback:ping><description><![CDATA[<ol>
     <li>HBASE的SHELL命令使用<br />
     <br />
     </li>
     <li>HBASE的JAVA CLIENT的使用<br /><br />新增和修改记录用PUT。<br /><br />PUT的执行流程：<br />首先会在内存中增加MEMSTORE，如果这个表有N个COLOUMN FAMILY，则会产生N个MEMSTORE，记录中的值属于不同的COLOUMN FAMILY的，会保存到不同的MEMSTORE中。MEMSTORE中的值不会马上FLUSH到文件中，而是到MEMSTORE满的时候再FLUSH，且FLUSH的时候不会写入已存在的HFILE中，而是新增一个HFILE去保存。另外会写WRITE AHEAD LOG，这是由于新增记录时不是马上写入HFILE的，如果中途出现DOWN机时，则HBASE重启时会根据这个LOG来恢复数据。<br /><br />删除记录用DELETE。<br /><br />删除时并不会将在HFILE中的内容删除，而是作一标记，然后在查询的时候可以不取这些记录。<br /><br />读取单条记录用GET。<br /><br />读取的时候会将记录保存到CAHE中，同样如果这个表有N个COLOUMN FAMILY，则会产生N个CAHE<br />，记录中的值属于不同的COLOUMN FAMILY的，会保存到不同的CAHE中。这样下次客户端再取记录时会综合CAHE和MEMSTORE来返回数据。<br /><br />新增表用HADMIN。<br /><br />查询多条记录用SCAN和FILTER。<br />
     <br />
     </li>
     <li>HBASE的分布式计算<br /><br />为什么会有分布式计算<br />前面的API是针对ONLINE的应用，即要求低延时的，相当于OLTP。而针对大量数据时这些API就不适用了。<br />如要针对全表数据进行分析时用SCAN，这样会将全表数据取回本地，如果数据量在100G时会耗几个小时，为了节省时间，引入多线程做法，但要引入多线程时，需遵从新算法：将全表数据分成N个段，每段用一个线程处理，处理完后，交结果合成，然后进行分析。<br /><br />如果数据量在200G或以上时间就加倍了，多线程的方式不能满足了，因此引入多进程方式，即将计算放在不同的物理机上处理，这时就要考虑每个物理机DOWN机时的处理方式等情况了，HADOOP的MAPREDUCE则是这种分布式计算的框架了，对于应用者而言，只须处理分散和聚合的算法，其他的无须考虑。<br /><br />HBASE的MAPREDUCE<br />使用TABLEMAP和TABLEREDUCE。<br /><br />HBASE的部署架构和组成的组件<br />架构在HADOOP和ZOOPKEEPER之上。<br /><br />HBASE的查询记录和保存记录的流程<br />说见前一编博文。<br /><br />HBASE作为数据来源地、保存地和共享数据源的处理方式<br />即相当于数据库中JOIN的算法：REDUCE SIDE JOIN、MAP SIDE JOIN。<br /></li>
</ol><img src ="http://www.blogjava.net/paulwong/aggbug/395168.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-06 09:53 <a href="http://www.blogjava.net/paulwong/archive/2013/02/06/395168.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>监控HBASE</title><link>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 04 Feb 2013 07:08:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395107.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395107.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395107.html</trackback:ping><description><![CDATA[@import url(http://www.blogjava.net/CuteSoft_Client/CuteEditor/Load.ashx?type=style&file=SyntaxHighlighter.css);@import url(/css/cuteeditor.css);
<div>Hadoop/Hbase是开源版的google Bigtable, GFS, MapReduce的实现，随着互联网的发展，大数据的处理显得越发重要，Hadoop/Hbase的用武之地也越发广泛。为了更好的使用Hadoop/Hbase系统，需要有一套完善的监控系统，来了解系统运行的实时状态，做到一切尽在掌握。Hadoop/Hbase有自己非常完善的metrics framework, 里面包种各种维度的系统指标的统计，另外，这套metrics framework设计的也非常不错，用户可以很方便地添加自定义的metrics。更为重要的一点是metrics的展示方式，目前它支持三种方式：一种是落地到本地文件，一种是report给Ganglia系统，另一种是通过JMX来展示。本文主要介绍怎么把Hadoop/Hbase的metrics report给Ganglia系统，通过浏览器来查看。<br />
<br />
介绍后面的内容之前有必要先简单介绍一下Ganglia系统。Ganglia是一个开源的用于系统监控的系统，它由三部分组成：gmond, gmetad, webfrontend, 三部分是这样分工的：<br />
<br />
gmond: 是一个守护进程，运行在每一个需要监测的节点上，收集监测统计，发送和接受在同一个组播或单播通道上的统计信息<br />
gmetad: 是一个守护进程，定期检查gmond，从那里拉取数据，并将他们的指标存储在RRD存储引擎中<br />
webfrontend: 安装在有gmetad运行的机器上，以便读取RRD文件，用来做前台展示<br />
<br />
简单总结它们三者的各自的功用，gmond收集数据各个node上的metrics数据，gmetad汇总gmond收集到的数据，webfrontend在前台展示gmetad汇总的数据。Ganglia缺省是对系统的一些metric进行监控，比如cpu/memory/net等。不过Hadoop/Hbase内部做了对Ganglia的支持，只需要简单的改配置就可以将Hadoop/Hbase的metrics也接入到ganglia系统中进行监控。<br />
<br />
接下来介绍如何把Hadoop/Hbase接入到Ganglia系统，这里的Hadoop/Hbase的版本号是0.94.2，早期的版本可能会有一些不同，请注意区别。Hbase本来是Hadoop下面的子项目，因此所用的metrics framework原本是同一套Hadoop metrics，但后面hadoop有了改进版本的metrics framework:metrics2(metrics version 2), Hadoop下面的项目都已经开始使用metrics2, 而Hbase成了Apache的顶级子项目，和Hadoop成为平行的项目后，目前还没跟进metrics2，它用的还是原始的metrics.因此这里需要把Hadoop和Hbase的metrics分开介绍。<br />
<br />
Hadoop接入Ganglia:<br />
<br />
1. Hadoop metrics2对应的配置文件为：hadoop-metrics2.properties<br />
2. hadoop metrics2中引用了source和sink的概念，source是用来收集数据的, sink是用来把source收集的数据consume的（包括落地文件，上报ganglia，JMX等）<br />
3. hadoop metrics2配置支持Ganglia:</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->#*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30<br />
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31<br />
&nbsp;<br />
*.sink.ganglia.period=10<br />
*.sink.ganglia.supportsparse=true<br />
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both<br />
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40<br />
&nbsp;<br />
#uncomment&nbsp;as&nbsp;your&nbsp;needs<br />
namenode.sink.ganglia.servers=10.235.6.156:8649<br />
#datanode.sink.ganglia.servers=10.235.6.156:8649<br />
#jobtracker.sink.ganglia.servers=10.0.3.99:8649<br />
#tasktracker.sink.ganglia.servers=10.0.3.99:8649<br />
#maptask.sink.ganglia.servers=10.0.3.99:8649<br />
#reducetask.sink.ganglia.servers=10.0.3.99:8649</div>
</div>
<br />
<div><br />
</div>
<div>这里需要注意的几点：<br />
<br />
(1) 因为Ganglia3.1与3.0不兼容，需要根据Ganglia的版本选择使用GangliaSink30或者GangliaSink31<br />
(2) period配置上报周期，单位是秒(s)<br />
(3) namenode.sink.ganglia.servers指定Ganglia gmetad所在的host:port，用来向其上报数据<br />
(4) 如果同一个物理机器上同时启动了多个hadoop进程(namenode/datanode, etc)，根据需要把相应的进程的sink.ganglia.servers配置好即可<br />
Hbase接入Ganglia:<br />
<br />
1. Hbase所用的hadoop metrics对应的配置文件是: hadoop-metrics.properties<br />
2. hadoop metrics里核心是Context，写文件有写文件的TimeStampingFileContext, 向Ganglia上报有GangliaContext/GangliaContext31<br />
3. hadoop metrics配置支持Ganglia:</div>
<div>
<div style="background-color: #eeeeee; font-size: 13px; border-left-color: #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->#&nbsp;Configuration&nbsp;of&nbsp;the&nbsp;"hbase"&nbsp;context&nbsp;for&nbsp;ganglia<br />
#&nbsp;Pick&nbsp;one:&nbsp;Ganglia&nbsp;3.0&nbsp;(former)&nbsp;or&nbsp;Ganglia&nbsp;3.1&nbsp;(latter)<br />
#&nbsp;hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext<br />
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31<br />
hbase.period=10<br />
hbase.servers=10.235.6.156:8649</div>
</div>
<div><br />
</div>
<div>这里需要注意几点：<br />
<br />
(1) 因为Ganglia3.1和3.0不兼容，所以如果是3.1以前的版本，需要用GangliaContext, 如果是3.1版的Ganglia，需要用GangliaContext31<br />
(2) period的单位是秒(s)，通过period可以配置向Ganglia上报数据的周期<br />
(3) servers指定的是Ganglia gmetad所在的host:port，把数据上报到指定的gmetad<br />
(4) 对rpc和jvm相关的指标都可以进行类似的配置</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div><img src ="http://www.blogjava.net/paulwong/aggbug/395107.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-04 15:08 <a href="http://www.blogjava.net/paulwong/archive/2013/02/04/395107.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HBASE部署要点</title><link>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 04 Feb 2013 04:10:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395101.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395101.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395101.html</trackback:ping><description><![CDATA[<div>REGIONS SERVER和TASK TRACKER SERVER不要在同一台机器上，最好如果有MAPREDUCE JOB运行的话，应该分开两个CLUSTER，即两群不同的服务器上，这样MAPREDUCE 的线下负载不会影响到SCANER这些线上负载。</div>
<div><br />
</div>
<div>如果主要是做MAPREDUCE JOB的话，将REGIONS SERVER和TASK TRACKER SERVER放在一起是可以的。</div>
<div><br />
</div>
<div><br />
</div>
<div><span style="background-color: yellow; color: red; ">原始集群模式</span></div>
<div><br />
</div>
10个或以下节点，无MAPREDUCE JOB，主要用于低延迟的访问。每个节点上的配置为：CPU4-6CORE，内存24-32G，4个SATA硬盘。Hadoop NameNode, JobTracker, HBase Master, 和ZooKeeper全都在同一个NODE上。
<div><br />
</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">小型集群模式（10-20台服务器）</span></div>
<div><br />
</div>
HBase Master放在单独一台机器上, 以便于使用较低配置的机器。ZooKeeper也放在单独一台机器上，NameNode和JobTracker放在同一台机器上。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">中型集群模式（20-50台服务器）</span></div>
<div><br />
</div>
由于无须再节省费用，可以将HBase Master和ZooKeeper放在同一台机器上,&nbsp;ZooKeeper和HBase Master要三个实例。NameNode和JobTracker放在同一台机器上。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">大型集群模式（&gt;50台服务器）</span></div>
<div><br />
</div>
和中型集群模式相似，但ZooKeeper和HBase Master要五个实例。NameNode和Second&nbsp;NameNode要有足够大的内存。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HADOOP MASTER节点</span></div>
<div><br />
</div>
NameNode和Second&nbsp;NameNode服务器配置要求：（小型）8CORE CPU，16G内存，1G网卡和SATA 硬盘，中弄再增加多16G内存，大型则再增加多32G内存。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HBASE MASTER节点</span></div>
<div><br />
</div>
服务器配置要求：4CORE CPU，8-16G内存，1G网卡和2个SATA 硬盘，一个用于操作系统，另一个用于HBASE MASTER LOGS。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">HADOOP DATA NODES和HBASE REGION SERVER节点</span></div>
<div><br />
</div>
DATA NODE和REGION SERVER应在同一台服务器上，且不应该和TASK TRACKER在一起。服务器配置要求：8-12CORE CPU，24-32G内存，1G网卡和12*1TB SATA 硬盘，一个用于操作系统，另一个用于HBASE MASTER LOGS。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">ZOOPKEEPERS节点</span></div>
<div><br />
</div>
服务器配置和HBASE MASTER相似，也可以与HBASE MASTER放在一起，但就要多增加一个硬盘单独给ZOOPKEEPER使用。</div>
<div><br />
</div>
<div>
<div><span style="background-color: yellow; color: red; ">安装各节点</span></div>
<div><br />
</div>
JVM配置：</div>
-Xmx8g&#8212;设置HEAP的最大值到8G，不建议设到15 GB.<br />
-Xms8g&#8212;设置HEAP的最小值到8GS.<br />
-Xmn128m&#8212;设置新生代的值到128 MB，默认值太小。<br />
-XX:+UseParNewGC&#8212;设置对于新生代的垃圾回收器类型，这种类型是会停止JAVA进程，然后再进行回收的，但由于新生代体积比较小，持续时间通常只有几毫秒，因此可以接受。<br />
-XX:+UseConcMarkSweepGC&#8212;设置老生代的垃圾回收类型，如果用新生代的那个会不合适，即会导致JAVA进程停止的时间太长，用这种不会停止JAVA进程，而是在JAVA进程运行的同时，并行的进行回收。<br />
-XX:CMSInitiatingOccupancyFraction&#8212;设置CMS回收器运行的频率。<br />
<div><br />
</div>
<div><br />
</div>
<div><br />
</div>
<div><br />
</div><img src ="http://www.blogjava.net/paulwong/aggbug/395101.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-02-04 12:10 <a href="http://www.blogjava.net/paulwong/archive/2013/02/04/395101.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Hadoop的几种Join方法</title><link>http://www.blogjava.net/paulwong/archive/2013/01/31/395000.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 31 Jan 2013 10:24:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/31/395000.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/395000.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/31/395000.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/395000.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/395000.html</trackback:ping><description><![CDATA[1)      在Reduce阶段进行Join,这样运算量比较小.(这个适合被Join的数据比较小的情况下.)<br />2)      压缩字段,对数据预处理,过滤不需要的字段.<br />3)      最后一步就是在Mapper阶段过滤,这个就是Bloom Filter的用武之地了.也就是需要详细说明的地方.<br /><br /> <br />下面就拿一个我们大家都熟悉的场景来说明这个问题: 找出上个月动感地带的客户资费的使用情况,包括接入和拨出.<br /><br />(这个只是我臆想出来的例子,根据实际的DB数据存储结构,在这个场景下肯定有更好的解决方案,大家不要太较真哦)<br /><br />这个时候的两个个数据集都是比较大的,这两个数据集分别是:上个月的通话记录,动感地带的手机号码列表.<br /><br /><br />比较直接的处理方法有2种:<br /><br /><strong>1)在 Reduce 阶段,通过动感地带号码来过滤.</strong><br /><br />                优点:这样需要处理的数据相对比较少,这个也是比较常用的方法.<br /><br />                缺点:很多数据在Mapper阶段花了老鼻子力气汇总了,还通过网络Shuffle到Reduce节点,结果到这个阶段给过滤了.<br /><br /> <br /><br /><strong>2)在 Mapper 阶段时,通过动感地带号码来过滤数据.</strong><br /><br />                优点:这样可以过滤很多不是动感地带的数据,比如神州行,全球通.这些过滤的数据就可以节省很多网络带宽了.<br /><br />                缺点:就是动感地带的号码不是小数目,如果这样处理就需要把这个大块头复制到所有的Mapper节点,甚至是Distributed Cache.(Bloom Filter就是用来解决这个问题的)<br /><br /><br />Bloom Filter就是用来解决上面方法2的缺点的.<br /><br />方法2的缺点就是大量的数据需要在多个节点复制.Bloom Filter通过多个Hash算法, 把这个号码列表压缩到了一个Bitmap里面. 通过允许一定的错误率来换空间, 这个和我们平时经常提到的时间和空间的互换类似.详细情况可以参考:<br /><br />http://blog.csdn.net/jiaomeng/article/details/1495500<br /><br />但是这个算法也是有缺陷的,就是会把很多神州行,全球通之类的号码当成动感地带.但在这个场景中,这根本不是问题.因为这个算法只是过滤一些号码,漏网之鱼会在Reduce阶段进行精确匹配时顾虑掉.<br /><br />这个方法改进之后基本上完全回避了方法2的缺点:<br /><br />1)      没有大量的动感地带号码发送到所有的Mapper节点.<br />2)      很多非动感地带号码在Mapper阶段就过滤了(虽然不是100%),避免了网络带宽的开销及延时.<br /><br /><br />继续需要学习的地方:Bitmap的大小, Hash函数的多少, 以及存储的数据的多少. 这3个变量如何取值才能才能在存储空间与错误率之间取得一个平衡.<img src ="http://www.blogjava.net/paulwong/aggbug/395000.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-31 18:24 <a href="http://www.blogjava.net/paulwong/archive/2013/01/31/395000.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>配置secondarynamenode</title><link>http://www.blogjava.net/paulwong/archive/2013/01/31/394998.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 31 Jan 2013 09:39:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/31/394998.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394998.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/31/394998.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394998.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394998.html</trackback:ping><description><![CDATA[NAME NODE起保存DATA NODE上文件的位置信息用，主要有两个保存文件：FsImage和EditLog，FsImage保存了上一次NAME NODE启动时的状态，EditLog则记录每次成功后的对HDFS的操作行为。当NAME NODE重启时，会合并FsImage和EditLog成为一个新的FsImage，清空EditLog，如果EditLog非常大的时候，则NAME NODE启动的时间会非常长。因此就有SECOND NAME NODE。<br /><br /><br />SECOND NAME NODE会以HTTP的方式向NAME NODE要这两个文件，当NAME NODE收到请求时，就会韦一个新的EditLog来记录，这时SECOND NAME NODE就会将取得的这两个文件合并，成一个新的FsImage，再发给NAME NODE，NAME NODE收到后，就会以这个为准，旧的就会归档不用。<br /><br /><br />SECOND NAME NODE还有一个用途就是当NAME NODE DOWN了的时候，可以改SECOND NAME NODE的IP为NAME NODE所用的IP，当NAME NODE用。<br /><br />secondary namenoded 配置很容易被忽视，如果jps检查都正常，大家通常不会太关心，除非namenode发生问题的时候，才会想起还有个secondary namenode，它的配置共两步：<br />
<br />
<ol>
     <li>集群配置文件conf/master中添加secondarynamenode的机器</li>
     <li>修改/添加 hdfs-site.xml中如下属性：<br />
     <br />
     </li>
</ol>
<div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
--><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>dfs.http.address<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>{your_namenode_ip}:50070<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;The&nbsp;address&nbsp;and&nbsp;the&nbsp;base&nbsp;port&nbsp;where&nbsp;the&nbsp;dfs&nbsp;namenode&nbsp;web&nbsp;ui&nbsp;will&nbsp;listen&nbsp;on.<br />
&nbsp;If&nbsp;the&nbsp;port&nbsp;is&nbsp;0&nbsp;then&nbsp;the&nbsp;server&nbsp;will&nbsp;start&nbsp;on&nbsp;a&nbsp;free&nbsp;port.<br />
&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br />
&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span></div>
<br />
<br />
这两项配置OK后，启动集群。进入secondary namenode 机器，检查fs.checkpoint.dir（core-site.xml文件，默认为${hadoop.tmp.dir}/dfs/namesecondary）目录同步状态是否和namenode一致的。<br />
<br />
如果不配置第二项则，secondary namenode同步文件夹永远为空，这时查看secondary namenode的log显示错误为：<br />
<br />
<br />
<div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
-->2011-06-09&nbsp;11:06:41,430&nbsp;INFO&nbsp;org.apache.hadoop.hdfs.server.common.Storage:&nbsp;Recovering&nbsp;storage&nbsp;directory&nbsp;/tmp/hadoop-hadoop/dfs/namesecondary&nbsp;from&nbsp;failed&nbsp;checkpoint.<br />
2011-06-09&nbsp;11:06:41,433&nbsp;ERROR&nbsp;org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:&nbsp;Exception&nbsp;in&nbsp;doCheckpoint:&nbsp;<br />
2011-06-09&nbsp;11:06:41,434&nbsp;ERROR&nbsp;org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:&nbsp;java.net.ConnectException:&nbsp;Connection&nbsp;refused<br />
at&nbsp;java.net.PlainSocketImpl.socketConnect(Native&nbsp;Method)<br />
at&nbsp;java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)<br />
at&nbsp;java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)<br />
at&nbsp;java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)<br />
at&nbsp;java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)<br />
at&nbsp;java.net.Socket.connect(Socket.java:529)<br />
at&nbsp;java.net.Socket.connect(Socket.java:478)<br />
at&nbsp;sun.net.NetworkClient.doConnect(NetworkClient.java:163)<br />
at&nbsp;sun.net.www.http.HttpClient.openServer(HttpClient.java:394)<br />
at&nbsp;sun.net.www.http.HttpClient.openServer(HttpClient.java:529)<br />
at&nbsp;sun.net.www.http.HttpClient.&lt;init&gt;(HttpClient.java:233)<br />
at&nbsp;sun.net.www.http.HttpClient.New(HttpClient.java:306)<br />
at&nbsp;sun.net.www.http.HttpClient.New(HttpClient.java:323)<br />
at&nbsp;sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)<br />
at&nbsp;sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)<br />
at&nbsp;sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)<br />
at&nbsp;sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)<br />
at&nbsp;org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)<br />
at&nbsp;org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:256)<br />
at&nbsp;org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:313)<br />
at&nbsp;org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)<br />
at&nbsp;java.lang.Thread.run(Thread.java:662)</div><br /><br /><span style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff;">可能用到的core-site.xml文件相关属性</span><span style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff;">：<br /><br /></span><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />--><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>fs.checkpoint.period<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>300<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>The&nbsp;number&nbsp;of&nbsp;seconds&nbsp;between&nbsp;two&nbsp;periodic&nbsp;checkpoints.<br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>fs.checkpoint.dir<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>${hadoop.tmp.dir}/dfs/namesecondary<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span>Determines&nbsp;where&nbsp;on&nbsp;the&nbsp;local&nbsp;filesystem&nbsp;the&nbsp;DFS&nbsp;secondary<br />&nbsp;name&nbsp;node&nbsp;should&nbsp;store&nbsp;the&nbsp;temporary&nbsp;images&nbsp;to&nbsp;merge.<br />&nbsp;If&nbsp;this&nbsp;is&nbsp;a&nbsp;comma-delimited&nbsp;list&nbsp;of&nbsp;directories&nbsp;then&nbsp;the&nbsp;image&nbsp;is<br />&nbsp;replicated&nbsp;in&nbsp;all&nbsp;of&nbsp;the&nbsp;directories&nbsp;for&nbsp;redundancy.<br />&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">description</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span></div><img src ="http://www.blogjava.net/paulwong/aggbug/394998.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-31 17:39 <a href="http://www.blogjava.net/paulwong/archive/2013/01/31/394998.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>配置Hadoop M/R 采用Fair Scheduler算法代替FIFO</title><link>http://www.blogjava.net/paulwong/archive/2013/01/31/394997.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 31 Jan 2013 09:30:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/31/394997.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394997.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/31/394997.html#Feedback</comments><slash:comments>1</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394997.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394997.html</trackback:ping><description><![CDATA[采用Cloudera版本的hadoop/hbase:<br /><br />hadoop-0.20.2-cdh3u0<br /><br />hbase-0.90.1-cdh3u0<br /><br />zookeeper-3.3.3-cdh3u0<br /><br />默认已支持FairScheduler调度算法.<br /><br />只需改配置使期用FairSchedule而非默认的JobQueueTaskScheduler即可.<br /><br />配置fair-scheduler.xml (/$HADOOP_HOME/conf/):<br /><br /><div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />--><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.fairscheduler.allocation.file<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>[HADOOP_HOME]/conf/fair-scheduler.xml<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">allocations</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">pool&nbsp;</span><span style="color: #FF0000; ">name</span><span style="color: #0000FF; ">="qiji-task-pool"</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">minMaps</span><span style="color: #0000FF; ">&gt;</span>5<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">minMaps</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">minReduces</span><span style="color: #0000FF; ">&gt;</span>5<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">minReduces</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span>5<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">minSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span>300<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">minSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">weight</span><span style="color: #0000FF; ">&gt;</span>1.0<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">weight</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">pool</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">user&nbsp;</span><span style="color: #FF0000; ">name</span><span style="color: #0000FF; ">="ecap"</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span>6<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">maxRunningJobs</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">user</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">poolMaxJobsDefault</span><span style="color: #0000FF; ">&gt;</span>10<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">poolMaxJobsDefault</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">userMaxJobsDefault</span><span style="color: #0000FF; ">&gt;</span>8<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">userMaxJobsDefault</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">defaultMinSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span>600<br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">defaultMinSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">fairSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span>600<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">fairSharePreemptionTimeout</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">allocations</span><span style="color: #0000FF; ">&gt;</span></div><br /><br /><br />配置$HADOOP_HOME/conf/mapred-site.xml,最后添加:<br /><br /><div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br /><br />--><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.jobtracker.taskScheduler<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>org.apache.hadoop.mapred.FairScheduler<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.fairscheduler.allocation.file<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>/opt/hadoop/conf/fair-scheduler.xml<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.fairscheduler.assignmultiple<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>true<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.fairscheduler.sizebasedweight<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>true<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br /><span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span></div><br /><br /><br />然后重新运行集群,这样有几个Job(上面配置是5个并行)并行运行时,不会因为一个Job把Map/Reduce占满而使其它Job处于Pending状态.<br /><br />可从: http://&lt;masterip&gt;:50030/scheduler查看并行运行的状态.<img src ="http://www.blogjava.net/paulwong/aggbug/394997.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-31 17:30 <a href="http://www.blogjava.net/paulwong/archive/2013/01/31/394997.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>大规模数据查重的多种方法，及Bloom Filter的应用</title><link>http://www.blogjava.net/paulwong/archive/2013/01/31/394980.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Thu, 31 Jan 2013 05:55:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/31/394980.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394980.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/31/394980.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394980.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394980.html</trackback:ping><description><![CDATA[挺有意思的题目。<br /><br /><br /><strong>1. 给你A,B两个文件，各存放50亿条URL，每条URL占用64字节，内存限制是4G，让你找出:A,B文件共同的URL。</strong>  <br />解法一：Hash成内存大小的小块文件，然后分块内存内查交集。<br />解法二：Bloom Filter（广泛应用于URL过滤、查重。参考http://en.wikipedia.org/wiki/Bloom_filter、http://blog.csdn.net/jiaomeng/archive/2007/01/28/1496329.aspx）<br /><br /><br /><strong>2. 有10个文件，每个文件1G， 每个文件的每一行都存放的是用户的query，每个文件的query都可能重复。要你按照query的频度排序。</strong><br />解法一：根据数据稀疏程度算法会有不同，通用方法是用Hash把文件重排，让相同query一定会在同一个文件，同时进行计数，然后归并，用最小堆来统计频度最大的。<br />解法二：类似1，但是用的是与简单Bloom Filter稍有不同的CBF（Counting Bloom Filter）或者更进一步的SBF（Spectral Bloom Filter，参考http://blog.csdn.net/jiaomeng/archive/2007/03/19/1534238.aspx）<br />解法三：MapReduce，几分钟可以在hadoop集群上搞定。参考http://en.wikipedia.org/wiki/MapReduce<br /><br /><br /><strong>3. 有一个1G大小的一个文件，里面每一行是一个词，词的大小不超过16个字节，内存限制大小是1M。返回频数最高的100个词。</strong><br />解法一：跟2类似，只是不需要排序，各个文件分别统计前100，然后一起找前100。<img src ="http://www.blogjava.net/paulwong/aggbug/394980.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-31 13:55 <a href="http://www.blogjava.net/paulwong/archive/2013/01/31/394980.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Cassandra VS. HBase 全文zz</title><link>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 29 Jan 2013 16:22:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394902.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394902.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394902.html</trackback:ping><description><![CDATA[<div id="content" mod-cs-content="" text-content=""  clearfix"="" style="zoom: 1; width: 758px; overflow: hidden; line-height: 1.5; margin: 7px 0px 10px; color: #454545; font-family: tahoma, helvetica, arial;"><p style="margin: 0px; padding: 0px;">摘取了一部分，全文请查看</p><p style="margin: 0px; padding: 0px;"><a href="http://blog.csdn.net/anghlq/article/details/6538115" target="_blank" style="color: #3fa7cb;"></a></p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong><a href="http://blog.sina.com.cn/s/blog_633f4ab20100r9nm.html" target="_blank" style="color: #3fa7cb;">http://blog.sina.com.cn/s/blog_633f4ab20100r9nm.html</a><br /></strong></p><p style="margin: 0px; padding: 0px;"><strong>背景</strong></p><p style="margin: 0px; padding: 0px;">&#8220;这是最好的时代，也是最坏的时代。&#8221;&nbsp;</p><p style="margin: 0px; padding: 0px;">每个时代的人都在这么形容自己所处的时代。在一次次IT浪潮下面，有人觉得当下乏味无聊，有人却能锐意进取，找到突破。数据存储这个话题自从有了计算机之后，就一直是一个有趣或者无聊的主题。上世纪七十年代，关系数据库理论的出现，造就了一批又一批传奇，并推动整个世界信息化到了一个新的高度。而进入新千年以来，随着SNS等应用的出现，传统的SQL数据库已经越来越不适应海量数据的处理了。于是，这几年NoSQL数据库的呼声也越来越高。</p><p style="margin: 0px; padding: 0px;">在NoSQL数据库当中，呼声最高的是HBase和Cassandra两个。虽然严格意义上来说，两者服务的目的有所不同，侧重点也不尽相同，但是作为当前开源NoSQL数据库的佼佼者，两者经常被用来做各种比较。</p><p style="margin: 0px; padding: 0px;">去年十月，Facebook推出了他的新的Message系统。Facebook宣布他们采用HBase作为后台存储系统。这引起了一片喧哗声。因为Cassandra恰恰是Facebook开发，并且于2008年开源。这让很多人惊呼，是否是Cassandra已经被Facebook放弃了？HBase在这场NoSQL数据库的角力当中取得了决定性的胜利？本文打算主要从技术角度分析，HBase和Cassandra的异同，并非要给出任何结论，只是共享自己研究的一些结果。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;"><strong>选手简介</strong></p><p style="margin: 0px; padding: 0px;"><strong>HBase</strong></p><p style="margin: 0px; padding: 0px;">HBase是一个开源的分布式存储系统。他可以看作是Google的Bigtable的开源实现。如同Google的Bigtable使用Google File System一样，HBase构建于和Google File System类似的Hadoop HDFS之上。</p><p style="margin: 0px; padding: 0px;"><strong>Cassandra</strong></p><p style="margin: 0px; padding: 0px;">Cassandra可以看作是Amazon Dynamo的开源实现。和Dynamo不同之处在于，Cassandra结合了Google Bigtable的ColumnFamily的数据模型。可以简单地认为，Cassandra是一个P2P的，高可靠性并具有丰富的数据模型的分布式文件系统。</p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong>分布式文件系统的指标</strong></p><p style="margin: 0px; padding: 0px;">根据UC Berkeley的教授Eric Brewer于2000年提出猜测- CAP定理，一个分布式计算机系统，不可能同时满足以下三个指标：</p>Consistency 所有节点在同一时刻保持同一状态Availability 某个节点失败，不会影响系统的正常运行Partition tolerance 系统可以因为网络故障等原因被分裂成小的子系统，而不影响系统的运行<p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">Brewer教授推测，任何一个系统，同时只能满足以上两个指标。</p><p style="margin: 0px; padding: 0px;">在2002年，MIT的Seth Gilbert和Nancy Lynch发表正式论文论证了CAP定理。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">而HBase和Cassandra两者都属于分布式计算机系统。但是其设计的侧重点则有所不同。HBase继承于Bigtable的设计，侧重于CA。而Cassandra则继承于Dynamo的设计，侧重于AP。</p><p style="margin: 0px; padding: 0px;"></p>。。。。。。。。。。。。。。。。。。。<p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;"><strong>特性比较</strong></p><p style="margin: 0px; padding: 0px;">由于HBase和Cassandra的数据模型比较接近，所以这里就不再比较两者之间数据模型的异同了。接下来主要比较双方在数据一致性、多拷贝复制的特性。</p><p style="margin: 0px; padding: 0px;"><strong>HBase</strong></p><p style="margin: 0px; padding: 0px;">HBase保证写入的一致性。当一份数据被要求复制N份的时候，只有N份数据都被真正复制到N台服务器上之后，客户端才会成功返回。如果在复制过程中出现失败，所有的复制都将失败。连接上任何一台服务器的客户端都无法看到被复制的数据。HBase提供行锁，但是不提供多行锁和事务。HBase基于HDFS，因此数据的多份复制功能和可靠性将由HDFS提供。HBase和MapReduce天然集成。</p><p style="margin: 0px; padding: 0px;"><strong>Cassandra</strong></p><p style="margin: 0px; padding: 0px;">写入的时候，有多种模式可以选择。当一份数据模式被要求复制N份的时候，可以立即返回，可以成功复制到一个服务器之后返回，可以等到全部复制到N份服务器之后返回，还可以设定一个复制到quorum份服务器之后返回。Quorum后面会有具体解释。复制不会失败。最终所有节点数据都将被写入。而在未被完全写入的时间间隙，连接到不同服务器的客户端有可能读到不同的数据。在集群里面，所有的服务器都是等价的。不存在任何一个单点故障。节点和节点之间通过Gossip协议互相通信。写入顺序按照timestamp排序，不提供行锁。新版本的Cassandra已经集成了MapReduce了。</p><p style="margin: 0px; padding: 0px;">相对于配置Cassandra，配置HBase是一个艰辛、复杂充满陷阱的工作。Facebook关于为何采取HBase，里面有一句，大意是，Facebook长期以来一直关注HBase的开发并且有一只专门的经验丰富的HBase维护的team来负责HBase的安装和维护。可以想象，Facebook内部关于使用HBase和Cassandra有过激烈的斗争，最终人数更多的HBase&nbsp;team占据了上风。对于大公司来说，养一只相对庞大的类似DBA的team来维护HBase不算什么大的开销，但是对于小公司，这实在不是一个可以负担的起的开销。</p><p style="margin: 0px; padding: 0px;">另外HBase在高可靠性上有一个很大的缺陷，就是HBase依赖HDFS。HDFS是Google File&nbsp;System的复制品，NameNode是HDFS的单点故障点。而到目前为止，HDFS还没有加入NameNode的自我恢复功能。不过我相信，Facebook在内部一定有恢复NameNode的手段，只是没有开源出来而已。</p><p style="margin: 0px; padding: 0px;">相反，Cassandra的P2P和去中心化设计，没有可能出现单点故障。从设计上来看，Cassandra比HBase更加可靠。</p><p style="margin: 0px; padding: 0px;"><strong>关于数据一致性，实际上，Cassandra也可以以牺牲响应时间的代价来获得和HBase一样的一致性。而且，通过对Quorum的合适的设置，可以在响应时间和数据一致性得到一个很好的折衷值。</strong></p>Cassandra优缺点<p style="margin: 0px; padding: 0px;">主要表现在：</p><p style="margin: 0px; padding: 0px;">配置简单，不需要多模块协同操作。功能灵活性强，数据一致性和性能之间，可以根据应用不同而做不同的设置。&nbsp;可靠性更强，没有单点故障。</p><p style="margin: 0px; padding: 0px;">尽管如此，Cassandra就没有弱点吗？当然不是，Cassandra有一个致命的弱点。</p><p style="margin: 0px; padding: 0px;"></p><p style="margin: 0px; padding: 0px;">这就是存储大文件。虽然说，Cassandra的设计初衷就不是存储大文件，但是Amazon的S3实际上就是基于Dynamo构建的，总是会让人想入非非地让Cassandra去存储超大文件。而和Cassandra不同，HBase基于HDFS，HDFS的设计初衷就是存储超大规模文件并且提供最大吞吐量和最可靠的可访问性。因此，从这一点来说，Cassandra由于背后不是一个类似HDFS的超大文件存储的文件系统，对于存储那种巨大的（几百T甚至P）的超大文件目前是无能为力的。而且就算由Client手工去分割，这实际上是非常不明智和消耗Client CPU的工作的。</p><p style="margin: 0px; padding: 0px;">因此，如果我们要构建一个类似Google的搜索引擎，最少，HDFS是我们所必不可少的。虽然目前HDFS的NameNode还是一个单点故障点，但是相应的Hack可以让NameNode变得更皮实。基于HDFS的HBase相应地，也更适合做搜索引擎的背后倒排索引数据库。事实上，Lucene和HBase的结合，远比Lucene结合Cassandra的项目Lucandra要顺畅和高效的多。（Lucandra要求Cassandra使用OrderPreservingPartitioner,这将可能导致Key的分布不均匀，而无法做负载均衡，产生访问热点机器）。</p><p style="margin: 0px; padding: 0px;">&nbsp;</p><p style="margin: 0px; padding: 0px;">所以我的结论是，在这个需求多样化的年代，没有赢者通吃的事情。而且我也越来越不相信在工程界存在一劳永逸和一成不变的解决方案。<strong>当你仅仅是存储海量增长的消息数据，存储海量增长的图片，小视频的时候，你要求数据不能丢失，你要求人工维护尽可能少，你要求能迅速通过添加机器扩充存储，那么毫无疑问，Cassandra现在是占据上风的。</strong></p><p style="margin: 0px; padding: 0px;">但是<strong>如果你希望构建一个超大规模的搜索引擎，产生超大规模的倒排索引文件（当然是逻辑上的文件，真实文件实际上被切分存储于不同的节点上），那么目前HDFS+HBase是你的首选。</strong></p><p style="margin: 0px; padding: 0px;">就让这个看起来永远正确的结论结尾吧，上帝的归上帝，凯撒的归凯撒。大家都有自己的地盘，野百合也会有春天的！</p></div><img src ="http://www.blogjava.net/paulwong/aggbug/394902.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-30 00:22 <a href="http://www.blogjava.net/paulwong/archive/2013/01/30/394902.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>NOSQL之旅---HBase(转)</title><link>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 29 Jan 2013 15:50:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394901.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html#Feedback</comments><slash:comments>1</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394901.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394901.html</trackback:ping><description><![CDATA[<a href="http://www.jdon.com/38244" target="_blank">http://www.jdon.com/38244</a><br /><br />最近因为项目原因，研究了Cassandra,Hbase等几个NoSQL数据库，最终决定采用HBase。在这里，我就向大家分享一下自己对HBase的理解。<br /><br />在说HBase之前，我想再唠叨几句。做互联网应用的哥们儿应该都清楚，互联网应用这东西，你没办法预测你的系统什么时候会被多少人访问，你面临的用户到底有多少，说不定今天你的用户还少，明天系统用户就变多了，结果您的系统应付不过来了了，不干了，这岂不是咱哥几个的悲哀，说时髦点就叫&#8220;杯具啊&#8221;。<br /><br />其实说白了，这些就是事先没有认清楚互联网应用什么才是最重要的。从系统架构的角度来说，互联网应用更加看重系统性能以及伸缩性，而传统企业级应用都是比较看重数据完整性和数据安全性。那么我们就来说说互联网应用伸缩性这事儿.对于伸缩性这事儿，哥们儿我也写了几篇博文，想看的兄弟可以参考我以前的博文，对于web server,app server的伸缩性，我在这里先不说了，因为这部分的伸缩性相对来说比较容易一点，我主要来回顾一些一个慢慢变大的互联网应用如何应对数据库这一层的伸缩。<br /><br />首先刚开始，人不多，压力也不大,搞一台数据库服务器就搞定了，此时所有的东东都塞进一个Server里，包括web server,app server,db server,但是随着人越来越多，系统压力越来越多，这个时候可能你把web server,app server和db server分离了，好歹这样可以应付一阵子，但是随着用户量的不断增加，你会发现，数据库这哥们不行了，速度老慢了，有时候还会宕掉，所以这个时候，你得给数据库这哥们找几个伴，这个时候Master-Salve就出现了，这个时候有一个Master Server专门负责接收写操作，另外的几个Salve Server专门进行读取，这样Master这哥们终于不抱怨了，总算读写分离了，压力总算轻点了,这个时候其实主要是对读取操作进行了水平扩张，通过增加多个Salve来克服查询时CPU瓶颈。一般这样下来，你的系统可以应付一定的压力，但是随着用户数量的增多，压力的不断增加，你会发现Master server这哥们的写压力还是变的太大，没办法，这个时候怎么办呢？你就得切分啊，俗话说&#8220;只有切分了，才会有伸缩性嘛&#8221;，所以啊，这个时候只能分库了，这也是我们常说的数据库&#8220;垂直切分&#8221;，比如将一些不关联的数据存放到不同的库中，分开部署，这样终于可以带走一部分的读取和写入压力了，Master又可以轻松一点了，但是随着数据的不断增多，你的数据库表中的数据又变的非常的大，这样查询效率非常低，这个时候就需要进行&#8220;水平分区&#8221;了，比如通过将User表中的数据按照10W来划分，这样每张表不会超过10W了。<br /><br />综上所述，一般一个流行的web站点都会经历一个从单台DB，到主从复制，到垂直分区再到水平分区的痛苦的过程。其实数据库切分这事儿，看起来原理貌似很简单，如果真正做起来，我想凡是sharding过数据库的哥们儿都深受其苦啊。对于数据库伸缩的文章，哥们儿可以看看后面的参考资料介绍。<br /><br />好了，从上面的那一堆废话中，我们也发现数据库存储水平扩张scale out是多么痛苦的一件事情，不过幸好技术在进步，业界的其它弟兄也在努力，09年这一年出现了非常多的NoSQL数据库，更准确的应该说是No relation数据库，这些数据库多数都会对非结构化的数据提供透明的水平扩张能力，大大减轻了哥们儿设计时候的压力。下面我就拿Hbase这分布式列存储系统来说说。<br /><br />一 Hbase是个啥东东？ <br />在说Hase是个啥家伙之前，首先我们来看看两个概念，面向行存储和面向列存储。面向行存储，我相信大伙儿应该都清楚，我们熟悉的RDBMS就是此种类型的，面向行存储的数据库主要适合于事务性要求严格场合，或者说面向行存储的存储系统适合OLTP，但是根据CAP理论，传统的RDBMS，为了实现强一致性，通过严格的ACID事务来进行同步，这就造成了系统的可用性和伸缩性方面大大折扣，而目前的很多NoSQL产品，包括Hbase，它们都是一种最终一致性的系统，它们为了高的可用性牺牲了一部分的一致性。好像，我上面说了面向列存储，那么到底什么是面向列存储呢？Hbase,Casandra,Bigtable都属于面向列存储的分布式存储系统。看到这里，如果您不明白Hbase是个啥东东，不要紧，我再总结一下下：<br /><br />Hbase是一个面向列存储的分布式存储系统，它的优点在于可以实现高性能的并发读写操作，同时Hbase还会对数据进行透明的切分，这样就使得存储本身具有了水平伸缩性。<br /><br /><br />二 Hbase数据模型 <br />HBase,Cassandra的数据模型非常类似，他们的思想都是来源于Google的Bigtable，因此这三者的数据模型非常类似，唯一不同的就是Cassandra具有Super cloumn family的概念，而Hbase目前我没发现。好了，废话少说，我们来看看Hbase的数据模型到底是个啥东东。<br /><br />在Hbase里面有以下两个主要的概念，Row key,Column Family，我们首先来看看Column family,Column family中文又名&#8220;列族&#8221;，Column family是在系统启动之前预先定义好的，每一个Column Family都可以根据&#8220;限定符&#8221;有多个column.下面我们来举个例子就会非常的清晰了。<br /><br />假如系统中有一个User表，如果按照传统的RDBMS的话，User表中的列是固定的，比如schema 定义了name,age,sex等属性，User的属性是不能动态增加的。但是如果采用列存储系统，比如Hbase，那么我们可以定义User表，然后定义info 列族，User的数据可以分为：info:name = zhangsan,info:age=30,info:sex=male等，如果后来你又想增加另外的属性，这样很方便只需要info:newProperty就可以了。<br /><br />也许前面的这个例子还不够清晰，我们再举个例子来解释一下，熟悉SNS的朋友，应该都知道有好友Feed，一般设计Feed，我们都是按照&#8220;某人在某时做了标题为某某的事情&#8221;，但是同时一般我们也会预留一下关键字，比如有时候feed也许需要url，feed需要image属性等，这样来说，feed本身的属性是不确定的，因此如果采用传统的关系数据库将非常麻烦，况且关系数据库会造成一些为null的单元浪费，而列存储就不会出现这个问题，在Hbase里，如果每一个column 单元没有值，那么是占用空间的。下面我们通过两张图来形象的表示这种关系：<br /><br /><br /><br /><br />上图是传统的RDBMS设计的Feed表，我们可以看出feed有多少列是固定的，不能增加，并且为null的列浪费了空间。但是我们再看看下图，下图为Hbase，Cassandra,Bigtable的数据模型图，从下图可以看出，Feed表的列可以动态的增加，并且为空的列是不存储的，这就大大节约了空间，关键是Feed这东西随着系统的运行，各种各样的Feed会出现，我们事先没办法预测有多少种Feed，那么我们也就没有办法确定Feed表有多少列，因此Hbase,Cassandra,Bigtable的基于列存储的数据模型就非常适合此场景。说到这里，采用Hbase的这种方式，还有一个非常重要的好处就是Feed会自动切分，当Feed表中的数据超过某一个阀值以后，Hbase会自动为我们切分数据，这样的话，查询就具有了伸缩性，而再加上Hbase的弱事务性的特性，对Hbase的写入操作也将变得非常快。<br /><br /><br /><br />上面说了Column family，那么我之前说的Row key是啥东东，其实你可以理解row key为RDBMS中的某一个行的主键，但是因为Hbase不支持条件查询以及Order by等查询，因此Row key的设计就要根据你系统的查询需求来设计了额。我还拿刚才那个Feed的列子来说，我们一般是查询某个人最新的一些Feed，因此我们Feed的Row key可以有以下三个部分构成&lt;userId&gt;&lt;timestamp&gt;&lt;feedId&gt;，这样以来当我们要查询某个人的最进的Feed就可以指定Start Rowkey为&lt;userId&gt;&lt;0&gt;&lt;0&gt;，End Rowkey为&lt;userId&gt;&lt;Long.MAX_VALUE&gt;&lt;Long.MAX_VALUE&gt;来查询了，同时因为Hbase中的记录是按照rowkey来排序的，这样就使得查询变得非常快。<br /><br /><br />三 Hbase的优缺点 <br />1 列的可以动态增加，并且列为空就不存储数据,节省存储空间.<br /><br />2 Hbase自动切分数据，使得数据存储自动具有水平scalability.<br /><br />3 Hbase可以提供高并发读写操作的支持<br /><br />Hbase的缺点：<br /><br />1 不能支持条件查询，只支持按照Row key来查询.<br /><br />2 暂时不能支持Master server的故障切换,当Master宕机后,整个存储系统就会挂掉.<br /><br /><br /><br />关于数据库伸缩性的一点资料：<br /><a href="http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/" target="_blank">http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/</a><br /><br /><a href="http://adam.blog.heroku.com/past/2009/7/6/sql_databases_dont_scale/" target="_blank">http://adam.blog.heroku.com/past/2009/7/6/sql_databases_dont_scale/</a><img src ="http://www.blogjava.net/paulwong/aggbug/394901.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-29 23:50 <a href="http://www.blogjava.net/paulwong/archive/2013/01/29/394901.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>MAPREDUCE运行原理</title><link>http://www.blogjava.net/paulwong/archive/2013/01/29/394872.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Tue, 29 Jan 2013 04:54:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/29/394872.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394872.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/29/394872.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394872.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394872.html</trackback:ping><description><![CDATA[<ol>
     <li>将INPUT通过SPLIT成M个MAP任务<br />
     </li>
     <br />
     <li>JOB TRACKER将这M个任务分派给TASK TRACKER执行
     </li>
     <br />
     <li>TASK TRACKER执行完MAP任务后，会在本地生成文件，然后通知JOB TRACKER
     </li>
     <br />
     <li>JOB TRACKER收到通知后，将此任务标记为已完成，如果收到失败的消息，会将此任务重置为原始状态，再分派给另一TASK TRACKER执行
     </li>
     <br />
     <li>当所有的MAP任务完成后，JOB TRACKER将MAP执行后生成的LIST重新整理，整合相同的KEY，根据KEY的数量生成R个REDUCE任务，再分派给TASK TRACKER执行
     </li>
     <br />
     <li>TASK TRACKER执行完REDUCE任务后，会在HDFS生成文件，然后通知JOB TRACKER<br />
     <br />
     </li>
     <br />
     <li>JOB TRACKER等到所有的REDUCE任务执行完后，进行合并，产生最后结果，通知CLIENT<br />
     <br />
     </li>
     <br />
     <li>TASK TRACKER执行完MAP任务时，可以重新生成新的KEY VALUE对，从而影响REDUCE个数<br />
     <br />
     </li>
     <br />
</ol><img src ="http://www.blogjava.net/paulwong/aggbug/394872.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-29 12:54 <a href="http://www.blogjava.net/paulwong/archive/2013/01/29/394872.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Windows环境下用ECLIPSE提交MAPREDUCE JOB至远程HBASE中运行</title><link>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 28 Jan 2013 16:19:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394851.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394851.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394851.html</trackback:ping><description><![CDATA[<ol>
     <li>假设远程HADOOP主机名为ubuntu，则应在hosts文件中加上192.168.58.130 &nbsp; &nbsp; &nbsp; ubuntu<br />
     <br /><br />
     </li>
     <li>新建MAVEN项目，加上相应的配置<br />
     pom.xml<br />
     <div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all;"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">project&nbsp;</span><span style="color: #FF0000; ">xmlns</span><span style="color: #0000FF; ">="http://maven.apache.org/POM/4.0.0"</span><span style="color: #FF0000; ">&nbsp;xmlns:xsi</span><span style="color: #0000FF; ">="http://www.w3.org/2001/XMLSchema-instance"</span><span style="color: #FF0000; "><br />
     &nbsp;&nbsp;xsi:schemaLocation</span><span style="color: #0000FF; ">="http://maven.apache.org/POM/4.0.0&nbsp;http://maven.apache.org/xsd/maven-4.0.0.xsd"</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">modelVersion</span><span style="color: #0000FF; ">&gt;</span>4.0.0<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">modelVersion</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>com.cloudputing<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>bigdata<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>1.0<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">packaging</span><span style="color: #0000FF; ">&gt;</span>jar<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">packaging</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>bigdata<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">url</span><span style="color: #0000FF; ">&gt;</span>http://maven.apache.org<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">url</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">properties</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">project</span><span style="color: #FF0000; ">.build.sourceEncoding</span><span style="color: #0000FF; ">&gt;</span>UTF-8<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">project.build.sourceEncoding</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">properties</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependencies</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>junit<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>junit<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>3.8.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">scope</span><span style="color: #0000FF; ">&gt;</span>test<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">scope</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.springframework.data<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>spring-data-hadoop<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>0.9.0.RELEASE<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.apache.hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>0.94.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;&lt;dependency&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;groupId&gt;org.apache.hbase&lt;/groupId&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;artifactId&gt;hbase&lt;/artifactId&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;version&gt;0.90.2&lt;/version&gt;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;/dependency&gt;&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.apache.hadoop<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>hadoop-core<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>1.0.3<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span>org.springframework<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">groupId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span>spring-test<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">artifactId</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span>3.0.5.RELEASE<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">version</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependency</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">dependencies</span><span style="color: #0000FF; ">&gt;</span><br />
     <span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">project</span><span style="color: #0000FF; ">&gt;</span></div>
     </li>
     <br /><br />
     <li>
     <div>hbase-site.xml<br />
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml&nbsp;version="1.0"</span><span style="color: #0000FF; ">?&gt;</span><br />
     <span style="color: #0000FF; ">&lt;?</span><span style="color: #FF00FF; ">xml-stylesheet&nbsp;type="text/xsl"&nbsp;href="configuration.xsl"</span><span style="color: #0000FF; ">?&gt;</span><br />
     <span style="color: #008000; ">&lt;!--</span><span style="color: #008000; "><br />
     /**<br />
     &nbsp;*&nbsp;Copyright&nbsp;2010&nbsp;The&nbsp;Apache&nbsp;Software&nbsp;Foundation<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;Licensed&nbsp;to&nbsp;the&nbsp;Apache&nbsp;Software&nbsp;Foundation&nbsp;(ASF)&nbsp;under&nbsp;one<br />
     &nbsp;*&nbsp;or&nbsp;more&nbsp;contributor&nbsp;license&nbsp;agreements.&nbsp;&nbsp;See&nbsp;the&nbsp;NOTICE&nbsp;file<br />
     &nbsp;*&nbsp;distributed&nbsp;with&nbsp;this&nbsp;work&nbsp;for&nbsp;additional&nbsp;information<br />
     &nbsp;*&nbsp;regarding&nbsp;copyright&nbsp;ownership.&nbsp;&nbsp;The&nbsp;ASF&nbsp;licenses&nbsp;this&nbsp;file<br />
     &nbsp;*&nbsp;to&nbsp;you&nbsp;under&nbsp;the&nbsp;Apache&nbsp;License,&nbsp;Version&nbsp;2.0&nbsp;(the<br />
     &nbsp;*&nbsp;"License");&nbsp;you&nbsp;may&nbsp;not&nbsp;use&nbsp;this&nbsp;file&nbsp;except&nbsp;in&nbsp;compliance<br />
     &nbsp;*&nbsp;with&nbsp;the&nbsp;License.&nbsp;&nbsp;You&nbsp;may&nbsp;obtain&nbsp;a&nbsp;copy&nbsp;of&nbsp;the&nbsp;License&nbsp;at<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;http://www.apache.org/licenses/LICENSE-2.0<br />
     &nbsp;*<br />
     &nbsp;*&nbsp;Unless&nbsp;required&nbsp;by&nbsp;applicable&nbsp;law&nbsp;or&nbsp;agreed&nbsp;to&nbsp;in&nbsp;writing,&nbsp;software<br />
     &nbsp;*&nbsp;distributed&nbsp;under&nbsp;the&nbsp;License&nbsp;is&nbsp;distributed&nbsp;on&nbsp;an&nbsp;"AS&nbsp;IS"&nbsp;BASIS,<br />
     &nbsp;*&nbsp;WITHOUT&nbsp;WARRANTIES&nbsp;OR&nbsp;CONDITIONS&nbsp;OF&nbsp;ANY&nbsp;KIND,&nbsp;either&nbsp;express&nbsp;or&nbsp;implied.<br />
     &nbsp;*&nbsp;See&nbsp;the&nbsp;License&nbsp;for&nbsp;the&nbsp;specific&nbsp;language&nbsp;governing&nbsp;permissions&nbsp;and<br />
     &nbsp;*&nbsp;limitations&nbsp;under&nbsp;the&nbsp;License.<br />
     &nbsp;*/<br />
     </span><span style="color: #008000; ">--&gt;</span><br />
     <span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.rootdir<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>hdfs://ubuntu:9000/hbase<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;在构造JOB时，会新建一文件夹来准备所需文件。<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;如果这一段没写，则默认本地环境为LINUX，将用LINUX命令去实施，在WINDOWS环境下会出错&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>mapred.job.tracker<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>ubuntu:9001<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.cluster.distributed<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>true<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">&lt;!--</span><span style="color: #008000; ">&nbsp;此处会向ZOOKEEPER咨询JOB&nbsp;TRACKER的可用IP&nbsp;</span><span style="color: #008000; ">--&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.zookeeper.quorum<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>ubuntu<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">property&nbsp;</span><span style="color: #FF0000; ">skipInDoc</span><span style="color: #0000FF; ">="true"</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span>hbase.defaults.for.version<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">name</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span>0.94.1<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">value</span><span style="color: #0000FF; ">&gt;</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">property</span><span style="color: #0000FF; ">&gt;</span><br />
     <br />
     <span style="color: #0000FF; ">&lt;/</span><span style="color: #800000; ">configuration</span><span style="color: #0000FF; ">&gt;</span></div>
     </div>
     </li>
     <br /><br />
     <li>测试文件：MapreduceTest.java<br />
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">package</span>&nbsp;com.cloudputing.mapreduce;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;java.io.IOException;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;junit.framework.TestCase;<br />
     <br />
     <span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MapreduceTest&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;TestCase{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;testReadJob()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException,&nbsp;InterruptedException,&nbsp;ClassNotFoundException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.read();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     <br />
     }</div>
     </li>
<br /><br />
     <li>
     <div>MapreduceRead.java</div>
     <div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all"><!--<br />
     <br />
     Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
     http://www.CodeHighlighter.com/<br />
     <br />
     --><span style="color: #0000FF; ">package</span>&nbsp;com.cloudputing.mapreduce;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;java.io.IOException;<br />
     <br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.conf.Configuration;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.fs.FileSystem;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.fs.Path;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.HBaseConfiguration;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.client.Result;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.client.Scan;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.io.ImmutableBytesWritable;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.mapreduce.TableMapper;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.hbase.util.Bytes;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.io.Text;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapreduce.Job;<br />
     <span style="color: #0000FF; ">import</span>&nbsp;org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;<br />
     <br />
     <span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MapreduceRead&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;read()&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException,&nbsp;InterruptedException,&nbsp;ClassNotFoundException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;Add&nbsp;these&nbsp;statements.&nbsp;XXX<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File&nbsp;jarFile&nbsp;=&nbsp;EJob.createTempJar("target/classes");<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EJob.addClasspath("D:/PAUL/WORK/WORK-SPACES/TEST1/cloudputing/src/main/resources");<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ClassLoader&nbsp;classLoader&nbsp;=&nbsp;EJob.getClassLoader();<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Thread.currentThread().setContextClassLoader(classLoader);</span><span style="color: #008000; "><br />
     </span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Configuration&nbsp;config&nbsp;=&nbsp;HBaseConfiguration.create();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;addTmpJar("file:/D:/PAUL/WORK/WORK-SPACES/TEST1/cloudputing/target/bigdata-1.0.jar",config);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Job&nbsp;job&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Job(config,&nbsp;"ExampleRead");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;And&nbsp;add&nbsp;this&nbsp;statement.&nbsp;XXX<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;((JobConf)&nbsp;job.getConfiguration()).setJar(jarFile.toString());<br />
     <br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.addDependencyJars(job);<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.addDependencyJars(job.getConfiguration(),<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.class,MyMapper.class);</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job.setJarByClass(MapreduceRead.<span style="color: #0000FF; ">class</span>);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;class&nbsp;that&nbsp;contains&nbsp;mapper</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Scan&nbsp;scan&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Scan();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan.setCaching(500);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;1&nbsp;is&nbsp;the&nbsp;default&nbsp;in&nbsp;Scan,&nbsp;which&nbsp;will&nbsp;be&nbsp;bad&nbsp;for&nbsp;MapReduce&nbsp;jobs</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan.setCacheBlocks(<span style="color: #0000FF; ">false</span>);&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;don't&nbsp;set&nbsp;to&nbsp;true&nbsp;for&nbsp;MR&nbsp;jobs<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;set&nbsp;other&nbsp;scan&nbsp;attrs</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TableMapReduceUtil.initTableMapperJob(<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"wiki",&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;input&nbsp;HBase&nbsp;table&nbsp;name</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scan,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;Scan&nbsp;instance&nbsp;to&nbsp;control&nbsp;CF&nbsp;and&nbsp;attribute&nbsp;selection</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MapreduceRead.MyMapper.<span style="color: #0000FF; ">class</span>,&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">null</span>,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper&nbsp;output&nbsp;key&nbsp;</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">null</span>,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;mapper&nbsp;output&nbsp;value</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;job.setOutputFormatClass(NullOutputFormat.<span style="color: #0000FF; ">class</span>);&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;because&nbsp;we&nbsp;aren't&nbsp;emitting&nbsp;anything&nbsp;from&nbsp;mapper<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     </span><span style="color: #008000; ">//</span><span style="color: #008000; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;DistributedCache.addFileToClassPath(new&nbsp;Path("hdfs:</span><span style="color: #008000; ">//</span><span style="color: #008000; ">node.tracker1:9000/user/root/lib/stat-analysis-mapred-1.0-SNAPSHOT.jar"),job.getConfiguration());</span><span style="color: #008000; "><br />
     </span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">boolean</span>&nbsp;b&nbsp;=&nbsp;job.waitForCompletion(<span style="color: #0000FF; ">true</span>);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">if</span>&nbsp;(!b)&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">throw</span>&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;IOException("error&nbsp;with&nbsp;job!");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #008000; ">/**</span><span style="color: #008000; "><br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;为Mapreduce添加第三方jar包<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@param</span><span style="color: #008000; ">&nbsp;jarPath<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;举例：D:/Java/new_java_workspace/scm/lib/guava-r08.jar<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@param</span><span style="color: #008000; ">&nbsp;conf<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;</span><span style="color: #808080; ">@throws</span><span style="color: #008000; ">&nbsp;IOException<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #008000; ">*/</span><br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;addTmpJar(String&nbsp;jarPath,&nbsp;Configuration&nbsp;conf)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.setProperty("path.separator",&nbsp;":");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileSystem&nbsp;fs&nbsp;=&nbsp;FileSystem.getLocal(conf);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;newJarPath&nbsp;=&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;Path(jarPath).makeQualified(fs).toString();<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;tmpjars&nbsp;=&nbsp;conf.get("tmpjars");<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">if</span>&nbsp;(tmpjars&nbsp;==&nbsp;<span style="color: #0000FF; ">null</span>&nbsp;||&nbsp;tmpjars.length()&nbsp;==&nbsp;0)&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("tmpjars",&nbsp;newJarPath);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;<span style="color: #0000FF; ">else</span>&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.set("tmpjars",&nbsp;tmpjars&nbsp;+&nbsp;":"&nbsp;+&nbsp;newJarPath);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;MyMapper&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;TableMapper&lt;Text,&nbsp;Text&gt;&nbsp;{<br />
     <br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;map(ImmutableBytesWritable&nbsp;row,&nbsp;Result&nbsp;value,<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Context&nbsp;context)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;InterruptedException,&nbsp;IOException&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;val1&nbsp;=&nbsp;getValue(value.getValue(Bytes.toBytes("text"),&nbsp;Bytes.toBytes("qual1")));<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;String&nbsp;val2&nbsp;=&nbsp;getValue(value.getValue(Bytes.toBytes("text"),&nbsp;Bytes.toBytes("qual2")));<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;System.out.println(val1&nbsp;+&nbsp;"&nbsp;--&nbsp;"&nbsp;+&nbsp;val2);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">private</span>&nbsp;String&nbsp;getValue(<span style="color: #0000FF; ">byte</span>&nbsp;[]&nbsp;value)<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0000FF; ">return</span>&nbsp;value&nbsp;==&nbsp;<span style="color: #0000FF; ">null</span>?&nbsp;"null"&nbsp;:&nbsp;<span style="color: #0000FF; ">new</span>&nbsp;String(value);<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
     &nbsp;&nbsp;&nbsp;&nbsp;}&nbsp;<br />
     <br />
     }</div>
     </li>
</ol><img src ="http://www.blogjava.net/paulwong/aggbug/394851.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-29 00:19 <a href="http://www.blogjava.net/paulwong/archive/2013/01/29/394851.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>未来企业IT技术关注点及IT架构变革探讨</title><link>http://www.blogjava.net/paulwong/archive/2013/01/14/394221.html</link><dc:creator>paulwong</dc:creator><author>paulwong</author><pubDate>Mon, 14 Jan 2013 15:09:00 GMT</pubDate><guid>http://www.blogjava.net/paulwong/archive/2013/01/14/394221.html</guid><wfw:comment>http://www.blogjava.net/paulwong/comments/394221.html</wfw:comment><comments>http://www.blogjava.net/paulwong/archive/2013/01/14/394221.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/paulwong/comments/commentRss/394221.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/paulwong/services/trackbacks/394221.html</trackback:ping><description><![CDATA[<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">gartner十大战略性技术分析如下：</span></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">1.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;移动设备战争</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">移动设备多样化，<span style="font-family: 'Times New Roman'; ">Windows</span>仅仅是<span style="font-family: 'Times New Roman'; ">IT</span>需要支持的多种环境之一<span style="font-family: 'Times New Roman'; ">,IT</span>需要支持多样化环境。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">2.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;移动应用与<span style="font-family: 'Times New Roman'; ">HTML5</span></strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">HTML5</span>将变得愈发重要，以满足多元化的需求，以满足对安全性非常看重的企业级应用。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">3.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;个人云</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">个人云将把重心从客户端设备向跨设备交付基于云的服务转移。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">4.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;企业应用商店</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">有了企业应用商店，<span style="font-family: 'Times New Roman'; ">IT</span>的角色将从集权式规划者转变为市场管理者，并为用户提供监管和经纪服务，甚至可能为应用程序专家提供生态系统支持。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">5.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;物联网</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">物联网是一个概念，描述了互联网将如何作为物理实物扩展，如消费电子设备和实物资产都连接到互联网上。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">6.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;混合型<span style="font-family: 'Times New Roman'; ">IT</span>和云计算</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">打造私有云并搭建相应的管理平台，再利用该平台来管理内外部服务</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">7.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;战略性大数据</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">企业应当将大数据看成变革性的构架，用多元化数据库代替基于同质划分的关系数据库。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">8.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;可行性分析</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">大数据的核心在于为企业提供可行的创意。受移动网络、社交网络、海量数据等因素的驱动，企业需要改变分析方式以应对新观点</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">9.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;内存计算</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">内存计算以云服务的形式提供给内部或外部用户<span style="font-family: 'Times New Roman'; ">,</span>数以百万的事件能在几十毫秒内被扫描以检测相关性和规律。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">10.</span>&nbsp;&nbsp;&nbsp; 整合生态系统</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">市场正在经历从松散耦合的异构系统向更为整合的系统和生态系统转移，应用程序与硬件、软件、软件及服务打包形成整合生态系统。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">结合应用实践及客户需求，可以有以下结论：</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">1.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;大数据时代已经到来</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>物联网发展及非结构化、半结构化数据的剧增推动了大数据应用需求发展。大数据高效应用是挖掘企业数据资源价值的趋势与发展方向。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">2.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;云计算依旧是主题，云将更加关注个体</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>云计算是改变<span style="font-family: 'Times New Roman'; ">IT</span>现状的核心技术之一，云计算将是大数据、应用商店交付的基础。个人云的发展将促使云端服务更关注个体。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">3.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;移动趋势，企业应用商店将改变传统软件交付模式</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Windows</span>将逐步不再是客户端主流平台，<span style="font-family: 'Times New Roman'; ">IT</span>技术需要逐步转向支持多平台服务。在云平台上构建企业应用商店，逐步促成<span style="font-family: 'Times New Roman'; ">IT</span>的角色将从集权式规划者转变为应用市场管理者</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><strong><span style="font-family: 'Times New Roman'; ">4.</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;物联网将持续改变工作及生活方式</strong></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><span style="font-family: 'Times New Roman'; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>物联网将改变生活及工作方式，物联网将是一种革新的力量。在物联网方向，<span style="font-family: 'Times New Roman'; ">IPV6</span>将是值得研究的一个技术。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">未来企业IT架构图如下：</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><img alt="" src="http://img.my.csdn.net/uploads/201301/09/1357701760_4833.jpg" width="588" height="402" style="border: none; " /></p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">架构说明：</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">1.应用将被拆分，客户端将变得极简，用户只需要关注极小部分和自己有关的内容，打开系统后不再是上百个业务菜单。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">2.企业后端架构将以分布式架构为主，大数据服务能力将成为企业核心竞争力的集中体现。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">3.非结构化数据处理及分析相关技术将会得到前所未有的重视。</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; ">受个人水平有限，仅供参考，不当之处，欢迎拍砖！</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><br />
</p>
<p style="color: #333333; font-family: Arial; line-height: 26px; text-align: left; background-color: #ffffff; "><a href="http://blog.csdn.net/sdhustyh/article/details/8484780" target="_blank">http://blog.csdn.net/sdhustyh/article/details/8484780</a><br />
</p>
<img src ="http://www.blogjava.net/paulwong/aggbug/394221.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/paulwong/" target="_blank">paulwong</a> 2013-01-14 23:09 <a href="http://www.blogjava.net/paulwong/archive/2013/01/14/394221.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item></channel></rss>