<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-anchor110-Article Category-bigdata</title><link>http://www.blogjava.net/anchor110/category/54814.html</link><description /><language>en-us</language><lastBuildDate>Sun, 26 Feb 2017 03:22:55 GMT</lastBuildDate><pubDate>Sun, 26 Feb 2017 03:22:55 GMT</pubDate><ttl>60</ttl>
<item><title>How to check the Hadoop version</title><link>http://www.blogjava.net/anchor110/articles/432303.html</link><dc:creator>小一败涂地</dc:creator><author>小一败涂地</author><pubDate>Mon, 13 Feb 2017 09:42:00 GMT</pubDate><guid>http://www.blogjava.net/anchor110/articles/432303.html</guid><wfw:comment>http://www.blogjava.net/anchor110/comments/432303.html</wfw:comment><comments>http://www.blogjava.net/anchor110/articles/432303.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/anchor110/comments/commentRss/432303.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/anchor110/services/trackbacks/432303.html</trackback:ping><description><![CDATA[To check the installed Hadoop version, run:<div style="background-color:#eeeeee;font-size:13px;border:1px solid #CCCCCC;padding-right: 5px;padding-bottom: 4px;padding-left: 4px;padding-top: 4px;width: 98%;word-break:break-all">hadoop&nbsp;version</div>]]></description></item>
<item><title>Hadoop command to check file sizes</title><link>http://www.blogjava.net/anchor110/articles/432298.html</link><dc:creator>小一败涂地</dc:creator><author>小一败涂地</author><pubDate>Sat, 11 Feb 2017 16:19:00 GMT</pubDate><guid>http://www.blogjava.net/anchor110/articles/432298.html</guid><wfw:comment>http://www.blogjava.net/anchor110/comments/432298.html</wfw:comment><comments>http://www.blogjava.net/anchor110/articles/432298.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/anchor110/comments/commentRss/432298.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/anchor110/services/trackbacks/432298.html</trackback:ping><description><![CDATA[Reference command: hadoop dfsadmin -report<br /><br />Note: hadoop dfsadmin -report prints cluster-wide capacity and per-datanode usage, not individual file sizes; to check the size of a specific file or directory, use hadoop fs -du -h &lt;path&gt;.]]></description></item>
<item><title>HBase startup error: java.lang.NoClassDefFoundError: org/htrace/Trace</title><link>http://www.blogjava.net/anchor110/articles/424888.html</link><dc:creator>小一败涂地</dc:creator><author>小一败涂地</author><pubDate>Wed, 06 May 2015 06:26:00 
GMT</pubDate><guid>http://www.blogjava.net/anchor110/articles/424888.html</guid><wfw:comment>http://www.blogjava.net/anchor110/comments/424888.html</wfw:comment><comments>http://www.blogjava.net/anchor110/articles/424888.html#Feedback</comments><slash:comments>2</slash:comments><wfw:commentRss>http://www.blogjava.net/anchor110/comments/commentRss/424888.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/anchor110/services/trackbacks/424888.html</trackback:ping><description><![CDATA[<div>Running Hadoop 2.6 with HBase 1.0.1, starting HBase fails with the following error:<br /><br />java.lang.NoClassDefFoundError: org/htrace/Trace<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:214)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.reflect.Method.invoke(Method.java:606)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.reflect.Method.invoke(Method.java:606)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1526)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1275)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:831)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at 
java.lang.Thread.run(Thread.java:745)<br />Caused by: java.lang.ClassNotFoundException: org.htrace.Trace<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader$1.run(URLClassLoader.java:366)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader$1.run(URLClassLoader.java:355)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.security.AccessController.doPrivileged(Native Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.net.URLClassLoader.findClass(URLClassLoader.java:354)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.ClassLoader.loadClass(ClassLoader.java:425)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.lang.ClassLoader.loadClass(ClassLoader.java:358)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ... 27 more<br /><br />Solution:<br />Copy htrace-core-3.0.4.jar from $HADOOP_HOME/share/hadoop/common/lib to $HBASE_HOME/lib.</div>]]></description></item><item><title>SIMPLE authentication is not enabled.  
Available:[TOKEN]</title><link>http://www.blogjava.net/anchor110/articles/424868.html</link><dc:creator>小一败涂地</dc:creator><author>小一败涂地</author><pubDate>Tue, 05 May 2015 09:48:00 GMT</pubDate><guid>http://www.blogjava.net/anchor110/articles/424868.html</guid><wfw:comment>http://www.blogjava.net/anchor110/comments/424868.html</wfw:comment><comments>http://www.blogjava.net/anchor110/articles/424868.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/anchor110/comments/commentRss/424868.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/anchor110/services/trackbacks/424868.html</trackback:ping><description><![CDATA[<div><div> When starting HBase, the log reports the following fatal error:<br /><br />2015-05-05&nbsp;14:58:32,926&nbsp;FATAL&nbsp;[master:16020.activeMasterManager]&nbsp;master.HMaster:&nbsp;Failed&nbsp;to&nbsp;become&nbsp;active&nbsp;master<br />org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):&nbsp;SIMPLE&nbsp;authentication&nbsp;is&nbsp;not&nbsp;enabled.&nbsp;&nbsp;Available:[TOKEN]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.ipc.Client.call(Client.java:1411)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.ipc.Client.call(Client.java:1364)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;com.sun.proxy.$Proxy15.setSafeMode(Unknown&nbsp;Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.NativeMethodAccessorImpl.invoke0(Native&nbsp;Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;java.lang.reflect.Method.invoke(Method.java:606)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;com.sun.proxy.$Proxy15.setSafeMode(Unknown&nbsp;Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.NativeMethodAccessorImpl.invoke0(Native&nbsp;Method)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;java.lang.reflect.Method.invoke(Method.java:606)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;com.sun.proxy.$Proxy16.setSafeMode(Unknown&nbsp;Source)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.MasterFileSystem.&lt;init&gt;(MasterFileSystem.java:125)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:594)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1428)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;at&nbsp;java.lang.Thread.run(Thread.java:745)<br /><br />After much searching, the root cause turned out to be a misconfigured property in $HBASE_HOME/conf/hbase-site.xml:<br /><div>&lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&lt;name&gt;hbase.rootdir&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&lt;value&gt;hdfs://master:8020/hbase&lt;/value&gt;<br />&lt;/property&gt;<br /><br />Note: the IP or hostname (and port) in this value must match the fs.defaultFS setting in $HADOOP_HOME/etc/hadoop/core-site.xml:<br /><div>&lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&lt;name&gt;fs.defaultFS&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;
&lt;value&gt;hdfs://master:8020&lt;/value&gt;<br />&lt;/property&gt;</div><br />Takeaway: newcomers and even experienced Hadoop developers are often left disoriented by errors like this, because the error log tends to steer the investigation toward the wrong place.</div></div></div>]]></description></item>
<item><title>could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation</title><link>http://www.blogjava.net/anchor110/articles/424756.html</link><dc:creator>小一败涂地</dc:creator><author>小一败涂地</author><pubDate>Wed, 29 Apr 2015 07:21:00 GMT</pubDate><guid>http://www.blogjava.net/anchor110/articles/424756.html</guid><wfw:comment>http://www.blogjava.net/anchor110/comments/424756.html</wfw:comment><comments>http://www.blogjava.net/anchor110/articles/424756.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/anchor110/comments/commentRss/424756.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/anchor110/services/trackbacks/424756.html</trackback:ping><description><![CDATA[When sending file data to HDFS through Flume, the following error is reported:<br /><div>org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /aboutyunlog/FlumeData.1430287668990.tmp could only be replicated to 0 nodes instead of minReplication (=1).&nbsp; There are 2 datanode(s) running and 2 node(s) are excluded in this operation.</div><br />Solution:<br />Flush the firewall rules on every machine in the Hadoop cluster: iptables -F.<br /><br />This is admittedly a blunt approach; in practice it should be enough to open just the few ports Flume uses to communicate with HDFS.]]></description></item></channel></rss>