﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-xiaomage234-随笔分类-bigdata</title><link>http://www.blogjava.net/xiaomage234/category/54981.html</link><description>生命本就是一次凄美的漂流，记忆中放不下的，永远是孩提时代的那一份浪漫与纯真！</description><language>zh-cn</language><lastBuildDate>Thu, 08 Sep 2016 06:53:43 GMT</lastBuildDate><pubDate>Thu, 08 Sep 2016 06:53:43 GMT</pubDate><ttl>60</ttl><item><title>Introducing Apache Spark 2.0 Now generally available on Databricks</title><link>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431778.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Thu, 08 Sep 2016 06:51:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431778.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/431778.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431778.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/431778.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/431778.html</trackback:ping><description><![CDATA[<p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Today, we&#8217;re excited to announce the general availability of&nbsp;<a href="https://spark.apache.org/releases/spark-release-2-0-0.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Apache Spark 2.0</a>&nbsp;on Databricks. 
This release builds on what the community has learned in the past two years, doubling down on what users love and fixing&nbsp;the pain points. This post&nbsp;summarizes the three major themes&#8212;easier, faster, and smarter&#8212;that comprise Spark 2.0. We also explore many of them in more detail in our&nbsp;<a href="https://databricks.com/blog/2016/06/01/preview-apache-spark-2-0-an-anthology-of-technical-assets.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">anthology of Spark 2.0 content</a>.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Two months ago, we&nbsp;launched a&nbsp;preview release of Apache Spark 2.0 on Databricks. As you can see in the chart below,&nbsp;10% of our clusters are already using this release, as customers experiment with the new features and give us feedback. 
Thanks to this experience, we are&nbsp;excited to be the first commercial vendor to support Spark 2.0.</p><div id="attachment_8330" class="aligncenter" style="box-sizing: border-box; text-align: center; font-style: italic; margin-left: auto; margin-right: auto; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; width: 660px; background-color: #ffffff;"><a href="https://databricks.com/wp-content/uploads/2016/07/image00.png" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; cursor: zoom-in; background-color: transparent;"><img src="https://databricks.com/wp-content/uploads/2016/07/image00.png" alt="Spark Usage over Time by Release Versions" width="650" style="box-sizing: border-box; border: 0px; vertical-align: middle; max-width: 100%; height: auto;" /></a><p style="box-sizing: border-box; margin: 0px;">Apache Spark Usage over Time by Version</p></div><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">&nbsp;</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Now, let&#8217;s dive into what&#8217;s new in Apache Spark 2.0.</p><h3>Easier: ANSI SQL and Streamlined APIs</h3><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">One thing we are proud of in Spark is APIs that are simple, intuitive, and expressive. 
Spark 2.0 continues this tradition, focusing&nbsp;on two areas: (1) standard SQL support and (2) unifying the DataFrame/Dataset API.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">On the SQL side, we have significantly expanded Spark&#8217;s SQL support, with the introduction of a new ANSI SQL parser and&nbsp;<a href="https://databricks.com/blog/2016/06/17/sql-subqueries-in-apache-spark-2-0.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">subqueries</a>.&nbsp;<span style="box-sizing: border-box; font-weight: 600;">Spark 2.0 can run all 99 TPC-DS queries, which require many of the SQL:2003 features.</span>&nbsp;Because SQL has been one of the primary interfaces to Spark, these&nbsp;extended capabilities drastically reduce the effort of porting legacy applications.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">On the programmatic&nbsp;API side, we have streamlined Spark&#8217;s&nbsp;APIs:</p><ul style="box-sizing: border-box; margin-top: 0px; margin-bottom: 1.3em; padding-left: 0.5em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;"><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">Unifying DataFrames and Datasets in Scala/Java:</span>&nbsp;Starting in Spark 2.0, DataFrame is just a type alias for Dataset of Row. 
Both the typed methods (e.g.&nbsp;<code style="box-sizing: border-box; font-family: Menlo, Monaco, Consolas, &quot;Courier New&quot;, monospace; font-size: 15.3px; padding: 0px; border-radius: 4px; background-color: transparent;">map</code>,&nbsp;<code style="box-sizing: border-box; font-family: Menlo, Monaco, Consolas, &quot;Courier New&quot;, monospace; font-size: 15.3px; padding: 0px; border-radius: 4px; background-color: transparent;">filter</code>,&nbsp;<code style="box-sizing: border-box; font-family: Menlo, Monaco, Consolas, &quot;Courier New&quot;, monospace; font-size: 15.3px; padding: 0px; border-radius: 4px; background-color: transparent;">groupByKey</code>) and the untyped methods (e.g.&nbsp;<code style="box-sizing: border-box; font-family: Menlo, Monaco, Consolas, &quot;Courier New&quot;, monospace; font-size: 15.3px; padding: 0px; border-radius: 4px; background-color: transparent;">select</code>,&nbsp;<code style="box-sizing: border-box; font-family: Menlo, Monaco, Consolas, &quot;Courier New&quot;, monospace; font-size: 15.3px; padding: 0px; border-radius: 4px; background-color: transparent;">groupBy</code>) are available on the Dataset class. Also, this new combined Dataset interface is the abstraction used for Structured Streaming. Since compile-time type-safety&nbsp;is not a feature in Python and R, the concept of Dataset does not apply to these language APIs. Instead, DataFrame remains the primary interface&nbsp;there, and&nbsp;is analogous to the single-node data frame notion in these languages. 
Get a peek from <a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/431554386690871/4814681571895601/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">this notebook</a>&nbsp;and&nbsp;<a href="https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">this blog</a>&nbsp;for the stories behind these APIs.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">SparkSession:</span>&nbsp;a new entry point that supersedes&nbsp;SQLContext and HiveContext. For users of the DataFrame API, a common source of confusion for Spark is which &#8220;context&#8221; to use. Now you can use SparkSession, which subsumes both, as a single entry point, as <a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/431554386690884/4814681571895601/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">demonstrated in this notebook</a>. 
Note that the old SQLContext and HiveContext classes are still kept for backward compatibility.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">Simpler, more performant Accumulator API:</span>&nbsp;We have designed a&nbsp;<a href="http://spark.apache.org/docs/2.0.0/api/scala/index.html#org.apache.spark.util.AccumulatorV2" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">new Accumulator API</a> that has a simpler type hierarchy and supports specialization for primitive types. The old Accumulator API has been deprecated but retained for backward compatibility.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">DataFrame-based Machine Learning API emerges as the primary ML API:</span>&nbsp;With Spark 2.0, the&nbsp;<a href="http://spark.apache.org/docs/2.0.0/api/scala/index.html#org.apache.spark.ml.package" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">spark.ml</a>&nbsp;package, with its &#8220;pipeline&#8221; APIs, will emerge as the primary machine learning API. While the original spark.mllib package is preserved, future development will focus on the DataFrame-based API.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">Machine learning pipeline persistence:</span>&nbsp;Users can now save and load machine learning pipelines and models across all programming languages supported by Spark. 
See&nbsp;<a href="https://databricks.com/blog/2016/05/31/apache-spark-2-0-preview-machine-learning-model-persistence.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">this blog post</a>&nbsp;for more details and&nbsp;<a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/100429797847215/4814681571895601/latest.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">this notebook</a>&nbsp;for examples.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">Distributed algorithms in R:</span>&nbsp;Added support for Generalized Linear Models (GLM), Naive Bayes, Survival Regression, and K-Means in R.</li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><span style="box-sizing: border-box; font-weight: 600;">User-defined functions (UDFs) in R</span>: Added support for running partition level UDFs (dapply and gapply) and hyper-parameter tuning (lapply).</li></ul><h3>Faster: Apache Spark as a Compiler</h3><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">According to our&nbsp;<a href="https://databricks.com/blog/2015/09/24/spark-survey-results-2015-are-now-available.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">2015 Spark Survey</a>, 91% of users consider performance as the most important aspect of Apache Spark. As a result, performance optimizations have always been a focus in our Spark development. 
Before we started planning our contributions to Spark 2.0, we asked ourselves a question:&nbsp;<span style="box-sizing: border-box; font-weight: 600;">Spark is already pretty fast, but can we push the boundary and make Spark 10X faster?</span></p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">This question led us to fundamentally rethink the way we build Spark&#8217;s physical execution layer. When you look into a modern data engine (e.g. Spark or other MPP databases), the majority of CPU cycles are spent on useless work, such as making virtual function calls or reading/writing intermediate data to CPU cache or memory. Reducing the CPU cycles wasted on this useless work has long been a focus of modern compilers.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Spark 2.0 ships with the second-generation&nbsp;<a href="https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Tungsten</a>&nbsp;engine.&nbsp;<span style="box-sizing: border-box; font-weight: 600;">This engine builds upon ideas from modern compilers and MPP databases and applies them to Spark workloads.</span>&nbsp;The main idea is to emit optimized code at runtime that collapses the entire query into a single function, eliminating virtual function calls and leveraging CPU registers for intermediate data. 
We call this technique &#8220;<a href="https://databricks.com/blog/2016/05/23/apache-spark-as-a-compiler-joining-a-billion-rows-per-second-on-a-laptop.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">whole-stage code generation</a>.&#8221;</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">To give you a teaser, we have measured the time (in nanoseconds) it takes to process a row on one core for some of the operators in Spark 1.6 vs. Spark 2.0. The table below shows the improvements&nbsp;in Spark 2.0. Spark 1.6 also included an expression code generation technique that is used&nbsp;in some state-of-the-art commercial databases, but as you can see,&nbsp;many operators became an order of magnitude faster with whole-stage code generation.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">You can see the power of whole-stage code generation in action in&nbsp;<a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/293651311471490/5382278320999420/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">this notebook</a>, in which we perform aggregations and joins on 1 billion records on a single machine.</p><figure style="box-sizing: border-box; margin: 0px; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;"><figcaption style="box-sizing: border-box; text-align: center; font-style: italic; margin-bottom: 0.5em;">Cost per Row (single 
thread)</figcaption><table style="box-sizing: border-box; border-spacing: 0px; border-collapse: collapse; width: 652px; max-width: 100%; margin-bottom: 20px; background-color: transparent;"><thead style="box-sizing: border-box;"><tr style="box-sizing: border-box;"><th style="box-sizing: border-box; padding: 0.5em 8px; text-align: left; font-weight: 400; border-top: 0px; border-bottom: 1px solid #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: bottom;">primitive</th><th style="box-sizing: border-box; padding: 0.5em 8px; text-align: left; font-weight: 400; border-top: 0px; border-bottom: 1px solid #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: bottom;">Spark 1.6</th><th style="box-sizing: border-box; padding: 0.5em 8px; text-align: left; font-weight: 400; border-top: 0px; border-bottom: 1px solid #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: bottom;">Spark 2.0</th></tr></thead><tbody style="box-sizing: border-box;"><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">filter</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">15ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">1.1ns</td></tr><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; 
border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">sum w/o group</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">14ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">0.9ns</td></tr><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">sum w/ group</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">79ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">10.7ns</td></tr><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">hash join</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 
1.42857; vertical-align: top;">115ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">4.0ns</td></tr><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">sort (8-bit entropy)</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">620ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">5.3ns</td></tr><tr style="box-sizing: border-box;"><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">sort (64-bit entropy)</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">620ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">40ns</td></tr><tr style="box-sizing: border-box;"><td 
style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">sort-merge join</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">750ns</td><td style="box-sizing: border-box; padding: 0.5em 8px; border-top-style: solid; border-top-color: #dddddd; border-bottom-style: solid; border-bottom-color: #dddddd; border-left: 0px; border-right: 0px; line-height: 1.42857; vertical-align: top;">700ns</td></tr></tbody></table></figure><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">How does this new engine work on end-to-end queries? 
We did some preliminary analysis using TPC-DS queries to compare Spark 1.6 and Spark 2.0:</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;"><a href="https://databricks.com/wp-content/uploads/2016/05/preliminary-tpc-ds-spark-2-0-vs-1-6.png" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; cursor: zoom-in; background-color: transparent;"><br style="box-sizing: border-box;" /><img class="size-full wp-image-7218" src="https://databricks.com/wp-content/uploads/2016/05/preliminary-tpc-ds-spark-2-0-vs-1-6.png" alt="Preliminary TPC-DS Spark 2.0 vs 1.6" width="703" height="380" style="box-sizing: border-box; border: 0px; vertical-align: middle; max-width: 100%; height: auto; display: block; margin-left: auto; margin-right: auto;" /></a></p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Beyond whole-stage code generation, a lot of work has also gone into improving the Catalyst optimizer for general query optimizations such as nullability propagation, as well as a new vectorized Parquet decoder that improved Parquet scan throughput by 3X.&nbsp;<a href="https://databricks.com/blog/2016/05/23/apache-spark-as-a-compiler-joining-a-billion-rows-per-second-on-a-laptop.html" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Read this blog post</a>&nbsp;for more detail on the optimizations&nbsp;in Spark 2.0.</p><h3>Smarter: Structured Streaming</h3><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: 
#ffffff;">Spark Streaming has long led the big data space as one of the first systems&nbsp;unifying batch and streaming computation. When its streaming API, called DStreams, was&nbsp;introduced in Spark 0.7, it offered developers several powerful properties: exactly-once semantics, fault tolerance at scale, strong consistency guarantees, and high throughput.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">However, after working with hundreds of real-world deployments of Spark Streaming, we found that applications that need to make decisions in real-time often require&nbsp;<span style="box-sizing: border-box; font-weight: 600;">more than just a streaming engine</span>. They require deep integration of the batch stack and the streaming stack, interaction&nbsp;with external storage systems, as well as the ability to cope with changes in business logic. As a result, enterprises want more than just a streaming engine; instead they need a full stack that enables them to develop end-to-end&nbsp;<span style="box-sizing: border-box; font-weight: 600;">&#8220;continuous applications.&#8221;</span></p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Spark 2.0&nbsp;tackles these use cases through a new API called Structured Streaming. 
Compared to existing streaming systems, Structured Streaming makes three key improvements:</p><ol style="box-sizing: border-box; margin-top: 0px; margin-bottom: 1.3em; padding-left: 1em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;"><li style="box-sizing: border-box; margin-bottom: 0.5em;"><span style="box-sizing: border-box; font-weight: 600;">Integrated API with batch jobs.</span>&nbsp;To run a streaming computation, developers simply write a batch computation against the DataFrame / Dataset API, and Spark automatically <em style="box-sizing: border-box;">incrementalizes</em>&nbsp;the computation to run it in a streaming fashion (i.e. update the result as data comes in). This powerful design&nbsp;means that developers don&#8217;t have to manually manage state, handle failures, or keep the application in sync with batch jobs.&nbsp;Instead, the&nbsp;streaming job&nbsp;always gives the same answer as a batch job on the same data.</li><li style="box-sizing: border-box; margin-bottom: 0.5em;"><span style="box-sizing: border-box; font-weight: 600;">Transactional interaction&nbsp;with storage systems.</span>&nbsp;Structured Streaming handles fault tolerance and consistency holistically across the engine and storage systems, making it easy to write applications that update a live database used for serving, join in&nbsp;static data, or move data reliably&nbsp;between storage systems.</li><li style="box-sizing: border-box; margin-bottom: 0.5em;"><span style="box-sizing: border-box; font-weight: 600;">Rich integration with the rest of Spark.</span>&nbsp;Structured Streaming supports interactive queries on streaming data through Spark SQL, joins against static data, and many libraries that already use DataFrames, letting developers build complete applications instead of just streaming pipelines. 
In the future, expect more integrations with MLlib and other libraries.</li></ol><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Spark 2.0 ships with an initial, alpha version of Structured Streaming, as a (surprisingly small!) extension to the DataFrame/Dataset API. This&nbsp;makes&nbsp;it&nbsp;easy to adopt for existing Spark users&nbsp;that want to answer new questions in real-time. Other key features include support for event-time based processing, out-of-order/delayed data, interactive queries, and interaction with non-streaming data sources and sinks.</p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">We also updated the Databricks workspace to&nbsp;support Structured Streaming. 
For example, when launching a streaming query, the notebook UI&nbsp;will automatically display its status. <a href="https://databricks.com/wp-content/uploads/2016/07/image01.png" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; cursor: zoom-in; background-color: transparent;"><img class="size-full wp-image-8332" src="https://databricks.com/wp-content/uploads/2016/07/image01.png" alt="image01" width="1978" height="834" style="box-sizing: border-box; border: 0px; vertical-align: middle; max-width: 100%; height: auto;" /></a></p><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Streaming is clearly a broad topic, so stay tuned for&nbsp;a series of blog posts with&nbsp;more details on Structured Streaming in Apache Spark 2.0.</p><h3>Conclusion</h3><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">Spark users initially came to Apache Spark for its ease-of-use and performance. Spark 2.0 doubles down on these while extending them to support an even wider range of workloads. 
Enjoy the new release on Databricks.</p><h3>Read More</h3><p style="box-sizing: border-box; margin: 0px 0px 1.3em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;">You can also import the following notebooks and try them on&nbsp;<a href="https://databricks.com/try-databricks" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Databricks Community Edition</a> with Spark 2.0.</p><ul style="box-sizing: border-box; margin-top: 0px; margin-bottom: 1.3em; padding-left: 0.5em; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; line-height: 24.2857px; background-color: #ffffff;"><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/431554386690884/4814681571895601/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">SparkSession: A new entry point</a></li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/431554386690871/4814681571895601/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Datasets: A more streamlined API</a></li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/293651311471490/5382278320999420/latest.html" target="_blank" style="box-sizing: 
border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Performance of whole-stage code generation</a></li><li style="box-sizing: border-box; margin-bottom: 0.5em; position: relative; list-style: none; padding-left: 1em;"><a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/100429797847215/4814681571895601/latest.html" target="_blank" style="box-sizing: border-box; color: #1cb1c2; text-decoration: none; background-color: transparent;">Machine learning pipeline persistence</a></li></ul><p style="box-sizing: border-box; margin: 0px 0px 2.5em; line-height: 16px; color: #444444; font-family: &quot;Source Sans Pro&quot;, Helvetica, Arial, sans-serif; font-size: 17px; background-color: #ffffff;"><img src="https://databricks.com/wp-content/themes/databricks/assets/images/blog/Databricks-logo-bug.png?v=2.90" alt="Databricks Blog" width="15" height="16" style="box-sizing: border-box; border: 0px; vertical-align: middle; max-width: 100%; height: auto;" /></p><img src ="http://www.blogjava.net/xiaomage234/aggbug/431778.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-09-08 14:51 <a href="http://www.blogjava.net/xiaomage234/archive/2016/09/08/431778.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>From Small-Data Analysis to Big-Data Platforms: How Open-Source Big Data Technologies Have Evolved over the Past Decade</title><link>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431776.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Thu, 08 Sep 2016 06:45:00 
GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431776.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/431776.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431776.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/431776.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/431776.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; Abstract: from:http://chuansong.me/n/465862351096 This article is a transcript of Fangjin Yang&#8217;s English keynote at QCon Beijing. Follow the &#8220;大数据杂谈&#8221; WeChat account and tap &#8220;加群学习&#8221; for more first-hand technical talks. Transcribed by Liu Jiwei. At QCon 2016 Beijing, the lead of the open-source Druid project, also a co-founder of a San Francisco technology company, Fangjin Ya...&nbsp;&nbsp;<a href='http://www.blogjava.net/xiaomage234/archive/2016/09/08/431776.html'>Read the full article</a><img src ="http://www.blogjava.net/xiaomage234/aggbug/431776.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-09-08 14:45 <a href="http://www.blogjava.net/xiaomage234/archive/2016/09/08/431776.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Druid: An Open-Source Distributed System for Real-Time Big Data Processing</title><link>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431777.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Thu, 08 Sep 2016 06:45:00 
GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431777.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/431777.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/09/08/431777.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/431777.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/431777.html</trackback:ping><description><![CDATA[<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;"><a href="http://druid.io/" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Druid</a> is a highly fault-tolerant, high-performance open-source distributed system for real-time querying and analysis of big data. It is designed to process large-scale data quickly and to deliver fast queries and analysis. In particular, Druid stays 100% available through code deployments, machine failures, and other production outages. Druid was originally created to address query latency: its creators had tried to build interactive query analytics on Hadoop, which could not meet real-time needs. Druid instead provides interactive access to data, adopting a special storage format that trades off query flexibility against performance.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">Druid's functionality sits between <a href="http://www.vldb.org/pvldb/vol5/p1436_alexanderhall_vldb2012.pdf" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">PowerDrill</a> and <a href="http://research.google.com/pubs/pub36632.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Dremel</a>: it implements almost all of Dremel's features and borrows some interesting data formats from PowerDrill. Druid allows single-table queries in the style of Dremel and PowerDrill while adding new capabilities, such as a columnar storage format for partially nested data structures, indexes for fast filtering, real-time ingestion and querying, and a highly fault-tolerant distributed architecture. According to the official site, Druid's main features are:</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; line-height: 25.2px; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Designed for analytics</span>&#8212;&#8212;Druid is built for exploratory analysis in <a href="http://en.wikipedia.org/wiki/Online_analytical_processing" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">OLAP</a> workflows, supporting a wide range of filters, aggregations, and query types;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Fast interactive queries</span>&#8212;&#8212;Druid's low-latency ingestion architecture makes events queryable within milliseconds of creation;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Highly available</span>&#8212;&#8212;Druid's data remains available during system updates, and scaling up or down causes no data loss;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Scalable</span>&#8212;&#8212;Druid deployments already process billions of events and terabytes of data per day.</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">Druid is most often applied in scenarios like those of the advertising-analytics startup <a href="http://metamarkets.com/" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Metamarkets</a>, such as ad analytics, monitoring of online advertising systems, and network monitoring. Druid is a good technology choice when your business involves:</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; line-height: 25.2px; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">interactive aggregation and fast exploration of large amounts of data;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">real-time query and analysis;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">very large data volumes, e.g. hundreds of millions of new events and tens of terabytes of new data per day;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">real-time analysis of data, especially big data;</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">a highly available, fault-tolerant, high-performance database.</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">A Druid cluster is composed of several types of nodes, each of which does one job well: <a href="http://druid.io/docs/latest/Historical.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Historical nodes</a>, which store and serve queries over non-real-time data; <a href="http://druid.io/docs/latest/Realtime.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Realtime nodes</a>, which ingest data in real time and listen to incoming data streams; <a href="http://druid.io/docs/latest/Coordinator.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Coordinator nodes</a>, which supervise the Historical nodes; <a href="http://druid.io/docs/latest/Broker.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Broker nodes</a>, which receive queries from external clients and forward them to Realtime and Historical nodes; and <a href="http://druid.io/docs/latest/Indexing-Service.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Indexer nodes</a>, which provide the indexing service.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">The data flow among the nodes during a query is shown below:</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;"><img _href="img://null" _p="true" src="http://cdn4.infoqstatic.com/statics_s2_20160831-0533u1/resource/news/2015/04/druid-data/zh/resources/11.png" width="530" style="border: 0px; margin: 0px 10px 10px 0px; padding: 0px; max-width: 100%;"  alt="" /></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">The next diagram shows the management architecture of a Druid cluster, including the nodes and the other components that cluster management depends on (such as the <a href="http://druid.io/docs/latest/ZooKeeper.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">ZooKeeper cluster</a> used for service discovery):</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;"><img _href="img://null" _p="true" src="http://cdn4.infoqstatic.com/statics_s2_20160831-0533u1/resource/news/2015/04/druid-data/zh/resources/22.png" width="530" style="border: 0px; margin: 0px 10px 10px 0px; padding: 0px; max-width: 100%;"  alt="" /></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 25.2px; clear: both; width: 610px; font-family: &quot;Lantinghei SC&quot;, &quot;Open Sans&quot;, Arial, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, STHeiti, &quot;WenQuanYi Micro Hei&quot;, SimSun, Helvetica, sans-serif; background-color: #ffffff;">Druid is open source under the <a href="http://www.apache.org/licenses/LICENSE-2.0" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Apache License 2.0</a>, with the code hosted on <a href="https://github.com/druid-io/druid" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">GitHub</a>; its latest stable release is <a href="https://github.com/druid-io/druid/releases" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">0.7.1.1</a>. Druid currently has 63 code contributors and nearly 2,000 watchers. Its main contributors include the advertising-analytics startup Metamarkets, the movie-streaming site <a href="https://www.netflix.com/global" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Netflix</a>, Yahoo, and others. The Druid team also publishes comparisons of Druid with <a href="http://druid.io/docs/latest/Druid-vs-Impala-or-Shark.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Shark</a>, <a href="http://druid.io/docs/latest/Druid-vs-Vertica.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Vertica</a>, <a href="http://druid.io/docs/latest/Druid-vs-Cassandra.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Cassandra</a>, <a href="http://druid.io/docs/latest/Druid-vs-Hadoop.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Hadoop</a>, <a href="http://druid.io/docs/latest/Druid-vs-Spark.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Spark</a>, and <a href="http://druid.io/docs/latest/Druid-vs-Elasticsearch.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Elasticsearch</a>, covering fault tolerance, flexibility, query performance, and more. For further information, see the official <a href="http://druid.io/docs/latest/Tutorial%3a-A-First-Look-at-Druid.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">tutorial</a>, <a href="http://static.druid.io/docs/druid.pdf" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">white paper</a>, and <a href="http://druid.io/docs/latest/Design.html" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">design documents</a>.</p><img src ="http://www.blogjava.net/xiaomage234/aggbug/431777.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-09-08 14:45 <a 
href="http://www.blogjava.net/xiaomage234/archive/2016/09/08/431777.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>What Is It Like to Do Ops Monitoring with Big-Data Thinking?</title><link>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431755.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Tue, 06 Sep 2016 08:50:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431755.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/431755.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431755.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/431755.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/431755.html</trackback:ping><description><![CDATA[from:http://www.36dsj.com/archives/55359<br /><br /><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Author: 祝威廉</p><ul style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Engineering data, e.g. ticket volume, SLA availability, base infrastructure, failure rates, alert statistics</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Business data, e.g. business dashboards, trace call chains, business topology failover, business metrics, business baseline data, business log mining</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Data visualization</strong></li></ul><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Of course, that article covers which data and metrics ops has and how to present them. It does not discuss how to integrate with big-data architectures so that the data truly comes alive.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Coincidentally, Sang Wenfeng (formerly of Baidu) also covered multi-dimensional log analysis in his talk, and over dinner a friend from Youku discussed business monitoring with me. An earlier article of mine, 《大数据给公司带来了什么》, also mentioned how big data helps ops as a whole; since that piece was a survey of big-data use cases, it could not go into the integration of ops and big data in detail.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">The text above is just a preamble. Before the formal discussion, one point is worth stressing:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Although this article is about applying big-data thinking and architecture to ops and platformizing ops work, it is not really about big data itself; we are merely applying big-data processing methods and ideas to ops. So even if your company has no data team to back you, your existing team can absolutely do all of this.</p><h3><strong style="box-sizing: border-box;">1 The current state of ops monitoring</strong></h3><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Monitoring at many companies has traits like these:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">It covers only the basic infrastructure layer, with tools like Zabbix providing server, CPU, and memory monitoring. This matters, but it is not the core of ops.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Business monitoring is the most complex part. Many companies are either still in the slash-and-burn era of shell scripts, or have strong development capability but attack the problem piecemeal: different businesses need different monitoring systems, and anyone can build a monitoring tool, system, or platform as they see fit. The overall result is rather chaotic.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Using a third-party monitoring platform. This seems common for products built on Rails/Node.js/Python stacks. I won't comment much; you will know how it feels once you have used one.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Some companies do abstract things very well. Dianping's monitoring is reportedly excellent: ops people are so free that they spend their days using their monitoring to find issues for developers, driving continuous improvement. Their guiding ideas are mainly two:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Ops automation. Do whatever achieves this goal; it depends heavily on the planning ability and experience of whoever drives it.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Abstraction: abstract from the actual problems at hand and build the corresponding system. Deployment is needed, so there is a release system; config files need managing, so there is a configuration system; logs need analyzing, so there is a log-analysis system. But this stays fragmented.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">I digress; let's stay focused on monitoring.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">If we think in big-data terms, how should we do monitoring well?</p><h3><strong style="box-sizing: border-box;">2 Enumerate your data sources</strong></h3><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">As the article 《大数据对于运维的意义》 noted, there are mainly engineering data and business data. All data sources share one thing: logs, whether text or binary. Logs are the source of all this information, and they contain enough for us to trace the following:</p><ul style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">System health monitoring</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Finding the root cause of failures</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Diagnosing and tuning system bottlenecks</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Tracking security issues</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">What can we mine from logs?</strong></li></ul><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Abstracted, I think it comes down to one thing: metrics.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Metrics can be further classified:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Business level, e.g. group-buying page visits per second, coupons validated per second, payments and orders created per minute</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Application level: each application's error count, call paths, average latency, maximum latency, 95th percentile, etc.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">System resource level: CPU, memory, swap, disk, load, main-process liveness, etc.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Network level: packet loss, ping liveness, traffic, TCP connection count, etc.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Every item in every category is itself a metric.</p><h3><strong style="box-sizing: border-box;">3 How to implement it uniformly</strong></h3><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Never solve problems one at a time. One mindset of big-data architecture is to ask: can I provide a platform that lets everyone solve these problems conveniently? Rather than: can I solve this particular problem?</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">First, the architecture diagram:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"></p><center style="box-sizing: border-box; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><img class="size-full wp-image-55362" src="http://www.36dsj.com/wp-content/uploads/2016/07/713.jpg" alt="Architecture" width="623" height="275" data-tag="bdshare" data-bd-imgshare-binded="1" style="box-sizing: border-box; border: 0px; vertical-align: middle; margin: 0px auto; display: block; max-width: 100%; height: auto;" /></center><span style="color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Since I currently lead application-layer development and the business is still small, I mainly need to monitor three systems:</span><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"></p><ul style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Recommendation</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Search</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Unified query engine</strong></li></ul><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">So the monitoring architecture is slightly simpler. If you want log storage and after-the-fact batch analysis, you can adopt Taobao's architecture:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"></p><center style="box-sizing: border-box; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><img class="size-full wp-image-55363" src="http://www.36dsj.com/wp-content/uploads/2016/07/714.jpg" alt="Architecture" width="635" height="461" data-tag="bdshare" data-bd-imgshare-binded="1" style="box-sizing: border-box; border: 0px; vertical-align: middle; margin: 0px auto; display: block; max-width: 100%; height: auto;" /></center><span style="color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">A quick note: the log-collection agent can be Flume; the EagleEye Storm cluster is just a Storm cluster (possibly Taobao's internal Java version). Storm (or Spark Streaming in the first diagram) does two things:</span><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"></p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Filter, format, or store the logs</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Perform real-time computation and store the metric data in HBase</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">So far we have not written any code; everything uses common big-data components. How many servers these components need depends on log volume; anywhere from three to five machines up to hundreds will do.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Only two places need development: one is one-off, the other ongoing.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">The one-off part is the dashboard system, which just pulls data out of HBase for display. There seems to be an open-source stack for this too, ELK, though its storage layer is ES rather than HBase. I won't discuss it in detail here.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">The ongoing part is Spark Streaming (Taobao uses Storm; I recommend Spark Streaming because it can compute over time windows as well as over fixed batch sizes). Here you define the log-processing logic that produces the metrics listed above.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">The benefit is platformization: responding to new monitoring needs becomes much faster, with development-to-launch possibly taking only a few hours. If some system needs a new metric one day, we just write a Spark Streaming program and drop it into the platform; done.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">I have already implemented the platform in the first diagram. On Spark Streaming I currently do only three fairly basic kinds of monitoring, which should be enough for now.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">A status-code dashboard: a ranking of URLs (query parameters stripped) by HTTP response code. Open the page and you see the top 100 URLs producing 500 errors, along with the system each URL belongs to.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">A response-latency dashboard: a ranking of URLs by request latency. Open the page and you see the top 100 URLs (query parameters stripped) by average response time over the past five minutes.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">And a trace system, similar to Google's Dapper or Taobao's EagleEye. Given a unique UUID, you can trace the request chain of a specific request and see how each dependent service responded, e.g. its response time. For a large system composed of several, or even hundreds of, services, this is invaluable for pinpointing which API of which system is the problem. The hardest part is instrumenting a unified underlying RPC/HTTP framework. Because I use our self-developed ServiceFramework, instrumenting the communication layer was simple. If your business lines are complex and systems are built with different technologies, brace yourself before attempting this.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Now, to monitor whether a system is alive, you no longer need to write a script that checks whether its pid exists; if the platform sees no logs from it within a certain period, it can be considered dead. And if the system misbehaves, say with many slow queries, the dashboard will certainly show it.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">From this description we can see where this architecture's strengths lie:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Almost no systems to build yourself. From log collection to log storage to result storage, everything is an off-the-shelf component.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Good scalability. Every component runs as a cluster with no single point of failure, and every component scales horizontally; when log volume grows, just add machines.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Development is more focused. You only need to care about the actual log analysis and distilling metrics.</p><h3><strong style="box-sizing: border-box;">4 Big-data thinking</strong></h3><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">Applying big-data thinking to ops monitoring takes three steps:</p><ul style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;"><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Find the data</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Analyze and define what you can get from the data</strong></li><li style="box-sizing: border-box;"><strong style="box-sizing: border-box;">Pick the components you need from the big-data platform and assemble them like building blocks</strong></li></ul><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">The most reliable output of any system is its logs. Whether a system is healthy and what happened, we used to find out by digging through logs after an incident or by writing scripts to analyze them periodically. Now all of that can be consolidated onto an existing platform; the only thing we have to do is define the log-processing logic.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">A few caveats:</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">If you have a complex product line, log formats will be very painful, because the Storm (or Spark Streaming) stage will need a lot of compatibility shims. My opinion: first, there is no better way, so write the shims; second, push everyone toward a unified log format. Do both at once. If I can't finish in a month, can I take two years? One day everyone will share a unified log format.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">If you have spare development capacity, or a data team to back you, you can also persist the data that flows into Spark Streaming and run ad-hoc queries with Spark SQL. That way, for metrics you had not considered up front, you can do multi-dimensional analysis directly on the logs; once the analysis proves useful and should be kept, update your Spark Streaming program accordingly.</p><h3><strong style="box-sizing: border-box;">Afterword</strong></h3><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">When I implemented the architecture in the first diagram, it took only a bit more than a day from setup to finishing the Spark Streaming program and landing the data in HBase. Modifying ServiceFramework for the trace metrics took another two or three days, because trace analysis really is complex. Another labor-intensive piece is the visualization pages, which I cannot build myself yet; that will wait until we hire a web developer.</p><p style="box-sizing: border-box; margin: 0px 0px 16px; color: #666666; font-family: &quot;Microsoft Yahei&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 27px; background-color: #ffffff;">End.</p><img src ="http://www.blogjava.net/xiaomage234/aggbug/431755.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-09-06 16:50 <a href="http://www.blogjava.net/xiaomage234/archive/2016/09/06/431755.html#Feedback" target="_blank" 
style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>In-Depth Interview: Huawei Open-Sources CarbonData, a Data Format Bringing Second-Level Ad Hoc Query Response to Big Data</title><link>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431751.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Tue, 06 Sep 2016 07:49:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431751.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/431751.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/09/06/431751.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/431751.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/431751.html</trackback:ping><description><![CDATA[<p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;">Huawei has announced the open-sourcing of the CarbonData project, which passed the Apache community vote on June 3 and entered the Apache Incubator. CarbonData is a lightweight file storage format offering low-latency queries, with storage decoupled from compute. Compared with SQL on Hadoop solutions, traditional NoSQL, or search systems such as ElasticSearch, what advantages does CarbonData bring? What does its technical architecture look like, and what are the plans for its future? We interviewed the technical lead of the CarbonData project to find out.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> When was the CarbonData project started, and why open-source it to the Apache Incubator now? What has its open-source journey been, and what is the project's current status?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</span></strong> The CarbonData project grew step by step out of Huawei's years of data-processing experience and industry understanding. In 2015 we re-architected the system, evolving it into a general-purpose columnar store on HDFS which, integrated with the Spark engine, forms a distributed OLAP analytics solution.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Huawei has long supplied big data platform solutions to telecom, finance, and IT enterprises. From those many customer scenarios we kept distilling data characteristics and summarizing typical demands on big data analytics, and the CarbonData architecture gradually took shape.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">In the IT field, only open-source openness can ultimately connect the data of more customers and partners and generate greater business value. <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">Open-sourcing is about building an end-to-end ecosystem: CarbonData is storage-layer technology, and to deliver value it must integrate effectively with the compute and query layers; only as part of a complete ecosystem can it truly realize its value.</strong></p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Moreover, Apache is today the most authoritative open-source organization in big data; its Hadoop and Spark have become the de facto open-source standards, and we fully endorse Apache's community-driven model of technical progress. So we chose to join Apache and build the project together with the community, making CarbonData part of the big data ecosystem.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">The CarbonData open-source project passed the Apache community vote on June 3 and entered the Apache Incubator. GitHub address: https://github.com/apache/incubator-carbondata. Everyone is welcome to join the Apache CarbonData community: https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.md.</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> What circumstances or opportunities led to the idea of building CarbonData? What difficulties had you encountered in earlier projects?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, 
&quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</span></strong> We have always faced heavy demand for high-performance data analytics. Traditionally, reports, dashboards, and interactive queries are built with a database plus a BI tool, but as enterprise data grows, business-driven analytics demands ever more flexibility, and some customers want analytic capabilities beyond SQL; the traditional approach gradually fell short of customer needs, and that gave us the idea for the CarbonData project.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">The requirements generally come from a few directions.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">First, deployment.</strong> Unlike the single-node systems of the past, enterprise customers want a distributed solution that copes with ever-growing data and can scale out at any time by adding commodity servers.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">Second, business functionality.</strong> Many enterprises are midway through migrating their business from traditional databases to big data platforms, so the platform must be highly compatible with legacy workloads; chiefly this means full standard SQL support plus support for multiple analysis scenarios. At the same time, to save cost, enterprises want &#8220;one copy of data to support many usage scenarios&#8221;: batch processing with large-scale scans and computation, OLAP multi-dimensional interactive analysis, ad hoc queries over detail records, low-latency primary-key point lookups, and real-time queries over live data, all with second-level query response.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">Third, ease of use.</strong> With BI tools, the OLAP model for business analysis must be built inside the tool, which in some scenarios limits the flexibility of the data model and the available analysis methods. In the big data era, the open-source community has formed an ecosystem that advances constantly, and new analysis tools keep appearing; enterprise customers therefore want to evolve along with the community and quickly apply those new tools to their own data for greater business value.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Meeting all of these requirements at once is clearly a major challenge for any big data platform. To satisfy them we kept accumulating experience in real projects and tried many different solutions, but found none that solved every problem with a single approach.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">The first thing that comes to mind for low-latency queries over distributed storage is <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">a KV-style NoSQL database (such as HBase or Cassandra)</strong>, which addresses low-latency primary-key lookups. But if the query pattern shifts even slightly, say to flexible combinations of multiple dimensions, the point lookup degenerates into a full table scan and performance drops sharply. Secondary indexes can sometimes ease this, but they bring their own maintenance and synchronization burdens, so KV storage is not a general-purpose answer to enterprise problems.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">For general multi-dimensional queries, we sometimes <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">consider the multi-dimensional time-series database approach (such as LinkedIn Pinot)</strong>. Their data enters the system as time series and is pre-aggregated and indexed; thanks to this pre-computation they answer multi-dimensional queries very quickly over very fresh data, combining multi-dimensional analysis with real-time processing, and they see wide use in performance monitoring and real-time metric analysis. But the supported query types are limited: because the data is pre-computed, this architecture generally cannot answer detail-record queries and does not support joins across tables, which inevitably constrains enterprise usage.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">Another category is search systems (such as Apache Solr and ElasticSearch)</strong>. They can produce multi-dimensional summaries as well as query detail records, offer fast boolean queries over inverted indexes, and sustain high concurrency; they seem to be exactly what we were looking for. In practice, however, we found two problems: <strong style="margin: 0px; padding: 0px; max-width: 100%; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">first</strong>, search systems are generally designed for unstructured data and their data expansion ratio tends to be high; under an enterprise relational data model the storage is not compact, so data volume balloons. <strong style="margin: 0px; padding: 0px; max-width: 100%; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Second</strong>, a search system's data organization is tightly bound to its compute engine, so once loaded the data can only be processed by that engine, which again defeats the customer's wish to apply a variety of community analysis tools. So search systems, too, have only their own niche.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">The last category is the SQL on Hadoop solutions now emerging in large numbers in the community, represented by Hive, SparkSQL, and Flink.</strong> Their hallmark is separated compute and storage, providing standard SQL over files stored on HDFS. In deployability and ease of use they meet enterprise needs, and in business scenarios they cover scans, aggregation, detail records, and more, so they can be regarded as a class of general-purpose solutions. To raise performance, open-source projects such as Spark and Flink keep optimizing their own architectures, but the emphasis falls on the compute engine and the SQL optimizer; improving storage and data organization is not the focus.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">So we can see that although many current big data systems nominally support every class of query, <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">each is designed with a bias toward one class of scenario</strong>; outside its target scenarios it is either unsupported or degenerates into a full table scan. As a result, to cover batch processing, multi-dimensional analysis, and detail-record queries, customers often end up replicating the data and maintaining one copy per scenario.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData was designed precisely to break this limitation: keep only one copy of the data while optimally supporting multiple usage scenarios.</strong></p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><img data-ratio="0.3296500920810313" data-s="300,640" data-type="png" data-w="543" src="http://read.html5.qq.com/image?src=forum&amp;q=5&amp;r=0&amp;imgflag=7&amp;imageUrl=http://mmbiz.qpic.cn/mmbiz/cokWkYcF4DeVKYqrBI8ZVMychkVQkoM61JL28fpR46Ob1ueU2GyXWJ27eXSb4jCHXiaFCN9fFicAlRulDCzd9Ffw/0?wx_fmt=png" style="margin: 0px; padding: 0px; border: 0px; max-width: 100%; height: auto !important; box-sizing: border-box !important; word-wrap: break-word !important;"  alt="" /><br style="margin: 0px; padding: 0px; max-width: 100%; 
box-sizing: border-box !important; word-wrap: break-word !important;" /></p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> Could you walk us through CarbonData's technical architecture? What are its features and advantages?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</strong></span> The opening of the entire big data era arguably traces back to Google's MapReduce paper, which spawned the Hadoop open-source project and the chain of ecosystem growth that followed. Its &#8220;greatness&#8221; lies in an architecture that decouples compute from storage, freeing part of enterprise workloads (mainly batch processing) from traditional vertical solutions; compute and storage can be scaled on demand, which greatly improves business agility, and this computing model has spread widely among enterprises, to their benefit.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Although MapReduce opened the big data era, it raises batch performance through sheer brute-force scanning plus distributed computation, so it cannot meet customers' <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">low-latency query</strong> requirements in every scenario.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">In the current ecosystem, the closest fit to those customer requirements is actually the search-engine class of solutions. Through good data organization and indexing, a search engine provides many kinds of fast query, yet its storage layer is <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">tightly coupled</strong> to its compute engine, which does not match the enterprise expectation of &#8220;one copy of data, many scenarios&#8221;.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">That inspired us: why not build a more efficient data organization for general-purpose compute engines, keeping the decoupled compute/storage architecture while providing high-performance queries? With that idea we launched the CarbonData project. Separating compute from storage for a broader set of workloads became CarbonData's <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">architectural design philosophy</strong>.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">Once that philosophy was settled, we naturally chose an <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">architecture based on HDFS plus general-purpose compute engines</strong>, because it provides scale-out so well. The next question we asked ourselves was what this architecture still lacks. In it, HDFS provides file replication and read/write, while the compute engine reads files and performs distributed computation; the division of labor is clear, solving storage management and computation respectively. But it is easy to see that to fit more scenarios HDFS made a great &#8220;sacrifice&#8221;: it gave up understanding file contents. Precisely because it did, computation can proceed only by full scans, and in the end neither storage nor compute can exploit the characteristics of the data for optimization.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">So CarbonData puts its <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">main effort</strong> into optimizing data organization, with the ultimate aim of improving both IO performance and compute performance. To that end, CarbonData does the following.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData basic features</strong></p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">1. Multi-dimensional data clustering:</strong> at load time, data is reorganized along multiple dimensions so that it becomes &#8220;more cohesive in the multi-dimensional space&#8221;, which yields better compression ratios in storage and better data-filtering efficiency in computation.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">2. Indexed columnar file structure:</strong> first, CarbonData designs several levels of index for different scenario classes and folds in some search features: cross-file multi-dimensional indexes, in-file multi-dimensional indexes, per-column min/max indexes, and within-column inverted indexes. Second, to suit HDFS's storage characteristics, CarbonData's indexes are stored together with the data files: part of the index is itself data, and the rest lives in the file's metadata structure, and both enjoy HDFS's data-local access.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">3. Column groups:</strong> overall, CarbonData is a columnar format, but compared with row storage, columnar layouts pay a high record-reassembly cost on detail-record queries. To improve detail-query performance, CarbonData supports column groups: fields that are rarely used as filter conditions but must be returned in result sets can be stored as a column group, and after CarbonData encoding those fields are stored row-wise to speed up queries.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">4. Data types:</strong> CarbonData currently supports the common base types found in databases, plus the complex nested types Array and Struct. The community has also proposed a Map type, which we plan to add in the future.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">5. Compression:</strong> CarbonData currently supports Snappy compression, applied to each column separately, since the columnar layout makes compression very effective. Depending on the application scenario, data compression ratios generally fall between 2 and 8.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">6. Hadoop integration:</strong> by supporting the InputFormat/OutputFormat interfaces, CarbonData can exploit Hadoop's distributed strengths and be used anywhere in the Hadoop-based ecosystem.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData advanced features</strong></p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">1. Computable encodings:</strong> besides common encodings such as Delta, RLE, Dictionary, and BitPacking, CarbonData can also encode multiple columns jointly, and it applies global dictionary encoding to enable decode-free computation: the compute framework can aggregate, sort, and so on directly over the encoded data, a very noticeable performance gain for shuffle-heavy queries.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">2. Joint optimization with compute engines:</strong> to make efficient use of CarbonData's optimized data organization, CarbonData provides targeted optimization strategies. The community first built a deep integration with Spark, enhancing the SparkSQL framework with filter pushdown, late materialization, and incremental load, while supporting the full DataFrame API. We believe that, through the community's efforts, more compute frameworks will integrate with CarbonData and realize the value of its data organization.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">All of these features have been merged into the Apache CarbonData trunk; everyone is welcome to use them.</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> For which scenarios do you recommend it? What do performance tests show? Are there deployment cases, and what are the current adoption and user base in China?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word 
!important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</strong></span> Recommended scenarios: those where one copy of storage must simultaneously serve fast scans, multi-dimensional analysis, and detail-record queries. In Huawei's customer cases, compared with existing columnar solutions in the industry, CarbonData delivers a <strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">5~30x performance improvement</strong>.</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">For performance test data, application cases, and more, follow the WeChat public account ApacheCarbonData and the community at https://github.com/apache/incubator-carbondata.</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> Can CarbonData pair well with the currently red-hot Spark? Which other mainstream frameworks is it compatible with?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</strong></span> CarbonData is already deeply integrated with Spark; see the advanced features described above.</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ:</strong></span> What are the project's plans for the future? Will more features be added? And how will you sustain maintenance of the project after open-sourcing it?</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData:</span></strong> The community's next priorities are improving the system's ease of use and completing ecosystem integration (for example, integrating with Flink and Kafka to enable real-time data ingestion into CarbonData).</p><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;">In its first month as an open-source project CarbonData already received several hundred commits from more than 20 contributors, so the project will stay active, and its 10-plus core contributors will continue to invest in building the community.</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><strong 
style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ：</span></strong>在CarbonData设计研发并进入Apache孵化器的过程中，经历了哪些阶段，经历过的最大困难是什么？有什么样的感受或经验可以和大家分享的吗？</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; line-height: 28px; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData</span>：</span></strong>CarbonData团队大多数人都有参与Apache Hadoop、Spark等社区开发的经验，我们对社区流程和工作方式都很熟悉。最大的困难是进入孵化器阶段，去说服Apache社区接纳大数据生态新的高性能数据格式CarbonData。我们通过5月份在美国奥斯丁的开源盛会OSCON上，做CarbonData技术主题演讲和现场DEMO演示，展示了CarbonData优秀的架构和良好的性能效果。</p></blockquote><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 1.75em; 
box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #0080ff; box-sizing: border-box !important; word-wrap: break-word !important;">InfoQ：</span></strong>您们是一个团队吗？如何保证您们团队的优秀成长？</p><blockquote style="margin: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; border-left-width: 3px; border-left-color: #dbdbdb; max-width: 100%; color: #3e3e3e; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; background-color: #ffffff;"><p style="margin: 15px 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; line-height: 1.75em; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; color: #00d100; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="margin: 0px; padding: 0px; max-width: 100%; line-height: 28px; box-sizing: border-box !important; word-wrap: break-word !important;">CarbonData</span>：</span></strong>CarbonData团队是一个全球化的（工程师来自中国、美国、印度）团队，这种全球化工作模式的经验积累，让我们能快速的适应Apache开源社区工作模式。</p></blockquote><p style="margin: 0px; padding: 0px; color: #333333; max-width: 100%; clear: both; min-height: 1em; white-space: pre-wrap; font-family: &quot;Helvetica Neue&quot;, Helvetica, &quot;Hiragino Sans GB&quot;, &quot;Microsoft YaHei&quot;, 微软雅黑, Arial, sans-serif; font-size: 18px; line-height: 28.8px; box-sizing: border-box !important; word-wrap: break-word !important; 
background-color: #ffffff;"><strong style="margin: 0px; padding: 0px; max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">采访嘉宾：</strong>Apache CarbonData的PMC、Committers李昆、陈亮。</p><img src ="http://www.blogjava.net/xiaomage234/aggbug/431751.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-09-06 15:49 <a href="http://www.blogjava.net/xiaomage234/archive/2016/09/06/431751.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>ElasticSearch安装和配置head、bigdesk、IkAnalyzer</title><link>http://www.blogjava.net/xiaomage234/archive/2016/04/15/430105.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Fri, 15 Apr 2016 06:03:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/04/15/430105.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/430105.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/04/15/430105.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/430105.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/430105.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; 摘要: from:http://my.oschina.net/pangyangyang/blog/361753ElasticSearch的安装http://www.elasticsearch.org/下载最新的ElastiSearch版本。解压下载文件。cd到${esroot}/bin/，执行elasticsearch启动。使用curl -XPOST localhost:9200/_shutdown关闭E...&nbsp;&nbsp;<a href='http://www.blogjava.net/xiaomage234/archive/2016/04/15/430105.html'>阅读全文</a><img src ="http://www.blogjava.net/xiaomage234/aggbug/430105.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-04-15 14:03 <a 
href="http://www.blogjava.net/xiaomage234/archive/2016/04/15/430105.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Hadoop at Ten: An Interpretation and Development Forecast</title><link>http://www.blogjava.net/xiaomage234/archive/2016/03/29/429867.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Tue, 29 Mar 2016 08:59:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/03/29/429867.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/429867.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/03/29/429867.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/429867.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/429867.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; Abstract: from: http://www.infoq.com/cn/articles/hadoop-ten-years-interpretation-and-development-forecast Editor's note: Hadoop was born on January 28, 2006, ten years ago now. It has changed how enterprises store, process, and analyze data, accelerated the development of big data, built an extremely popular technology ecosystem of its own, and has been very widely adopted. As Hadoop turned ten in 2016...&nbsp;&nbsp;<a href='http://www.blogjava.net/xiaomage234/archive/2016/03/29/429867.html'>Read more</a><img src ="http://www.blogjava.net/xiaomage234/aggbug/429867.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-03-29 16:59 <a href="http://www.blogjava.net/xiaomage234/archive/2016/03/29/429867.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Choosing a Search Engine: Elasticsearch vs. Solr</title><link>http://www.blogjava.net/xiaomage234/archive/2016/03/17/429700.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Thu, 17 Mar 2016 07:16:00
GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/03/17/429700.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/429700.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/03/17/429700.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/429700.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/429700.html</trackback:ping><description><![CDATA[<h1>Search Engine Selection Survey</h1><h2><a name="t2"></a>Introduction to Elasticsearch<a href="http://fuxiaopang.gitbooks.io/learnelasticsearch" target="_blank"><sup>*</sup></a></h2><p>Elasticsearch is a real-time distributed search and analytics engine. It lets you work with large-scale data at unprecedented speed.</p><p>It can be used for full-text search, structured search, and analytics, or any combination of the three.</p><p>Elasticsearch is a search engine built on Apache Lucene&#8482;, arguably the most advanced and efficient full-featured open-source search library available today.</p><p>Lucene, however, is only a library. To take full advantage of it you must work in Java and embed Lucene in your application, and it takes a great deal of study to understand how it runs; Lucene is genuinely complex.</p><p>Elasticsearch uses Lucene as its internal engine, but for full-text search you only need its unified, ready-made API, without understanding the complex Lucene machinery behind it.</p><p>Of course Elasticsearch is more than just Lucene. Beyond full-text search, it also provides:</p><ul><li><p>A distributed real-time document store in which every field is indexed and searchable.</p></li><li><p>A distributed search engine with real-time analytics.</p></li><li><p>Scaling to hundreds of servers, handling petabytes of structured or unstructured data.</p></li></ul><p>All of this is available on a single server, and you can easily talk to Elasticsearch's RESTful API from any client or programming language you like.</p><p>Getting started with Elasticsearch is very easy. It ships with many sensible defaults, which spares beginners from having to face complicated theory up front.</p><p>It works as soon as it is installed, so you can become productive at very little learning cost.</p><p>As you go deeper, you can take advantage of Elasticsearch's more advanced capabilities; the whole engine is flexibly configurable, and you can tailor your own Elasticsearch to your needs.</p><p>Use cases:</p><ul><li><p>Wikipedia uses Elasticsearch for full-text search with keyword highlighting, plus search-as-you-type and did-you-mean suggestions.</p></li><li><p>The Guardian uses Elasticsearch to process visitor logs so that editors get real-time feedback on how the public responds to each article.</p></li><li><p>StackOverflow combines full-text search with geolocation and related data to surface more-like-this related questions.</p></li><li><p>GitHub uses Elasticsearch to search more than 130 billion lines of code.</p></li><li><p>Goldman Sachs uses it to index 5 TB of data every day, and many investment banks use it to analyze stock-market movements.</p></li></ul><p>But Elasticsearch is not only for large enterprises; it has also helped startups such as DataDog and Klout extend their products.</p><h2><a name="t3"></a>Pros and cons of Elasticsearch<a href="http://stackoverflow.com/questions/10213009/solr-vs-elasticsearch" target="_blank"><sup>*</sup></a><a href="http://huangx.in/22/translation-solr-vs-elasticsearch" target="_blank"><sup>*</sup></a>:</h2><h3><a name="t4"></a>Pros</h3><ol><li>Elasticsearch is distributed. It needs no other components, and replication is real-time, known as &#8220;push replication&#8221;.</li><li>Elasticsearch fully supports Apache Lucene's near-real-time search.</li><li>Handling multitenancy (<a href="http://en.wikipedia.org/wiki/Multitenancy" target="_blank">multitenancy</a>) requires no special configuration, whereas Solr needs more advanced setup.</li><li>Elasticsearch's Gateway concept makes full backups simpler.</li><li>Nodes form a peer-to-peer topology; when some nodes fail, others are automatically assigned to take over their work.</li></ol><h3><a name="t5"></a>Cons</h3><ol><li>Only one developer (no longer true: the Elasticsearch GitHub organization now has quite active maintainers).</li><li>Not automatic enough (not suited to the then-new Index Warmup API).</li></ol><h2><a name="t6"></a>Introduction to Solr<a href="http://zh.wikipedia.org/wiki/Solr" target="_blank"><sup>*</sup></a></h2><p>Solr (pronounced &#8220;solar&#8221;) is the open-source enterprise search platform of the Apache Lucene project. Its main features include full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich-document (e.g. Word, PDF) handling. Solr is highly scalable and provides distributed search and index replication. It is the most popular enterprise search engine, and Solr 4 added NoSQL support.</p><p>Solr is a standalone full-text search server written in Java that runs in a servlet container such as Apache Tomcat or Jetty. It uses the Lucene Java search library at its core for full-text indexing and search, and exposes REST-like HTTP/XML and JSON APIs. Solr's powerful external configuration lets it be adapted to many kinds of applications without any Java coding, and it has a plugin architecture to support more advanced customization.</p><p>Because the Apache Lucene and Apache Solr projects merged in 2010, both are produced by the same Apache Software Foundation development team; when referring to the technology or product, &#8220;Lucene/Solr&#8221; and &#8220;Solr/Lucene&#8221; mean the same thing.</p><h2><a name="t7"></a>Pros and cons of Solr</h2><h3><a name="t8"></a>Pros</h3><ol><li>Solr has a larger, more mature community of users, developers, and contributors.</li><li>It supports indexing many formats, such as HTML, PDF, the Microsoft Office family, and plain-text formats like JSON, XML, and CSV.</li><li>Solr is mature and stable.</li><li>When it is not indexing and searching at the same time, search is faster.</li></ol><h3><a name="t9"></a>Cons</h3><ol><li>Search efficiency drops while the index is being built, and search over a real-time index is not efficient.</li></ol><h2><a name="t10"></a>Comparing Elasticsearch and Solr<a href="http://blog.socialcast.com/realtime-search-solr-vs-elasticsearch/" target="_blank"><sup>*</sup></a></h2><p>For pure search over existing data, Solr is faster.</p><p><img src="http://i.zhcy.tk/images/search_fresh_index_while_idle.png" alt="Search Fresh Index While Idle" /></p><p>When indexing in real time, Solr suffers I/O blocking and its query performance degrades, while Elasticsearch shows a clear advantage.</p><p><img src="http://i.zhcy.tk/images/search_fresh_index_while_indexing.png" alt="search_fresh_index_while_indexing" /></p><p>As data volume grows, Solr's search efficiency keeps falling, while Elasticsearch shows no obvious change.</p><p><img src="http://i.zhcy.tk/images/search_fresh_index_while_indexing2.png" alt="search_fresh_index_while_indexing" /></p><p>In summary, Solr's architecture is not suited to real-time search applications.</p><h2><a name="t11"></a>Production environment test<a href="http://blog.socialcast.com/realtime-search-solr-vs-elasticsearch/" target="_blank"><sup>*</sup></a></h2><p>The chart below shows a 50x improvement in average query time after switching the search engine from Solr to Elasticsearch.</p><p><img src="http://i.zhcy.tk/images/average_execution_time.jpg" alt="average_execution_time" /></p><h2><a name="t12"></a>Summary of the Elasticsearch vs. Solr comparison</h2><ul><li>Both are easy to install;</li><li>Solr relies on ZooKeeper for distributed management, while Elasticsearch has distributed coordination built in;</li><li>Solr supports more data formats, while Elasticsearch accepts only JSON;</li><li>Solr officially offers more features, while Elasticsearch focuses on the core and leaves advanced features to third-party plugins;</li><li>Solr performs better than Elasticsearch in traditional search applications, but is clearly less efficient for real-time search applications.</li></ul><p>Solr is a strong solution for traditional search applications, but Elasticsearch is better suited to emerging real-time search applications.</p><h2><a name="t13"></a>Other open-source Lucene-based search solutions<a href="http://mail-archives.apache.org/mod_mbox/hbase-user/201006.mbox/%3C149150.78881.qm@web50304.mail.re2.yahoo.com%3E" target="_blank"><sup>*</sup></a></h2><ol><li>Using&nbsp;<a href="http://lucene.apache.org/" target="_blank">Lucene</a> directly</li></ol><p>What it is: Lucene is a Java search library; it is not a complete solution in itself and requires extra development work.</p><p>Pros: a mature solution with many success stories; a top-level Apache project that keeps advancing quickly; a large, active developer community. Being just a library, it leaves plenty of room for customization and optimization: simple customization covers the vast majority of common needs, and with tuning it can support search at the 1-billion-plus scale.</p><p>Cons: extra development work is required; all scaling, distribution, and reliability must be implemented yourself. It is not real-time: there is a delay between indexing and searchability, and the scalability of the current &#8220;near real time&#8221; (Lucene Near Real Time search) approach still needs improvement.</p><ul><li><a href="http://katta.sourceforge.net/" target="_blank">Katta</a></li></ul><p>What it is: a Lucene-based, distributed, scalable, fault-tolerant, near-real-time search solution.</p><p>Pros: works out of the box and can pair with Hadoop for distribution; has scaling and fault-tolerance mechanisms.</p><p>Cons: it is a search solution only; you still have to build the indexing side yourself. Only the most basic search features are implemented. It has few success stories and somewhat lower maturity. Because it must support distribution, customizing it for complex query needs is fairly hard.</p><ul><li><a href="http://svn.apache.org/repos/asf/hadoop/mapreduce/trunk/src/contrib/index/README" target="_blank">Hadoop contrib/index</a></li></ul><p>What it is: a Map/Reduce-style distributed indexing solution, usable together with Katta.</p><p>Pros: distributed index building with scalability.</p><p>Cons: an indexing solution only, with no search implementation; works in batch mode, with poor support for real-time search.</p><ul><li><a href="http://sna-projects.com/" target="_blank">LinkedIn's open-source stack</a></li></ul><p>What it is: a family of Lucene-based solutions, including zoie (near-real-time search), bobo (faceted search), decomposer (machine-learning algorithms), krati (snapshot store), sensei (database-schema wrapper), and more.</p><p>Pros: a proven solution supporting distribution and scaling, with rich feature implementations.</p><p>Cons: tied too closely to LinkedIn; customizability is relatively poor.</p><ul><li><a href="https://github.com/tjake/Lucandra" target="_blank">Lucandra</a></li></ul><p>What it is: based on Lucene, with the index stored in a Cassandra database.</p><p>Pros: see Cassandra's strengths.</p><p>Cons: see Cassandra's weaknesses. Moreover, it is only a demo and has not been validated at scale.</p><ul><li><a href="https://github.com/akkumar/hbasene" target="_blank">HBasene</a></li></ul><p>What it is: based on Lucene, with the index stored in an HBase database.</p><p>Pros: see HBase's strengths.</p><p>Cons: see HBase's weaknesses. In addition, in this implementation Lucene terms are stored as rows, but each term's posting lists are stored as columns; as a single term's posting list grows, query speed suffers badly.</p><p>&nbsp;</p><p>Reposted from: http://blog.csdn.net/jameshadoop/article/details/44905643</p><img src ="http://www.blogjava.net/xiaomage234/aggbug/429700.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-03-17 15:16 <a href="http://www.blogjava.net/xiaomage234/archive/2016/03/17/429700.html#Feedback" target="_blank"
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>解读2015之大数据篇：大数据的黄金时代</title><link>http://www.blogjava.net/xiaomage234/archive/2016/01/15/429064.html</link><dc:creator>小马歌</dc:creator><author>小马歌</author><pubDate>Fri, 15 Jan 2016 07:01:00 GMT</pubDate><guid>http://www.blogjava.net/xiaomage234/archive/2016/01/15/429064.html</guid><wfw:comment>http://www.blogjava.net/xiaomage234/comments/429064.html</wfw:comment><comments>http://www.blogjava.net/xiaomage234/archive/2016/01/15/429064.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/xiaomage234/comments/commentRss/429064.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/xiaomage234/services/trackbacks/429064.html</trackback:ping><description><![CDATA[<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">编者按</span></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">2015年，整个IT技术领域发生了许多深刻而又复杂的变化，InfoQ策划了&#8220;解读2015&#8221;年终技术盘点系列文章，希望能够给读者清晰地梳理出技术领域在这一年的发展变化，回顾过去，继续前行。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
本文">
#ffffff;">本文是大数据解读篇，在这篇文章里我们将回顾2015展望2016，看看过去的一年里广受关注的技术有哪些进展，了解下数据科学家这个职业的火热。&nbsp;在关键技术进展部分我们在大数据生态圈众多技术中选取了Hadoop、Spark、Elasticsearch和Apache&nbsp;Kylin四个点，分别请了四位专家：Hulu的董西成、明略数据的梁堰波、<span style="margin: 0px; border: 0px; padding: 0px; line-height: 20.8px;">精硕科技</span>的卢亿雷、eBay的韩卿，来为大家解读2015里的进展。</p><div style="margin: 0px; border: 0px; height: 0px; clear: both; font-size: 0px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"></div><h2>回顾2015年的关键技术进展：</h2><h3><span style="margin: 0px; border: 0px; padding: 0px;">Hadoop：</span></h3><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino 
Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Hadoop作为大数据平台中最基础与重要的系统，在2015年提高稳定性的同时，发布了多个重要功能与特性，这使得Hadoop朝着多类型存储介质和异构集群的方向迈进了一大步。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">HDFS&nbsp;</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">HDFS&nbsp;之前是一个以磁盘单存储介质为主的分布式文件系统。但随着近几年新存储介质的兴起，支持多存储介质早就提上了日程。如今，HDFS&nbsp;已经对多存储介质有了良好的支持，包括&nbsp;Disk、Memory&nbsp;和&nbsp;SSD&nbsp;等，对异构存储介质的支持，使得&nbsp;HDFS&nbsp;朝着异构混合存储方向发展。目前HDFS支持的存储介质如下：</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">ARCHIVE：高存储密度但耗电较少的存储介质，通常用来存储冷数据。</p><div id="lowerFullwidthVCR" style="margin: 0px; border: 0px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"></div><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi 
Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">DISK：磁盘介质，这是HDFS最早支持的存储介质。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">SSD：固态硬盘，是一种新型存储介质，目前被不少互联网公司使用。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">RAM_DISK&nbsp;：数据被写入内存中，同时会往该存储介质中再（异步）写一份。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">YARN</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">YARN作为一个分布式数据操作系统，主要作用是资源管理和资源调度。在过去一年，YARN新增了包括基于标签的调度、对长服务的支持、对&nbsp;Docker&nbsp;的支持等多项重大功能。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
#ffffff;">&nbsp;基于标签的调度，使得&nbsp;YARN&nbsp;能够更好地支持异构集群调度。它的基本思想是，通过打标签的方式为不同的节点赋予不同的属性，这样，一个大的Hadoop集群按照节点类型被分成了若干个逻辑上相互独立（可能交叉）的集群。这种集群跟物理上独立的集群很不一样，用户可以很容易地通过动态调整&nbsp;label，实现不同类型节点数目的增减，这具有很好的灵活性。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">对长服务的支持，使得YARN逐渐变为一个通用资源管理和调度系统。目前，YARN既支持像类似&nbsp;MapReduce，Spark&nbsp;的短作业，也支持类似&nbsp;Web&nbsp;Service，MySQL&nbsp;这样的长服务。&nbsp;支持长服务是非常难的一件事情，YARN&nbsp;需要解决以下问题：服务注册、日志滚动、ResourceManager&nbsp;HA、NodeManager&nbsp;HA（NM&nbsp;重启过程中，不影响&nbsp;Container）和&nbsp;ApplicationMaster&nbsp;永不停止，重启后接管之前的&nbsp;Container。截止2.7.0版本，以上问题都已经得到了比较完整的解决。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">对Docker的支持，使得YARN能够为上层应用提供更好的打包、隔离和运行方式。YARN通过引入一种新的ContainerExecutor，即DockerContainerExecutor，实现了对Docker的支持，但目前仍然是alpha版本，不建议在生产环境中使用。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">HBase</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, 
Helvetica, sans-serif; background-color: #ffffff;">在&nbsp;2015&nbsp;年，HBase&nbsp;迎来了一个里程碑&#8212;&#8212;HBase&nbsp;1.0&nbsp;release，这也代表着&nbsp;HBase&nbsp;走向了稳定。&nbsp;HBase新增特性包括：更加清晰的接口定义，多&nbsp;Region&nbsp;副本以支持高可用读，Family&nbsp;粒度的&nbsp;Flush以及RPC&nbsp;读写队列分离等。</p><h3><span style="margin: 0px; border: 0px; padding: 0px;">Spark：</span></h3><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">2015年的Spark发展很快，JIRA数目和PR数目都突破了10000，contributors数目超过了1000，可以说是目前最火的开源大数据项目。这一年Spark发布了多个版本，每个版本都有一些亮点：</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2014年12月，<a href="http://www.infoq.com/cn/news/2014/12/spark-1.2-release-mllib-sql" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Spark&nbsp;1.2发布</a>引入ML&nbsp;pipeline作为机器学习的接口。</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年3月，<a href="http://www.infoq.com/cn/news/2015/03/apache-1.3-released" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Spark&nbsp;1.3发布</a>引入了DataFrame作为Spark的一个核心组件。</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年6月，Spark&nbsp;1.4发布引入R语言作为Spark的接口。R语言接口在问世一个多月之后的调查中就有18%的用户使用。</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年9月，<a 
href="http://www.infoq.com/cn/news/2015/09/apache-spark-1-5" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Spark&nbsp;1.5发布</a>。Tungsten项目第一阶段的产出合并入DataFrame的执行后端，DataFrame的执行效率得到大幅提升。</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2016年1月，<a href="http://www.infoq.com/cn/news/2016/01/spark-16-release" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Spark&nbsp;1.6发布</a>引入Dataset接口。</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Spark目前支持四种语言的接口，除了上面提到的R语言的使用率以外，Python的使用率也有很大提升，从2014年的38%提升到2015年的58%；而Scala接口的使用率有所下降，从84%下降到71%。同时Spark的部署环境也有所变化，51%的部署在公有云上，48%&nbsp;使用standalone方式部署，而在YARN上的只有40%了。可见Spark已经超越Hadoop，形成了自己的生态系统。而在形成Spark生态系统中起到关键作用的一个feature就是外部数据源支持，Spark可以接入各种数据源的数据，然后把数据导入Spark中进行计算、分析、挖掘和机器学习，然后可以把结果在写出到各种各样的数据源。到目前为止Spark已经支持非常多的外部数据源，像Parquet/JSON/CSV/JDBC/ORC/HBase/Cassandra/Mongodb等等。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">上面这些调查数据来自美国，中国的情况有所区别，但是还是有一定的借鉴意义的。国内的Spark应用也越来越多：腾讯的Spark规模到了8000+节点，日处理数据1PB+。阿里巴巴运行着目前最长时间的Spark&nbsp;Job：1PB+数据规模的Spark&nbsp;Job长达1周的时间。百度的硅谷研究院也在探索Spark+Tachyon的应用场景。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi 
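Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">上文提到的外部数据源支持，思路上可以理解为：一个统一的读取入口，按格式名分发到各数据源的具体实现。下面用纯 Python 的一个极简“格式名 -> 读取函数”注册表来示意这一思路（仅为概念示意，并非 Spark Data Source API 的真实接口）：</p>

```python
# 纯 Python 概念示意：用注册表模拟“统一入口、按格式分发”的外部数据源读取
# （类似 spark.read.format(...).load(...) 的思路，并非 Spark 的真实实现）
import csv, io, json

READERS = {}

def register(fmt):
    """注册一种格式的读取函数"""
    def wrap(fn):
        READERS[fmt] = fn
        return fn
    return wrap

@register("json")
def read_json(text):
    # 每行一个 JSON 对象（与 Spark 的 JSON Lines 约定一致）
    return [json.loads(line) for line in text.splitlines() if line.strip()]

@register("csv")
def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def load(fmt, text):
    """统一入口：按格式名分发到已注册的读取函数"""
    return READERS[fmt](text)

rows = load("json", '{"name": "spark", "stars": 10000}')
print(rows[0]["name"])  # spark
```

新增一种数据源时只需注册一个新的读取函数，统一入口保持不变，这正是“接入各种数据源、统一进行计算”所依赖的扩展方式。<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi 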
Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Spark&nbsp;MLlib的ALS算法已经在很多互联网公司用于其推荐系统中。基本上主流的互联网公司都已经部署了Spark平台并运行了自己的业务。上面说的更多是互联网的应用，实际上Spark的应用场景还有很多。Databricks公司的调查显示，主要应用依次是：商务智能、数据仓库、推荐系统、日志处理、欺诈检测等。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">除了互联网公司以外，传统IT企业也把Spark作为其产品的一个重要组成。IBM在今年6月的Spark&nbsp;Summit期间宣布重点支持Spark这个开源项目，同时还开源了自己的机器学习系统SystemML并推进其与Spark的更好合作。美国大数据巨头Cloudera、Hortonworks和MapR都表示Spark是其大数据整体解决方案的核心产品。可以预见Spark是未来若干年最火的大数据项目。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">在深度学习方面2015年可谓非常热闹，如Google开源其第二代机器学习系统TensorFlow，Facebook开源Torch和人工智能硬件服务器Big Sur等等。Spark社区也不甘落后，在1.5版本中发布了一个神经网络分类器MultilayerPerceptronClassifier作为其深度学习的雏形。虽然这个模型还有很多地方需要优化，大家不妨尝试下，毕竟它是唯一一个基于通用计算引擎的分布式深度学习系统。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">除了现在非常火的深度学习，在传统统计和机器学习领域，Spark这一年也有非常大的变化，包括GLM的全面支持，SparkR&nbsp;GLM的支持，A/B&nbsp;test，以及像WeightedLeastSquares这样的底层优化算法等。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
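#ffffff;">上面提到的 WeightedLeastSquares 这类底层求解器，本质上是在解加权正规方程 (X^T W X) beta = X^T W y。下面用纯 Python 给出二维特征情形的最小示意（仅为说明原理，并非 Spark MLlib 的实现）：</p>

```python
# 纯 Python 示意：二维特征的加权最小二乘，直接解正规方程 (X^T W X) beta = X^T W y
# 仅为演示原理，并非 Spark MLlib WeightedLeastSquares 的真实实现
def weighted_least_squares(X, y, w):
    """X: n×2 特征, y: n 个标签, w: n 个样本权重；返回 (beta0, beta1)"""
    a = sum(wi * x[0] * x[0] for x, wi in zip(X, w))       # X^T W X 的 [0][0]
    b = sum(wi * x[0] * x[1] for x, wi in zip(X, w))       # [0][1] == [1][0]
    d = sum(wi * x[1] * x[1] for x, wi in zip(X, w))       # [1][1]
    u = sum(wi * x[0] * yi for x, yi, wi in zip(X, y, w))  # X^T W y 的第 0 项
    v = sum(wi * x[1] * yi for x, yi, wi in zip(X, y, w))  # 第 1 项
    det = a * d - b * b
    # 2×2 线性方程组用克莱姆法则直接求解
    return ((d * u - b * v) / det, (a * v - b * u) / det)

# y = x0 + 2*x1 的精确数据，应解出 beta = (1, 2)
X, y, w = [[1, 0], [0, 1], [1, 1]], [1, 2, 3], [1.0, 1.0, 1.0]
print(weighted_least_squares(X, y, w))  # (1.0, 2.0)
```

维度更高时解的仍是同一个线性方程组，只是一般要借助 Cholesky 分解等数值方法，这也正是此类底层求解器所做的事情。<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 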
#ffffff;">具体内容可以看梁堰波在InfoQ上的年终回顾：《<a href="http://www.infoq.com/cn/articles/2015-Review-Spark" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">解读2015之Spark篇：新生态系统的形成</span></a>》。</p><h3><span style="margin: 0px; border: 0px; padding: 0px;">Elasticsearch：</span></h3><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Elasticsearch&nbsp;是一个可伸缩的开源全文搜索和分析引擎。它可以快速地存储、搜索和分析海量数据。Elasticsearch&nbsp;基于成熟的&nbsp;Apache&nbsp;Lucene&nbsp;构建，在设计时就是为大数据而生，能够轻松的进行大规模的横向扩展，以支撑PB级的结构化和非结构化海量数据的处理。Elasticsearch生态圈发展状态良好，整合了众多外围辅助系统，如监控Marvel，分析Logstash，安全Shield等。近年来不断发展受到广泛应用，如Github、StackOverflow、维基百科等，是数据库技术中倍受关注的一匹黑马。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Elasticsearch在今年下半年发布了2.0版本，性能提升不少，主要改变为：</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Pipeline&nbsp;Aggregation</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', 
Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">流式聚合，像管道一样，对聚合的结果进行再次聚合。原来client端需要做的计算工作下推到ES，简化client代码，更容易构建强大的查询。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Query/Filter&nbsp;合并</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">取消filters，所有的filter语句自动转换为query语句：在上下文语义是query时，进行相关性计算；上下文语义是filter时，简单排除不匹配的doc，像现在的filter所做的一样。这个重构意味着所有的query执行会以最有效的顺序自动优化。例如，子查询和地理查询会首先执行一个快速的模糊步骤，然后用一个稍慢的精确步骤截断结果。在filter上下文中，cache有意义时，经常使用的语句会被自动缓存。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">可配置的store&nbsp;compression</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', 
存储的field">
#ffffff;">存储的field，例如_source字段，可以使用默认的LZ4算法快速压缩，或者使用DEFLATE算法减少index&nbsp;size。对于日志类的应用尤其有用，旧的索引库在优化前可以切换到best_compression。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Hardening</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Elasticsearch运行于Java&nbsp;Security&nbsp;Manager之下，这在安全性上是一个巨大的飞跃：Elasticsearch变得难于探测，黑客对系统的影响也被严格限制。索引方面也有加强：indexing请求ack前doc会被fsync，默认写持久化；所有文件都计算checksum，以提前检测文件损坏；所有文件的rename操作都是原子的（atomic），避免部分写文件。对于系统管理员来讲，一个呼声较高的变化是可以避免未配置的node意外加入Elasticsearch集群：默认只绑定localhost，multicast也被移除，鼓励使用unicast。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Performance&nbsp;and&nbsp;Resilience</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; 
除上">
background-color: #ffffff;">除上所述，Elasticsearch和Lucene还有很多小的变化，使其更加稳定可靠、易于配置，例如：</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">默认启用doc&nbsp;value，带来更少的heap&nbsp;usage；filter&nbsp;caching更多使用bitsets；type&nbsp;mappings大清理，更安全可靠、无二义性；cluster&nbsp;state使用diff进行快速变化传播，带来更稳定的大规模集群。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Core&nbsp;plugins</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">官方支持的core&nbsp;plugins同时发布，和Elasticsearch核心使用相同的版本号。</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Marvel&nbsp;2.0.0&nbsp;free&nbsp;to&nbsp;use&nbsp;in&nbsp;production</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: 
none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Marvel免费供生产环境使用。</p><h3><span style="margin: 0px; border: 0px; padding: 0px;">Apache&nbsp;Kylin：</span></h3><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Apache&nbsp;Kylin是一个开源的分布式分析引擎，提供Hadoop之上的SQL查询接口及多维分析（OLAP）能力以支持超大规模数据，由eBay&nbsp;Inc.&nbsp;开发并贡献至开源社区：2014年10月1日正式开源，同年11月加入Apache孵化器项目，并在一年后的2015年11月顺利毕业成为Apache顶级项目。它是eBay全球贡献至Apache软件基金会（ASF）的第一个项目，也是全部由在中国的华人团队整体贡献至Apache的第一个项目。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">在eBay，已经上线两个生产环境平台，有着诸多的应用，包括用户行为分析、点击分析、商户分析、交易分析等应用，最新的Streaming分析项目也已经上线。目前在eBay平台上最大的单个cube包含了超过1000亿的数据，90%查询响应时间小于1.5秒，95%的查询响应时间小于5秒。同时Apache&nbsp;Kylin在eBay外部也有很多用户：京东、美团、百度地图、网易、唯品会、Expedia、Expotional等很多国内外公司已经在实际环境中使用起来，把Apache&nbsp;Kylin作为他们大数据分析的基础之一。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">过去的一年多是Apache&nbsp;Kylin发展历程中重要的一年：</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, 
sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2014年10月1日，Kylin&nbsp;代码在github.com上正式开源</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2014年11月25日，正式加入Apache孵化器并正式启用Apache&nbsp;Kylin作为项目名称</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年6月10日，Apache&nbsp;Kylin&nbsp;v0.7.1-incubating发布，这是加入Apache后的第一个版本，依据Apache的规范作了很多修改，特别是依赖包，license等方面，同时简化了安装，设置等，并同时提供二进制安装包</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年9月6日，Apache&nbsp;Kylin&nbsp;v1.0-incubating正式发布，增强了SQL处理，提升了HBase&nbsp;coprocessor&nbsp;的性能，同时提供了Zeppelin&nbsp;Interpreter等</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年9月16日，Apache&nbsp;Kylin与Spark，Kafka，Storm，H2O，Flink，Elasticsearch，Mesos等一起荣获InfoWorld&nbsp;Bossie&nbsp;Awards&nbsp;2015：最佳开源大数据工具奖，这是业界对Kylin的认可</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年11月18日，Apache&nbsp;Kylin正式毕业成为Apache顶级项目</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2015年12月15日，Apache&nbsp;Kylin&nbsp;v1.2正式发布，这是升级为顶级项目后的第一个版本，提供了对Excel，PowerBI，Tableau&nbsp;9等的支持，对高基维度增强了支持，修复了多个关键Bug等</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">2016年，Apache&nbsp;Kylin将迎来重要的2.x版本，该版本对底层架构和设计作了重大重构，提供可插拔的设计及Lambda架构，同时提供对历史数据查询，Streaming及Realtime查询等，同时在性能，任务管理，UI等各个方面提供增强。</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
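#ffffff;">上文多次提到的多维分析（OLAP）预计算，其核心思想是把各个维度组合（cuboid）上的聚合提前算好，查询时直接命中预聚合结果。下面用纯 Python 给出这一思想的最小示意（仅为演示，与 Kylin 的真实实现无关）：</p>

```python
# 纯 Python 概念示意：对所有维度子集（cuboid）预聚合度量，模拟 Kylin 的预计算思想
from itertools import combinations

def build_cube(rows, dims, measure):
    """rows: dict 列表；dims: 维度名列表；measure: 度量名。
    返回 {维度子集: {维度取值元组: 度量之和}}"""
    cube = {}
    for r in range(len(dims) + 1):
        for cuboid in combinations(dims, r):
            agg = {}
            for row in rows:
                key = tuple(row[d] for d in cuboid)
                agg[key] = agg.get(key, 0) + row[measure]
            cube[cuboid] = agg
    return cube

rows = [
    {"city": "beijing", "cat": "book", "gmv": 10},
    {"city": "beijing", "cat": "phone", "gmv": 20},
    {"city": "shanghai", "cat": "book", "gmv": 5},
]
cube = build_cube(rows, ["city", "cat"], "gmv")
print(cube[("city",)][("beijing",)])  # 30：按城市聚合，查询时直接命中
print(cube[()][()])                   # 35：总计 cuboid
```

n 个维度会产生 2^n 个 cuboid，这也解释了为什么 Cube 构建本身的计算开销巨大，需要专门的构建算法持续优化。<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 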
同时">
#ffffff;">同时，过去一年也是社区发展的重要一年，一年内发展了来自eBay、美团、京东、明略数据、网易等公司的众多committer，社区每天的讨论也非常热闹。社区提交了很多新特性和Bug修复，包括来自美团的不同HBase写入、来自京东的明细数据查询、来自网易的多Hive源等多个重大特性，为Apache&nbsp;Kylin带来了巨大的增强。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">社区合作</span></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">在开源后的一年时间内，Apache&nbsp;Kylin也和其他社区建立了良好的合作关系。Apache&nbsp;Calcite作为Kylin的SQL引擎被深入地整合进来，我们也向Calcite提交了很多改进和修复，Calcite的作者Julian&nbsp;Hyde也是Kylin的mentor。HBase是Kylin的存储层，在实际运维中，我们碰到过无数问题，从可靠性到性能到其他各个方面，Kylin社区和HBase社区积极合作解决了绝大部分关键问题。另外，现在越来越多的用户考虑使用Apache&nbsp;Zeppelin作为前端查询和展现的工具，为此我们开发了Kylin&nbsp;Interpreter并贡献给了Zeppelin，目前可以直接从最新版的Zeppelin代码库中看到这部分代码。同样，我们也和其他各个社区积极合作，包括Spark、Kafka等，为构建和谐的社区氛围和形成良好合作打下了坚实的基础。</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">技术发展</span></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
技术上">
#ffffff;">Technically, Apache Kylin's development over the past year has focused on the following areas:</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Fast Cubing</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">In current releases, cube computation relies on MapReduce, and a build takes multiple MR jobs; the number of jobs grows with the number of dimensions. Each MR job start and stop must wait for cluster scheduling, and intermediate data between jobs is repeatedly written to HDFS and transferred, consuming a great deal of cluster resources. We therefore introduced a new algorithm, Fast Cubing, which completes the entire cube computation in a single MapReduce job. Test results show that overall cubing time drops by roughly 30-50% and network transfer by about a factor of five, a considerable performance improvement on very large datasets.</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Streaming OLAP</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
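技术上">
#ffffff;">The single-pass idea behind Fast Cubing, described above, can be made concrete with a minimal, hypothetical Python sketch (the dimension names and rows are invented; this is not Kylin's actual MapReduce code). Each input record updates all 2^n cuboids at once, instead of one MapReduce round per cube layer:</p>

```python
from collections import defaultdict
from itertools import combinations

def fast_cube(rows, dims, measure):
    """Single-pass cubing: each input row updates every cuboid
    (every subset of the dimensions) at once, instead of one
    MapReduce round per cube layer."""
    cuboids = defaultdict(lambda: defaultdict(int))
    for row in rows:
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                key = tuple(row[d] for d in subset)
                cuboids[subset][key] += row[measure]
    return cuboids

# Hypothetical sales rows with two dimensions and one measure.
rows = [
    {"site": "cn", "device": "app", "gmv": 10},
    {"site": "cn", "device": "web", "gmv": 5},
    {"site": "us", "device": "app", "gmv": 7},
]
cube = fast_cube(rows, ("site", "device"), "gmv")
print(cube[("site",)][("cn",)])  # 15
print(cube[()][()])              # 22, the grand total
```

<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Roughly speaking, the real algorithm splits this work across mappers, each building partial cuboids from its data split, and merges them in the reducer, which is why a single job suffices.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 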
Kylin作为">
#ffffff;">As a precomputation system, Kylin inevitably has some data-refresh latency. For most use cases this is not a problem, but as business and technology evolve, demand for streaming and even real-time analytics keeps growing. In 2015, Kylin's main development effort went into Streaming OLAP. To support low-latency data refresh, the overall architecture and design were substantially reworked; Kylin can now read data from Kafka and perform aggregation over it, exposing a standard SQL interface to front-end clients, with data latency down to the minute level.</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Spark Cubing</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">With Spark positioned as a MapReduce alternative, the community has repeatedly asked whether Kylin can use Spark directly for computation. In the second half of 2015 we implemented a Spark Cubing engine based on the same algorithm; it is currently still in testing.</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Pluggable Architecture</span></li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
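Kylin作为">
#ffffff;">The minute-level latency described in the Streaming OLAP section comes from pre-aggregating small time windows as events arrive. A tiny, hypothetical Python sketch of that windowed pre-aggregation (plain Python with invented event data, not Kylin's actual Kafka ingestion code):</p>

```python
from collections import defaultdict

def micro_batch_aggregate(events, window_s=60):
    """Bucket (timestamp, key, value) events into fixed time windows
    and pre-aggregate each bucket, so queries only read small,
    already-summarized windows (minute-level data latency)."""
    buckets = defaultdict(lambda: defaultdict(int))
    for ts, key, value in events:
        window_start = ts - ts % window_s  # align to the window boundary
        buckets[window_start][key] += value
    return buckets

# Hypothetical click events: (epoch seconds, site, count)
events = [(3, "cn", 4), (50, "cn", 6), (61, "us", 2)]
agg = micro_batch_aggregate(events)
print(agg[0]["cn"])   # 10: both events fall in the first minute window
print(agg[60]["us"])  # 2
```

<p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">In a real deployment the events would stream from a Kafka topic and the window aggregates would be merged into cube segments; the windowing arithmetic is the part this sketch illustrates.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 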
为了更广泛">
#ffffff;">For broader extensibility, and to support the new features above, the Kylin 2.x codebase introduces a pluggable architecture and design that removes dependencies on specific technologies. In the new design, data can be read from Hive, SparkSQL and other SQL-on-Hadoop technologies, with Kafka also supported. On the compute side, besides MapReduce-based Fast Cubing, Spark Cubing and Streaming Cubing engines have been implemented, and extension points are left for future compute frameworks. For storage, HBase remains the only backend for now, but the upper layers are well abstracted, so extending to other key-value systems will be straightforward.</p><h2>Big Data and Machine Learning</h2><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Machine learning is an indispensable part of data analysis and has been hailed as the future of big data analytics and business intelligence. A successful machine learning project depends on many factors, including choosing the right problem, the runtime environment, a sound model and, most importantly, the data at hand; big data gives machine learning plenty of room to prove itself.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Machine learning is quickly turning from a technical topic followed by few into a management tool used by many. With excellent algorithms, big data and high-performance compute all in place, the field is advancing rapidly: this year machine learning entered Gartner's Hype Cycle report for the first time, in the same adoption stage as big data, and was the first technology to appear in the report. 2015 was a bumper year for machine learning, with many notable events.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">The major players open-sourced their tools:</p><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">In January 2015, <a href="http://www.infoq.com/cn/news/2015/01/facebook-fbcunn" target="_blank" style="text-decoration: 
none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Facebook open-sourced</a> its cutting-edge deep learning tools for &#8220;<a href="http://www.infoq.com/cn/news/2015/01/facebook-open-source-torch" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Torch</a>&#8221;.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">In April 2015, Amazon launched its machine learning platform, Amazon Machine Learning, a fully managed service that lets developers easily build and deploy predictive models from historical data.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">In November 2015, <a href="http://www.infoq.com/cn/news/2015/11/tensorflow" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Google open-sourced</a> its machine learning platform, TensorFlow.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">The same month, <a href="http://www.infoq.com/cn/news/2015/11/tensorflow-vs-dmtk-vs-systemml" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">IBM open-sourced SystemML</a>, which became an official Apache incubator project.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Meanwhile, Microsoft Research Asia open-sourced its Distributed Machine Learning Toolkit (DMTK) on GitHub. DMTK consists of a framework for distributed machine learning and a set of distributed algorithms, allowing machine learning to be applied to big data.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">In December 2015, Facebook open-sourced &#8220;<a href="http://www.infoq.com/cn/news/2015/12/Facebook-BigSur-OpenSource" target="_blank" style="text-decoration: none; color: #286ab2; outline: none !important; margin: 0px; border: 0px; padding: 0px;">Big Sur</a>&#8221;, a server for neural-network research equipped with high-performance graphics processing units (GPUs) and designed specifically for deep learning.</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', 
SimSun, Helvetica, sans-serif; background-color: #ffffff;">Big companies are not only using open source to strengthen their machine learning tooling; they are also acquiring machine learning capability outright. IBM, for example, acquired AlchemyAPI in March of this year; AlchemyAPI uses deep learning to collect text and images published by companies and websites for text recognition and data analysis.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Moreover, 2015 was not only about the big players: startups built on machine learning held equal ground. EverString, for instance, closed its Series B; the company combines a customer's internal sales data with continuously mined external data such as global news and social media, using machine learning to automatically build quantitative customer models that predict prospective customers for enterprises.</p><h2>The Rise of the Data Scientist</h2><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">Big data needs data analysis, and data analysis needs talent. &#8220;Data science&#8221; is a long-standing term, but &#8220;data scientist&#8221; has appeared only in recent years. Behind Google, Amazon, Quora, Facebook and other large companies stand teams of data science professionals who turn mountains of data into an exploitable gold mine. In the big data era, demand for data scientists and other analytics talent is surging.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">According to industry reports, the big data talent gap in China has reached one million, and a senior data mining engineer can command a monthly salary of 30K-50K RMB. Job sites post large numbers of big data openings every day; statistics from Lagou show that big data job demand grew 2.4x from 2014 to 2015. Talent development is urgent: Fudan University established the country's first college of big data this year; Aliyun announced 30 additional partner universities at year end, planning to train 50,000 data scientists in three years through cloud computing and big data programs; and well-known universities are adding data science master's programs.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: 
无论是国内">
#ffffff;">At home and abroad alike, data science is one of the hottest research fields, and data scientist and data analyst are highly sought-after roles; almost every industry needs data scientists to mine valuable information from large volumes of data. Chief-level titles dedicated to big data analytics are increasingly common: this year the US government appointed DJ Patil as its Chief Data Scientist, the first time the role of &#8220;data scientist&#8221; has existed inside the US government.</p><h2>Looking Ahead to 2016</h2><ul style="margin: 0px 0px 15px 10px; padding: 0px; border: 0px; clear: left; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Hadoop: HDFS will move toward heterogeneous storage media, especially support for emerging media; YARN will evolve toward general-purpose resource management and scheduling rather than being limited to big data workloads, strengthening support for short-lived applications such as MapReduce and Spark while improving support for long-running services such as web services.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">HBase: more effort will go into stability and performance. Directions being explored include exploiting HDFS's multiple storage media, reducing the reliance on ZooKeeper, and using off-heap memory to ease the impact of Java GC.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Spark 2.0 is expected around March or April next year and will establish an architecture centered on DataFrame and Dataset, with large performance gains across the board.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Apache Kylin 2.0 is about to be released; as its improvements mature, it will push OLAP on Hadoop a step further in 2016!</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">The Elasticsearch open-source search platform, machine learning, data graphics and data visualization will be even hotter in 2016.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Big data will keep getting bigger, with IoT and social media remaining major drivers.</li><li style="margin: 0px 0px 0px 15px; padding: 0px; border: 0px; float: none; clear: none;">Big data security and privacy will continue to draw attention.</li></ul><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, 
&nbsp;</p>">
sans-serif; background-color: #ffffff;">&nbsp;</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">About the Experts</span></p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Dong Xicheng (董西成)</span> works at Hulu, focusing on distributed computing and resource management systems. He is the author of 《Hadoop技术内幕：深入解析MapReduce架构设计与实现原理》 and 《Hadoop技术内幕：深入解析YARN架构设计与实现原理》 (the Hadoop Internals books on MapReduce and YARN architecture and implementation), and blogs at dongxicheng.org.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Liang Yanbo (梁堰波)</span> is a technical partner at MiningLamp (明略数据), an open-source enthusiast, and a core contributor to Apache Spark. He holds a master's degree in computer science from Beihang University and previously worked at Yahoo!, Meituan and France Telecom on machine learning and recommender systems, with rich project experience in big data, machine learning and distributed systems.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 
卢亿雷">
0px;">Lu Yilei (卢亿雷)</span> is VP of technology and chief architect at AdMaster (精硕科技), a senior big data expert, a member of the CCF (China Computer Federation) Task Force on Big Data, and an adjunct professor at Beihang University. He is responsible for the entire data flow, collection, cleansing, storage and mining, ensuring highly reliable, available, scalable and performant services, and provides offline, streaming and real-time distributed computing services on Hadoop, HBase, Storm, Spark and Elasticsearch. He has a deep understanding of and hands-on experience with distributed storage and computing, very large clusters and big data analytics, with more than ten years of experience in cloud computing, cloud storage and big data. He previously worked at Lenovo, Baidu and Carbonite, and holds multiple big-data-related patents and papers.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"><span style="font-weight: 600; margin: 0px; border: 0px; padding: 0px;">Han Qing (Luke Han)</span> is the big data platform product lead in eBay's global Analytics Data Infrastructure (ADI) organization and VP and co-founder of Apache Kylin. He manages and drives Apache Kylin's vision, roadmap, features and planning, grows adoption across teams worldwide, develops internal and external partners, manages the open-source community, and builds relationships with big data vendors, integrators and end users to create a robust Apache Kylin ecosystem. He has more than ten years of experience in big data, data warehousing and business intelligence.</p><p style="margin: 0px 0px 15px; padding: 0px; border: 0px; float: none; line-height: 1.8; clear: none; width: 610px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;">&nbsp;</p><div style="margin: 0px; border: 0px; height: 0px; clear: both; font-size: 0px; font-family: 'Lantinghei SC', 'Open Sans', Arial, 'Hiragino Sans GB', 'Microsoft YaHei', 微软雅黑, STHeiti, 'WenQuanYi Micro Hei', SimSun, Helvetica, sans-serif; background-color: #ffffff;"></div><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/xiaomage234/" target="_blank">小马歌</a> 2016-01-15 15:01</div>]]></description></item></channel></rss>