Big Data Ingestion and Streaming Products: An Introduction
Flume
Flume is a distributed system for collecting log data from many sources, aggregating it, and writing it to HDFS. It is designed to be reliable and highly available, while providing a simple, flexible, and intuitive programming model based on streaming data flows. Flume provides extensibility for online analytic applications that process data streams in situ. Flume and Chukwa share similar goals and features, but there are some notable differences. Flume maintains a central list of ongoing data flows, stored redundantly in ZooKeeper; Chukwa, in contrast, distributes this information more broadly among its services. Flume adopts a "hop-by-hop" model, whereas in Chukwa the agents on each machine are responsible for deciding what data to send.
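To make the flow model concrete, here is a minimal sketch of an agent configuration in the style of newer Flume (NG) releases: a properties file that tails an application log and writes the events to HDFS. The agent name, file path, and HDFS URL are hypothetical placeholders, and property names can vary between releases.

    # Hypothetical agent "agent1": one source, one channel, one sink
    agent1.sources = tail-src
    agent1.channels = mem-ch
    agent1.sinks = hdfs-sink

    # Source: tail an application log (path is a placeholder)
    agent1.sources.tail-src.type = exec
    agent1.sources.tail-src.command = tail -F /var/log/app/app.log
    agent1.sources.tail-src.channels = mem-ch

    # Channel: buffer events in memory between source and sink
    agent1.channels.mem-ch.type = memory
    agent1.channels.mem-ch.capacity = 10000

    # Sink: write events to HDFS, partitioned by day
    agent1.sinks.hdfs-sink.type = hdfs
    agent1.sinks.hdfs-sink.hdfs.path = hdfs://namenode:8020/logs/%Y-%m-%d
    agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
    agent1.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true
    agent1.sinks.hdfs-sink.channel = mem-ch

The agent would then be started with the flume-ng command, pointing it at this file and the agent name.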
Chukwa
Log processing was one of the original purposes of MapReduce. Unfortunately, Hadoop is hard to use for this purpose. Writing MapReduce jobs to process logs is somewhat tedious, and the batch nature of MapReduce makes it difficult to use with logs that are generated incrementally across many machines. Furthermore, HDFS still does not support appending to existing files. Chukwa is a Hadoop subproject that bridges the gap between log handling and MapReduce. It provides a scalable distributed system for monitoring and analysis of log-based data. Some of its durability features include agent-side replaying of data to recover from errors. See also Flume.
Sqoop
Apache Sqoop is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. It offers two-way transfer, with both full-table snapshots and incremental updates, so data can move from the database into Hadoop and back.
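As a sketch of such a transfer, the command below shows an incremental Sqoop import from a relational table into HDFS; the JDBC URL, table, and column names are hypothetical, and a corresponding sqoop export can push results back into the database.

    # Copy only rows whose order_id is greater than the last recorded value
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /warehouse/orders \
      --incremental append \
      --check-column order_id \
      --last-value 0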
Kafka
Apache Kafka is a distributed publish-subscribe messaging system. It is designed to provide high-throughput, persistent messaging that is scalable and allows for parallel data loads into Hadoop. Its features include the use of compression to optimize I/O performance and mirroring to improve availability, scalability, and performance in multi-cluster scenarios.
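As a sketch of how data gets into Kafka, the Java snippet below publishes a log line to a topic with the standard Java producer client; the broker addresses and topic name are hypothetical, and the compression setting reflects the I/O optimization mentioned above.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker list; topic "app-logs" is also a placeholder
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("compression.type", "gzip"); // compress batches to reduce I/O
            props.put("acks", "all");              // wait for replicas for durability

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key = source host, value = the raw log line
                producer.send(new ProducerRecord<>("app-logs", "host-01", "INFO application started"));
            }
        }
    }

Consumers on the Hadoop side can then pull these messages in parallel for loading into HDFS.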
Storm
Hadoop is ideal for batch-mode processing over massive data sets, but it doesn't support event-stream (a.k.a. message-stream) processing, i.e., responding to individual events within a reasonable time frame. (For limited scenarios, you could use a NoSQL database like HBase to capture incoming data in the form of append updates.) Storm is a general-purpose, event-processing system that is growing in popularity for addressing this gap in Hadoop. Like Hadoop, Storm uses a cluster of services for scalability and reliability. In Storm terminology you create a topology that runs continuously over a stream of incoming data, which is analogous to a Hadoop job that runs as a batch process over a fixed data set and then terminates. An apt analogy is a continuous stream of water flowing through plumbing. The data sources for the topology are called spouts, and each processing node is called a bolt. Bolts can perform arbitrarily sophisticated computations on the data, including output to data stores and other services. It is common for organizations to run a combination of Hadoop and Storm services to gain the best features of both platforms.
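To make the spout/bolt terminology concrete, the sketch below wires up a small topology in Java: a built-in test spout emits words and a bolt keeps running counts. It assumes a recent Storm release (older versions used the backtype.storm package names), and the test spout merely stands in for a real data source.

    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.testing.TestWordSpout;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class WordCountTopology {

        // Bolt: counts how often each word has been seen and emits the running total.
        public static class CountBolt extends BaseBasicBolt {
            private final java.util.HashMap<String, Integer> counts = new java.util.HashMap<>();

            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                String word = tuple.getString(0);
                int count = counts.merge(word, 1, Integer::sum);
                collector.emit(new Values(word, count));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("word", "count"));
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("words", new TestWordSpout(), 2);        // spout: the data source
            builder.setBolt("counter", new CountBolt(), 4)            // bolt: a processing node
                   .fieldsGrouping("words", new Fields("word"));      // same word -> same task

            LocalCluster cluster = new LocalCluster();                // in-process cluster for testing
            cluster.submitTopology("word-count", new Config(), builder.createTopology());
            Thread.sleep(10000);                                      // let the topology run briefly
            cluster.shutdown();
        }
    }

Unlike a MapReduce job, this topology keeps running until it is explicitly killed; in production it would be submitted to a real cluster with StormSubmitter rather than run in a LocalCluster.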