Feature comparison: Storm (Trident) vs. Spark Streaming

Parallelism model
  Storm: task-parallel continuous computation engine based on a DAG
  Spark Streaming: data-parallel, general-purpose batch processing engine built on Spark

Processing model
  Storm: one record at a time (one-at-a-time event processing)
  Trident: micro-batches (several events processed per batch)
  Spark Streaming: micro-batches (several events processed per batch; see the sketch below)
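To make the micro-batch model concrete, below is a minimal Spark Streaming word-count sketch; the socket source on localhost:9999 and the 2-second batch interval are illustrative choices, not values taken from the comparison. Records arriving during each interval are grouped into one small batch and processed as a single Spark job, whereas Storm's core API hands each tuple to a bolt individually.

```scala
// Minimal micro-batch sketch: lines from a socket are grouped into 2-second batches,
// and the word count below runs once per batch rather than once per record.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MicroBatchWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("micro-batch-word-count").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(2))    // batch interval: 2 seconds

    val lines  = ssc.socketTextStream("localhost", 9999) // illustrative source
    val counts = lines.flatMap(_.split("\\s+"))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)                // evaluated once per micro-batch

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```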

Latency
  Storm: sub-second
  Trident: a few seconds
  Spark Streaming: a few seconds

The following comment exchange from the source article expands on the latency cost of Trident's transactional state updates:

Josh: Thanks for the article! Could you please explain this point in a bit more detail? "But, it relies on transactions to update state, which is slower and often has to be implemented by the user." If I want to write my output to a persistent store, e.g. Redis, then why would it be slower in Storm than in Spark Streaming?

Replies:

  1. Xinh: Hi Josh, please check out the slide about Storm/Trident here: http://spark-summit.org/wp-content/uploads/2013/10/Spark-Summit-2013-Spark-Streaming.pdf
     If you want exactly-once semantics with Trident, you have to store a per-state transaction ID for each state. In word count, for example, you would store for each word both the count and a transaction ID, so each key-value pair looks like (key: word, value: (count, txid)). Before updating the count, you read the old transaction ID to make sure the state is up to date, and that read adds latency. If Redis serves the read from memory this may be acceptable, but if it has to go to disk it adds noticeable latency to every update. In Spark Streaming, you do not have to store a per-state transaction ID.
     For the details of Trident transactional processing, see http://storm.apache.org/documentation/Trident-state

  2. Josh: Hi Xinh, thanks for the explanation. Isn't that similar to Spark checkpointing, where state is saved to HDFS every ~10 seconds? Or is your point that Storm would, by default, persist the state much more frequently than Spark?

  3. Xinh: Hi Josh, yes, fault tolerance in Spark involves periodic (~10 second) checkpointing of RDDs. And yes, my point is that with Storm Trident the persistence happens as each batch is processed, which by default is far more often than once every 10 seconds. When tuning any of these parameters, there is a trade-off between the frequency of persistence and the recovery time after a failure.
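As a rough illustration of the read-before-update cost discussed in the replies above: the sketch below is not the Trident State API; it uses a hypothetical in-memory map standing in for Redis, storing a (count, txid) pair per word, and assumes that a replayed batch carries the same transaction ID as its original run.

```scala
// Conceptual sketch of exactly-once state updates using a per-key transaction ID.
// The map stands in for an external store such as Redis: word -> (count, last applied txid).
import scala.collection.mutable

object TransactionalStateSketch {
  private val store = mutable.Map.empty[String, (Long, Long)]

  // Apply a batch's increment only if this batch's txid has not been applied already.
  def update(word: String, delta: Long, txid: Long): Unit = {
    val (count, lastTxid) = store.getOrElse(word, (0L, -1L)) // the extra read before every write
    if (txid != lastTxid) {                                  // a replayed batch is skipped
      store(word) = (count + delta, txid)
    }
  }

  def main(args: Array[String]): Unit = {
    update("spark", 3, txid = 1)
    update("spark", 3, txid = 1) // same batch replayed after a failure: not double counted
    update("spark", 2, txid = 2)
    println(store("spark"))      // (5,2)
  }
}
```

The extra read of the stored transaction ID on every update is the latency being discussed; because Trident persists state once per batch, that read happens far more often than Spark's roughly 10-second checkpoints.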

Fault tolerance (delivery guarantee)
  Storm: at least once
  Trident: exactly once
  Spark Streaming: exactly once

Origin
  Storm: BackType and Twitter
  Spark: UC Berkeley (AMPLab)

Implementation language
  Storm: Clojure
  Spark: Scala

API languages
  Storm: Java, Python, Ruby, and others
  Spark Streaming: Scala, Java, Python

Platform integration
  Storm: N/A (relies on ZooKeeper for coordination)
  Spark Streaming: part of Spark, so real-time and historical (batch) processing can share the same engine and code (see the sketch below)
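A minimal sketch of what sharing the Spark engine buys: historical data loaded with the ordinary batch API is joined against every micro-batch using the same RDD operations. The HDFS path, the socket source, and the comma-separated key format are assumptions made for illustration only.

```scala
// Sketch: one engine, one API for both historical (batch) and live (streaming) data.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UnifiedBatchAndStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("unified-batch-and-stream").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Historical data, loaded with the regular batch API (path is a placeholder).
    val history = ssc.sparkContext.textFile("hdfs:///data/history.csv")
                     .map(line => (line.split(",")(0), line))

    // Live data; every micro-batch is an RDD, so the same keying and join code applies.
    val live = ssc.socketTextStream("localhost", 9999)
                  .map(line => (line.split(",")(0), line))

    live.transform(batchRdd => batchRdd.join(history)).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```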

Production use and support
  Storm has been around for several years and has run in production at Twitter since 2011, as well as at many other companies.
  Meanwhile, Spark Streaming is a newer project; its only production deployment (that I am aware of) has been at Sharethrough since 2013.

Hadoop distribution integration
  Storm is the streaming solution in the Hortonworks Hadoop data platform.
  Spark Streaming is included in both MapR's distribution and Cloudera's Enterprise data platform, with commercial backing from Databricks.

集群集成,部署方式
依赖zookeeper,standalone,messo
standalone,yarn,messo   
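As a small illustration of the deployment options listed above, Spark chooses its cluster manager through the master URL; the host names and ports below are placeholders, and the YARN option additionally expects the Hadoop configuration to be available.

```scala
// Sketch: the same application targets different cluster managers by changing the master URL.
import org.apache.spark.SparkConf

object MasterUrlExamples {
  def main(args: Array[String]): Unit = {
    val standalone = new SparkConf().setMaster("spark://master-host:7077") // Spark standalone
    val mesos      = new SparkConf().setMaster("mesos://mesos-host:5050")  // Mesos
    val yarn       = new SparkConf().setMaster("yarn")                     // YARN (reads Hadoop config)

    println(Seq(standalone, mesos, yarn).map(_.get("spark.master")).mkString(", "))
  }
}
```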

Google Trends
  (search-interest chart)

Bug burndown charts

https://issues.apache.org/jira/browse/STORM/

https://issues.apache.org/jira/browse/SPARK/
The JIRA activity shows that issues get resolved much more promptly in Spark than in Storm.