https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html

http://www.slideshare.net/databricks/a-deep-dive-into-structured-streaming

 

Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine.
You can express your streaming computation the same way you would express a batch computation on static data.

The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the Dataset/DataFrame API in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine.

Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs.

In short, Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.

Just as with a static data source, you can use the DataFrame API to express the computation; the queries run on the same optimized Spark SQL engine as batch jobs,
and exactly-once fault tolerance is guaranteed through checkpointing and Write Ahead Logs.
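
As a minimal sketch of this idea (the JSON path and schema below are made-up placeholders), the same DataFrame code applies whether the source is read as a static table or as a stream:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("BatchVsStream").getOrCreate()
import spark.implicits._

// Schema of the hypothetical input files.
val schema = new StructType().add("user", StringType).add("action", StringType)

// Batch: read a static directory of JSON files.
val staticDf = spark.read.schema(schema).json("/tmp/events")

// Streaming: read the same directory as an unbounded stream of new files.
val streamingDf = spark.readStream.schema(schema).json("/tmp/events")

// The same transformation works on both.
val staticCounts = staticDf.groupBy($"action").count()
val streamCounts = streamingDf.groupBy($"action").count()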

 

The only real change is swapping the DStream abstraction for a DataFrame, i.e. an unbounded input table.

This makes structured operations possible,

and processing is essentially the same as for batch data,

so the difference from batch is small.
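
A sketch in the style of the guide's quick example (host and port are placeholders): lines arriving on a socket are treated as rows appended to an unbounded input table, and the word count is expressed with the usual batch-style operators:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("StructuredWordCount").getOrCreate()
import spark.implicits._

// Each line received on the socket becomes a new row of the unbounded input table.
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Exactly the same operators you would use on a batch DataFrame.
val words = lines.as[String].flatMap(_.split(" "))
val wordCounts = words.groupBy("value").count()

// Because there is an aggregation, Complete mode emits the full result table each trigger.
val query = wordCounts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()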

 

The overall process looks like this:

Note that the output mode here is Complete: because the query contains an aggregation, each trigger has to output the statistics accumulated up to now.

The output modes are:

The “Output” is defined as what gets written out to the external storage. The output can be defined in different modes

  • Complete Mode - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.

  • Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only on the queries where existing rows in the Result Table are not expected to change.

  • Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (not available yet in Spark 2.0). Note that this is different from the Complete Mode in that this mode does not output the rows that are not changed.

Complete mode has already been shown in the example above.

Append mode outputs only the newly added rows on each trigger, which is the right fit for queries without aggregation.
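
A minimal sketch contrasting the two modes (host, port, and the console sink are placeholders): the non-aggregated query uses Append mode, the aggregated one uses Complete mode:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val lines = spark.readStream.format("socket")
  .option("host", "localhost").option("port", 9999).load()

// Append mode: no aggregation, so each trigger writes only the newly arrived rows.
val appendQuery = lines.writeStream
  .outputMode("append")
  .format("console")
  .start()

// Complete mode: with an aggregation, each trigger rewrites the entire result table.
val completeQuery = lines.as[String].flatMap(_.split(" "))
  .groupBy("value").count()
  .writeStream
  .outputMode("complete")
  .format("console")
  .start()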

 

Window Operations on Event Time

Spark regards event time as natively supported: it is just a column in the DataFrame, so you simply group by it (typically inside a window).

Late data can also be handled, because results are produced incrementally and an earlier window's count can still be updated when late rows arrive.
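
A sketch of an event-time windowed count (the JSON path, field names, and intervals are assumptions; withWatermark and Update mode were added in Spark 2.1 to bound state and late data):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window
import org.apache.spark.sql.types._

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// Assume input files with fields: word (string) and timestamp (event time).
val schema = new StructType().add("word", StringType).add("timestamp", TimestampType)
val events = spark.readStream.schema(schema).json("/tmp/words")

// Event time is just a column: group by a sliding window over it plus the word.
// The watermark tells the engine how long to keep window state, so data up to
// 10 minutes late is still counted and older state can be dropped.
val windowedCounts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "10 minutes", "5 minutes"), $"word")
  .count()

val query = windowedCounts.writeStream
  .outputMode("update")
  .format("console")
  .start()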

 

Fault Tolerance Semantics

Delivering end-to-end exactly-once semantics was one of the key goals behind the design of Structured Streaming.
To achieve that, we have designed the Structured Streaming sources, the sinks and the execution engine to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers) to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure end-to-end exactly-once semantics under any failure.

The key points: the source can be replayed from offsets, and the sink is idempotent. It is then enough for the Write Ahead Log to record the offsets and for the checkpoint to record the state to achieve exactly-once, since execution is fundamentally micro-batch.
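
A sketch of enabling recovery for a query (paths, host, and port are placeholders): the checkpoint directory holds the write-ahead log of offsets per trigger plus the aggregation state, so a restarted query resumes from where it stopped:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val lines = spark.readStream.format("socket")
  .option("host", "localhost").option("port", 9999).load()

val counts = lines.as[String].flatMap(_.split(" ")).groupBy("value").count()

// checkpointLocation is where the offset WAL and state snapshots are persisted.
val query = counts.writeStream
  .outputMode("complete")
  .format("console")
  .option("checkpointLocation", "/tmp/checkpoints/wordcount")
  .start()

query.awaitTermination()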
