Accumulators and Broadcast Variables
Accumulators and broadcast variables cannot be recovered from a checkpoint.

If you want to use accumulators or broadcast variables in an application that has checkpointing enabled, you need to create lazily instantiated singleton instances of them so that they can be re-instantiated after the driver restarts from the checkpoint.
Here is an example:
def getWordBlacklist(sparkContext):
    if ('wordBlacklist' not in globals()):
        globals()['wordBlacklist'] = sparkContext.broadcast(["a", "b", "c"])
    return globals()['wordBlacklist']

def getDroppedWordsCounter(sparkContext):
    if ('droppedWordsCounter' not in globals()):
        globals()['droppedWordsCounter'] = sparkContext.accumulator(0)
    return globals()['droppedWordsCounter']

def echo(time, rdd):
    # Get or register the blacklist Broadcast
    blacklist = getWordBlacklist(rdd.context)
    # Get or register the droppedWordsCounter Accumulator
    droppedWordsCounter = getDroppedWordsCounter(rdd.context)

    # Use blacklist to drop words and use droppedWordsCounter to count them
    def filterFunc(wordCount):
        if wordCount[0] in blacklist.value:
            droppedWordsCounter.add(wordCount[1])
            return False
        else:
            return True

    counts = "Counts at time %s %s" % (time, rdd.filter(filterFunc).collect())
    print(counts)

wordCounts.foreachRDD(echo)

DataFrame and SQL Operations
By creating a lazily instantiated singleton instance of SparkSession, DataFrame and SQL operations can be recovered after failures:
# Lazily instantiated global instance of SparkSession
def getSparkSessionInstance(sparkConf):
    if ('sparkSessionSingletonInstance' not in globals()):
        globals()['sparkSessionSingletonInstance'] = SparkSession \
            .builder \
            .config(conf=sparkConf) \
            .getOrCreate()
    return globals()['sparkSessionSingletonInstance']

...

# DataFrame operations inside your streaming program
words = ... # DStream of strings

def process(time, rdd):
    print("========= %s =========" % str(time))
    try:
        # Get the singleton instance of SparkSession
        spark = getSparkSessionInstance(rdd.context.getConf())
        # Convert RDD[String] to RDD[Row] to DataFrame
        rowRdd = rdd.map(lambda w: Row(word=w))
        wordsDataFrame = spark.createDataFrame(rowRdd)
        # Creates a temporary view using the DataFrame
        wordsDataFrame.createOrReplaceTempView("words")
        # Do word count on table using SQL and print it
        wordCountsDataFrame = spark.sql("select word, count(*) as total from words group by word")
        wordCountsDataFrame.show()
    except:
        pass

words.foreachRDD(process)

MLlib Operations
Streaming machine learning algorithms (e.g. Streaming Linear Regression, Streaming KMeans, etc.) can simultaneously learn from the streaming data and apply the model to the streaming data.
For a much larger class of machine learning algorithms, you can learn a model offline (i.e. using historical data) and then apply the model online to the streaming data.
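As a minimal sketch (not from the original post: the HDFS directories, input format, and feature dimension are hypothetical, and an existing StreamingContext `ssc` is assumed), streaming linear regression from pyspark.mllib can train on one DStream while predicting on another:

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.regression import LabeledPoint, StreamingLinearRegressionWithSGD

def parse(line):
    # assumed input format: "label,f1 f2 f3" -- adapt to the actual data
    label, features = line.split(",")
    return LabeledPoint(float(label), Vectors.dense([float(x) for x in features.split()]))

# hypothetical HDFS directories monitored for new files
trainingData = ssc.textFileStream("hdfs:///streaming/training").map(parse)
testData = ssc.textFileStream("hdfs:///streaming/test").map(parse)

model = StreamingLinearRegressionWithSGD(stepSize=0.1, numIterations=50)
model.setInitialWeights(Vectors.dense([0.0, 0.0, 0.0]))  # 3 features assumed

model.trainOn(trainingData)  # keep updating the model as new batches arrive
model.predictOnValues(testData.map(lambda lp: (lp.label, lp.features))).pprint()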


Caching / Persistence
DStreams also allow developers to persist the stream's data in memory: using the persist() method on a DStream will automatically persist every RDD of that DStream in memory.
For window-based operations like reduceByWindow and reduceByKeyAndWindow, and state-based operations like updateStateByKey, this is implicitly true (these operations cache their intermediate data automatically, without the developer calling persist()).
For input streams that receive data over the network (such as Kafka, Flume, sockets, etc.), the default persistence level is set to replicate the data to two nodes for fault tolerance (network-received data is kept in two copies by default).

You can mark an RDD to be persisted using the persist() or cache() methods on it.
Storage levels are set by passing a StorageLevel object to persist().
The cache() method is shorthand for the default storage level, which is StorageLevel.MEMORY_ONLY for RDDs.
Unlike RDDs, the default persistence level of DStreams keeps the data serialized in memory.
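A minimal sketch of persisting a DStream explicitly (assuming an existing StreamingContext `ssc`; the host and port are placeholders):

from pyspark import StorageLevel

lines = ssc.socketTextStream("localhost", 9999)  # placeholder source
lines.persist(StorageLevel.MEMORY_ONLY)          # persist every generated RDD of this DStream
# lines.cache() would use the DStream's default (serialized in-memory) level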



Checkpointing

Spark Streaming needs to checkpoint enough information to a fault-tolerant storage system such that it can recover from failures. There are two types of data that are checkpointed:
  1. Metadata checkpointing - Saving of the information defining the streaming computation to fault-tolerant storage like HDFS. This is used to recover from failure of the node running the driver of the streaming application. Metadata includes:
    Configuration - The configuration that was used to create the streaming application.
    DStream operations - The set of DStream operations that define the streaming application.
    Incomplete batches - Batches whose jobs are queued but have not completed yet.
  2. Data checkpointing - Saving of the generated RDDs to reliable storage. 


When to enable Checkpointing
  1. Usage of stateful transformations - If either updateStateByKey or reduceByKeyAndWindow (with inverse function) is used in the application, then the checkpoint directory must be provided to allow for periodic RDD checkpointing (see the sketch after this list).
  2. Recovering from failures of the driver running the application - Metadata checkpoints are used to recover with progress information.
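For instance, a minimal sketch of a stateful transformation that needs a checkpoint directory (assuming an existing StreamingContext `ssc` and a DStream `pairs` of (word, 1) tuples; the checkpoint path is a placeholder):

ssc.checkpoint("hdfs:///streaming/checkpoints")  # placeholder path

def updateFunc(newValues, runningCount):
    # add the new counts to the previous running count (None on the first batch)
    return sum(newValues) + (runningCount or 0)

runningCounts = pairs.updateStateByKey(updateFunc)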
How to configure Checkpointing
    Checkpointing can be enabled by setting a directory in a fault-tolerant, reliable file system (e.g., HDFS, S3, etc.) to which the checkpoint information will be saved.
    This is done by using streamingContext.checkpoint(checkpointDirectory).
    Additionally, if you want to make the application recover from driver failures, you should rewrite your streaming application to have the following behavior:
  1. When the program is being started for the first time, it will create a new StreamingContext, set up all the streams and then call start().
  2. When the program is being restarted after failure, it will re-create a StreamingContext from the checkpoint data in the checkpoint directory.
This behavior is made simple by using StreamingContext.getOrCreate
# Function to create and setup a new StreamingContext
def functionToCreateContext():
    sc = SparkContext(...)  # new context
    ssc = StreamingContext(...)
    lines = ssc.socketTextStream(...)  # create DStreams
    ...
    ssc.checkpoint(checkpointDirectory)  # set checkpoint directory
    return ssc

# Get StreamingContext from checkpoint data or create a new one
context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext)

# Do additional setup on context that needs to be done,
# irrespective of whether it is being started or restarted
context. ...

# Start the context
context.start()
context.awaitTermination()
If checkpointDirectory exists, the context will be recreated from the checkpoint data.
If the directory does not exist (i.e., the application is running for the first time), the function functionToCreateContext will be called to create a new context and set up the DStreams.
You can also explicitly create a StreamingContext from the checkpoint data and start the computation by using StreamingContext.getOrCreate(checkpointDirectory, None).
In addition to using getOrCreate, one also needs to ensure that the driver process gets restarted automatically on failure. This is further discussed in the Deployment section.

At small batch sizes (say 1 second), checkpointing every batch may significantly reduce operation throughput.
The default checkpoint interval is a multiple of the batch interval that is at least 10 seconds.
It can be set by using dstream.checkpoint(checkpointInterval).
Typically, a checkpoint interval of 5 - 10 sliding intervals of a DStream is a good setting to try.
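For example, a minimal sketch (assuming a 1-second batch interval and an existing stateful DStream `stateDstream`, both hypothetical):

# checkpoint the DStream's RDDs every 10 seconds (i.e. every 10 batches)
# instead of every batch, reducing the checkpointing overhead
stateDstream.checkpoint(10)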



Deploying Applications

Requirements
  • Cluster with a cluster manager
  • Package the application JAR
    If you are using spark-submit to start the application, then you will not need to provide Spark and Spark Streaming in the JAR. However, if your application uses advanced sources (e.g. Kafka, Flume), then you will have to package the extra artifact they link to, along with their dependencies, in the JAR that is used to deploy the application.
  • Configuring sufficient memory for the executors
    Note that if you are doing 10-minute window operations, the system has to keep at least the last 10 minutes of data in memory. So the memory requirements for the application depend on the operations used in it.
  • Configuring checkpointing
  • Configuring automatic restart of the application driver
    • Spark Standalone 
      the Standalone cluster manager can be instructed to supervise the driver, and relaunch it if the driver fails either due to non-zero exit code, or due to failure of the node running the driver.
    • YARN automatically restarting an application
    • Mesos  Marathon has been used to achieve this with Mesos
  • Configuring write ahead logs
     If enabled, all the data received from a receiver gets written into a write ahead log in the configured checkpoint directory (see the configuration sketch after this list).
  • Setting the max receiving rate
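The last two bullets map to SparkConf settings. A minimal sketch (the app name and values are placeholders, not tuned recommendations):

from pyspark import SparkConf

conf = (SparkConf()
        .setAppName("StreamingApp")                                     # placeholder app name
        .set("spark.streaming.receiver.writeAheadLog.enable", "true")   # enable write ahead logs
        .set("spark.streaming.receiver.maxRate", "10000"))              # max records/sec per receiver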


Upgrading Application Code
There are two mechanisms for upgrading a running Spark Streaming application with new code:
  1. The upgraded application is started and run in parallel with the existing application. Once the new one (receiving the same data as the old one) has been warmed up and is ready for prime time, the old one can be brought down. This requires the data source to be able to send data to two destinations (the old and the new applications).
  2. The existing application is shut down gracefully, that is, data that has been received is completely processed before shutdown (see the sketch after this list). Then the upgraded application can be started, and it will start processing from the same point where the earlier application left off. This works only with input sources whose data can be buffered (such as Kafka) while the old application is down and the new one is not yet up.
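A minimal sketch of the graceful shutdown in option 2 (assuming an existing StreamingContext `ssc`):

# stop the StreamingContext (and the underlying SparkContext) only after
# all received data has been processed
ssc.stop(stopSparkContext=True, stopGraceFully=True)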


Monitoring Applications
 


Performance Tuning

Goals / approaches:
  1. Reducing the processing time of each batch of data by efficiently using cluster resources.
  2. Setting the right batch size such that the batches of data can be processed as fast as they are received (that is, data processing keeps up with the data ingestion).

Level of Parallelism in Data Receiving
Level of Parallelism in Data Processing
