Please credit the original source when reposting: http://www.cnblogs.com/dongxiao-yang/p/7994357.html

Spark Streaming periodically checkpoints the DStreamGraph and JobScheduler to record changes to the DStreamGraph and the completion status of each batch's jobs. The checkpoint interval defaults to batchDuration: a checkpoint is triggered each time a batch is started and its jobs are submitted for execution, and another checkpoint is taken when those jobs complete and the task state is updated.
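For context, enabling checkpointing on the application side only requires pointing the StreamingContext at a reliable directory. A minimal sketch, assuming a 10-second batch interval and a placeholder HDFS path:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("checkpoint-demo")
    // Batch interval of 10 seconds; the metadata checkpoint interval
    // described above follows this value by default.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Enabling checkpointing is just a matter of supplying a reliable
    // directory (HDFS, S3, ...); the path here is a placeholder.
    ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")

    // For data checkpointing of a stateful DStream, the interval can be
    // tuned separately, typically to a multiple of the batch interval:
    // someStateDStream.checkpoint(Seconds(50))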

1. Checkpoint generation

Job generation

    private def generateJobs(time: Time) {
      // Checkpoint all RDDs marked for checkpointing to ensure their lineages are
      // truncated periodically. Otherwise, we may run into stack overflows (SPARK-6847).
      ssc.sparkContext.setLocalProperty(RDD.CHECKPOINT_ALL_MARKED_ANCESTORS, "true")
      Try {
        jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
        graph.generateJobs(time) // generate jobs using allocated block
      } match {
        case Success(jobs) =>
          val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
          jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
        case Failure(e) =>
          jobScheduler.reportError("Error generating jobs for time " + time, e)
          PythonDStream.stopStreamingContextIfPythonProcessIsDead(e)
      }
      eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
    }

Job completion

    private def clearMetadata(time: Time) {
      ssc.graph.clearMetadata(time)

      // If checkpointing is enabled, then checkpoint,
      // else mark batch to be fully processed
      if (shouldCheckpoint) {
        eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
      } else {
        // If checkpointing is not enabled, then delete metadata information about
        // received blocks (block data not saved in any case). Otherwise, wait for
        // checkpointing of this batch to complete.
        val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
        jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
        jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
        markBatchFullyProcessed(time)
      }
    }

The eventLoop referenced above is a wrapper around a message/event queue inside JobGenerator; a background thread inside it keeps consuming posted events. A DoCheckpoint event therefore flows through processEvent -> doCheckpoint, where checkpointWriter writes the generated Checkpoint object out to external storage:

    /** Processes all events */
    private def processEvent(event: JobGeneratorEvent) {
      logDebug("Got event " + event)
      event match {
        case GenerateJobs(time) => generateJobs(time)
        case ClearMetadata(time) => clearMetadata(time)
        case DoCheckpoint(time, clearCheckpointDataLater) =>
          doCheckpoint(time, clearCheckpointDataLater)
        case ClearCheckpointData(time) => clearCheckpointData(time)
      }
    }

    /** Perform checkpoint for the give `time`. */
    private def doCheckpoint(time: Time, clearCheckpointDataLater: Boolean) {
      if (shouldCheckpoint && (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)) {
        logInfo("Checkpointing graph for time " + time)
        ssc.graph.updateCheckpointData(time)
        checkpointWriter.write(new Checkpoint(ssc, time), clearCheckpointDataLater)
      }
    }
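The eventLoop that dispatches these events is essentially a blocking queue drained by a daemon thread. A simplified sketch of that pattern follows; this is not Spark's actual org.apache.spark.util.EventLoop, which additionally handles errors and orderly shutdown:

    import java.util.concurrent.LinkedBlockingDeque

    // Simplified sketch of the queue-plus-daemon-thread pattern behind
    // JobGenerator's eventLoop; names and details here are illustrative.
    abstract class SimpleEventLoop[E](name: String) {
      private val eventQueue = new LinkedBlockingDeque[E]()
      @volatile private var stopped = false

      private val eventThread = new Thread(name) {
        setDaemon(true)
        override def run(): Unit = {
          try {
            while (!stopped) {
              val event = eventQueue.take() // blocks until an event is posted
              onReceive(event)              // e.g. JobGenerator.processEvent
            }
          } catch {
            case _: InterruptedException => // loop interrupted by stop()
          }
        }
      }

      def start(): Unit = eventThread.start()
      def stop(): Unit = { stopped = true; eventThread.interrupt() }
      def post(event: E): Unit = eventQueue.put(event)

      /** Implemented by the owner, e.g. dispatch on GenerateJobs / DoCheckpoint. */
      protected def onReceive(event: E): Unit
    }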

Before doCheckpoint asks checkpointWriter to write the data to HDFS, it first calls ssc.graph.updateCheckpointData(time). This method updates the checkpointData attribute of every input and output stream in the DStreamGraph, along the call chain DStreamGraph.updateCheckpointData -> DStream.updateCheckpointData -> checkpointData.update:

    def updateCheckpointData(time: Time) {
      logInfo("Updating checkpoint data for time " + time)
      this.synchronized {
        outputStreams.foreach(_.updateCheckpointData(time))
      }
      logInfo("Updated checkpoint data for time " + time)
    }

    private[streaming] def updateCheckpointData(currentTime: Time) {
      logDebug(s"Updating checkpoint data for time $currentTime")
      checkpointData.update(currentTime)
      dependencies.foreach(_.updateCheckpointData(currentTime))
      logDebug(s"Updated checkpoint data for time $currentTime: $checkpointData")
    }

    private[streaming]
    class DirectKafkaInputDStreamCheckpointData extends DStreamCheckpointData(this) {
      def batchForTime: mutable.HashMap[Time, Array[(String, Int, Long, Long)]] = {
        data.asInstanceOf[mutable.HashMap[Time, Array[OffsetRange.OffsetRangeTuple]]]
      }

      override def update(time: Time): Unit = {
        batchForTime.clear()
        generatedRDDs.foreach { kv =>
          val a = kv._2.asInstanceOf[KafkaRDD[K, V]].offsetRanges.map(_.toTuple).toArray
          batchForTime += kv._1 -> a
        }
      }

      override def cleanup(time: Time): Unit = { }

      override def restore(): Unit = {
        batchForTime.toSeq.sortBy(_._1)(Time.ordering).foreach { case (t, b) =>
          logInfo(s"Restoring KafkaRDD for time $t ${b.mkString("[", ", ", "]")}")
          generatedRDDs += t -> new KafkaRDD[K, V](
            context.sparkContext,
            executorKafkaParams,
            b.map(OffsetRange(_)),
            getPreferredHosts,
            // during restore, it's possible same partition will be consumed from multiple
            // threads, so dont use cache
            false
          )
        }
      }
    }

Taking DirectKafkaInputDStream as an example: it overrides update and the other hooks of checkpointData, so before each checkpoint the topic, partition, fromOffset and untilOffset of every currently generated KafkaRDD are stored in the data HashMap of checkpointData, ready to be serialized when the checkpoint is written.
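To make the stored shape concrete, each tuple has the form (topic, partition, fromOffset, untilOffset). A hypothetical snapshot of that map for a single batch (times represented by their millisecond value, offsets invented purely for illustration) could look like:

    import scala.collection.mutable

    // Hypothetical contents of checkpointData.data for one batch of a
    // 3-partition topic "events"; all numbers are made up.
    val batchForTime = mutable.HashMap[Long, Array[(String, Int, Long, Long)]](
      1512000000000L -> Array(
        ("events", 0, 12000L, 12550L), // partition 0: fromOffset -> untilOffset
        ("events", 1, 11800L, 12410L),
        ("events", 2, 12100L, 12700L)
      )
    )
    // On restore() each tuple is turned back into an OffsetRange and a KafkaRDD
    // is rebuilt for the corresponding batch time.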

The fields contained in a Checkpoint object are listed below:

    class Checkpoint(ssc: StreamingContext, val checkpointTime: Time)
      extends Logging with Serializable {
      val master = ssc.sc.master
      val framework = ssc.sc.appName
      val jars = ssc.sc.jars
      val graph = ssc.graph
      val checkpointDir = ssc.checkpointDir
      val checkpointDuration = ssc.checkpointDuration
      val pendingTimes = ssc.scheduler.getPendingTimes().toArray
      val sparkConfPairs = ssc.conf.getAll

2. Recovering the service from a checkpoint

To use checkpointing, application code must build the StreamingContext exactly the way the official demo does: all streaming logic goes inside a createContext method that returns a StreamingContext, and initialization goes through StreamingContext.getOrCreate. When CheckpointReader.read finds a checkpoint file and successfully deserializes a Checkpoint object, getOrCreate returns a StreamingContext built from that checkpoint and createContext is not called; conversely, on the very first start of the application the StreamingContext is initialized through createContext.
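A minimal sketch of that usage pattern (the checkpoint directory and the streaming logic inside createContext are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val checkpointDir = "hdfs:///tmp/streaming-checkpoint" // placeholder path

    def createContext(): StreamingContext = {
      val conf = new SparkConf().setAppName("checkpoint-recovery-demo")
      val ssc = new StreamingContext(conf, Seconds(10))
      ssc.checkpoint(checkpointDir)
      // ... build the complete DStream lineage here (sources, transformations, outputs) ...
      ssc
    }

    // First start: createContext() runs. Restart: the context (including the
    // DStreamGraph) is rebuilt from the checkpoint files and createContext()
    // is never invoked.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()

The getOrCreate and CheckpointReader.read implementations are as follows: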

    def getOrCreate(
        checkpointPath: String,
        creatingFunc: () => StreamingContext,
        hadoopConf: Configuration = SparkHadoopUtil.get.conf,
        createOnError: Boolean = false
      ): StreamingContext = {
      val checkpointOption = CheckpointReader.read(
        checkpointPath, new SparkConf(), hadoopConf, createOnError)
      checkpointOption.map(new StreamingContext(null, _, null)).getOrElse(creatingFunc())
    }

    def read(
        checkpointDir: String,
        conf: SparkConf,
        hadoopConf: Configuration,
        ignoreReadError: Boolean = false): Option[Checkpoint] = {
      val checkpointPath = new Path(checkpointDir)

      val fs = checkpointPath.getFileSystem(hadoopConf)

      // Try to find the checkpoint files
      val checkpointFiles = Checkpoint.getCheckpointFiles(checkpointDir, Some(fs)).reverse
      if (checkpointFiles.isEmpty) {
        return None
      }

      // Try to read the checkpoint files in the order
      logInfo(s"Checkpoint files found: ${checkpointFiles.mkString(",")}")
      var readError: Exception = null
      checkpointFiles.foreach { file =>
        logInfo(s"Attempting to load checkpoint from file $file")
        try {
          val fis = fs.open(file)
          val cp = Checkpoint.deserialize(fis, conf)
          logInfo(s"Checkpoint successfully loaded from file $file")
          logInfo(s"Checkpoint was generated at time ${cp.checkpointTime}")
          return Some(cp)
        } catch {
          case e: Exception =>
            readError = e
            logWarning(s"Error reading checkpoint from file $file", e)
        }
      }

      // If none of checkpoint files could be read, then throw exception
      if (!ignoreReadError) {
        throw new SparkException(
          s"Failed to read checkpoint from directory $checkpointPath", readError)
      }
      None
    }

During recovery the DStreamGraph itself is restored from the checkpoint. The call path through the code below is StreamingContext.graph -> DStreamGraph.restoreCheckpointData -> DStream.restoreCheckpointData -> checkpointData.restore:

    private[streaming] val graph: DStreamGraph = {
      if (isCheckpointPresent) {
        _cp.graph.setContext(this)
        _cp.graph.restoreCheckpointData()
        _cp.graph
      } else {
        require(_batchDur != null, "Batch duration for StreamingContext cannot be null")
        val newGraph = new DStreamGraph()
        newGraph.setBatchDuration(_batchDur)
        newGraph
      }
    }

    def restoreCheckpointData() {
      logInfo("Restoring checkpoint data")
      this.synchronized {
        outputStreams.foreach(_.restoreCheckpointData())
      }
      logInfo("Restored checkpoint data")
    }

    private[streaming] def restoreCheckpointData() {
      if (!restoredFromCheckpointData) {
        // Create RDDs from the checkpoint data
        logInfo("Restoring checkpoint data")
        checkpointData.restore()
        dependencies.foreach(_.restoreCheckpointData())
        restoredFromCheckpointData = true
        logInfo("Restored checkpoint data")
      }
    }

    override def restore(): Unit = {
      batchForTime.toSeq.sortBy(_._1)(Time.ordering).foreach { case (t, b) =>
        logInfo(s"Restoring KafkaRDD for time $t ${b.mkString("[", ", ", "]")}")
        generatedRDDs += t -> new KafkaRDD[K, V](
          context.sparkContext,
          executorKafkaParams,
          b.map(OffsetRange(_)),
          getPreferredHosts,
          // during restore, it's possible same partition will be consumed from multiple
          // threads, so dont use cache
          false
        )
      }
    }

Again taking DirectKafkaInputDStreamCheckpointData as an example, restore reads the runtime information saved in checkpoint.data (see above) and rebuilds the KafkaRDDs that existed when the application stopped. After the graph has been restored, JobGenerator.restart decides which batches need to be re-run:

    private def restart() {
      // If manual clock is being used for testing, then
      // either set the manual clock to the last checkpointed time,
      // or if the property is defined set it to that time
      if (clock.isInstanceOf[ManualClock]) {
        val lastTime = ssc.initialCheckpoint.checkpointTime.milliseconds
        val jumpTime = ssc.sc.conf.getLong("spark.streaming.manualClock.jump", 0)
        clock.asInstanceOf[ManualClock].setTime(lastTime + jumpTime)
      }

      val batchDuration = ssc.graph.batchDuration

      // Batches when the master was down, that is,
      // between the checkpoint and current restart time
      val checkpointTime = ssc.initialCheckpoint.checkpointTime
      val restartTime = new Time(timer.getRestartTime(graph.zeroTime.milliseconds))
      val downTimes = checkpointTime.until(restartTime, batchDuration)
      logInfo("Batches during down time (" + downTimes.size + " batches): "
        + downTimes.mkString(", "))

      // Batches that were unprocessed before failure
      val pendingTimes = ssc.initialCheckpoint.pendingTimes.sorted(Time.ordering)
      logInfo("Batches pending processing (" + pendingTimes.length + " batches): " +
        pendingTimes.mkString(", "))
      // Reschedule jobs for these times
      val timesToReschedule = (pendingTimes ++ downTimes).filter { _ < restartTime }
        .distinct.sorted(Time.ordering)
      logInfo("Batches to reschedule (" + timesToReschedule.length + " batches): " +
        timesToReschedule.mkString(", "))
      timesToReschedule.foreach { time =>
        // Allocate the related blocks when recovering from failure, because some blocks that were
        // added but not allocated, are dangling in the queue after recovering, we have to allocate
        // those blocks to the next batch, which is the batch they were supposed to go.
        jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
        jobScheduler.submitJobSet(JobSet(time, graph.generateJobs(time)))
      }

      // Restart the timer
      timer.start(restartTime.milliseconds)
      logInfo("Restarted JobGenerator at " + restartTime)
    }

Finally, during restart, JobGenerator uses the current time and the time at which the application previously stopped to work out how many batches were never generated while it was down, adds the batches that were still in flight when it died, and reschedules all of them so the missed work is re-run.
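As a concrete, made-up illustration of that computation: with a 10-second batch interval, a last checkpoint at t = 100s and a restart boundary of t = 140s, the rescheduled set combines the down-time batches with whatever was pending in the checkpoint:

    // Illustrative numbers only; in restart() these are Time/Duration objects.
    val batchMs      = 10000L   // batchDuration
    val checkpointMs = 100000L  // checkpointTime of the last checkpoint
    val restartMs    = 140000L  // restartTime computed from the timer

    // Mirrors checkpointTime.until(restartTime, batchDuration):
    val downTimes = checkpointMs until restartMs by batchMs   // 100s, 110s, 120s, 130s

    // Batches recorded as pending in the checkpoint (values invented here):
    val pendingTimes = Seq(90000L, 100000L)

    // Mirrors timesToReschedule in restart():
    val timesToReschedule =
      (pendingTimes ++ downTimes).filter(_ < restartMs).distinct.sorted
    // -> 90s, 100s, 110s, 120s, 130s: five batch jobs are re-submitted on restart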

