Please credit the original source when reposting: http://www.cnblogs.com/dongxiao-yang/p/7994357.html

Spark Streaming periodically checkpoints the DStreamGraph and JobScheduler state, recording changes to the DStreamGraph and the completion status of each batch's jobs. The checkpoint interval defaults to the batchDuration: a checkpoint is taken each time a batch fires and its jobs have been submitted, and again when a finished job updates the batch status.
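For context, checkpointing is opt-in and must be enabled on the StreamingContext before any of the machinery below runs. A minimal sketch, with an illustrative application name and HDFS path (not from the original post):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  val conf = new SparkConf().setAppName("checkpoint-demo") // illustrative name
  val ssc = new StreamingContext(conf, Seconds(5))         // batchDuration = 5 seconds

  // Metadata checkpoints need a reliable directory, typically on HDFS.
  ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")       // illustrative path

  // A stream's checkpoint interval can also be tuned explicitly; otherwise
  // it follows the batch-aligned default described above.
  // someDStream.checkpoint(Seconds(20))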

1. Checkpoint generation

Job generation

  private def generateJobs(time: Time) {
    // Checkpoint all RDDs marked for checkpointing to ensure their lineages are
    // truncated periodically. Otherwise, we may run into stack overflows (SPARK-6847).
    ssc.sparkContext.setLocalProperty(RDD.CHECKPOINT_ALL_MARKED_ANCESTORS, "true")
    Try {
      jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
      graph.generateJobs(time) // generate jobs using allocated block
    } match {
      case Success(jobs) =>
        val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
        jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
      case Failure(e) =>
        jobScheduler.reportError("Error generating jobs for time " + time, e)
        PythonDStream.stopStreamingContextIfPythonProcessIsDead(e)
    }
    eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
  }

Job completion

  private def clearMetadata(time: Time) {
    ssc.graph.clearMetadata(time)

    // If checkpointing is enabled, then checkpoint,
    // else mark batch to be fully processed
    if (shouldCheckpoint) {
      eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
    } else {
      // If checkpointing is not enabled, then delete metadata information about
      // received blocks (block data not saved in any case). Otherwise, wait for
      // checkpointing of this batch to complete.
      val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
      jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
      jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
      markBatchFullyProcessed(time)
    }
  }

The eventLoop above is a message-event queue wrapped inside JobGenerator; it runs a background thread that continuously consumes posted events. A DoCheckpoint event therefore flows through processEvent -> doCheckpoint, where checkpointWriter writes the resulting Checkpoint object to external storage:
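To make that path concrete, here is a minimal sketch of the event-loop pattern JobGenerator builds on. It is simplified from the idea behind Spark's internal EventLoop class, not the real implementation:

  import java.util.concurrent.LinkedBlockingDeque

  // A daemon thread drains the queue and hands each event to onReceive,
  // which plays the role of processEvent below.
  abstract class SimpleEventLoop[E](name: String) {
    private val queue = new LinkedBlockingDeque[E]()
    private val thread = new Thread(name) {
      override def run(): Unit = {
        while (!Thread.currentThread().isInterrupted) {
          onReceive(queue.take()) // blocks until an event is posted
        }
      }
    }
    thread.setDaemon(true)

    def start(): Unit = thread.start()
    def post(event: E): Unit = queue.put(event) // what eventLoop.post(...) does
    protected def onReceive(event: E): Unit
  }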

  /** Processes all events */
  private def processEvent(event: JobGeneratorEvent) {
    logDebug("Got event " + event)
    event match {
      case GenerateJobs(time) => generateJobs(time)
      case ClearMetadata(time) => clearMetadata(time)
      case DoCheckpoint(time, clearCheckpointDataLater) =>
        doCheckpoint(time, clearCheckpointDataLater)
      case ClearCheckpointData(time) => clearCheckpointData(time)
    }
  }

  /** Perform checkpoint for the given `time`. */
  private def doCheckpoint(time: Time, clearCheckpointDataLater: Boolean) {
    if (shouldCheckpoint && (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)) {
      logInfo("Checkpointing graph for time " + time)
      ssc.graph.updateCheckpointData(time)
      checkpointWriter.write(new Checkpoint(ssc, time), clearCheckpointDataLater)
    }
  }
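Note the guard in doCheckpoint: a DoCheckpoint event is posted for every batch, but a Checkpoint is written only when the batch time is an integral multiple of ssc.checkpointDuration. A small illustration with invented times (zeroTime at 0s, 5-second batches, checkpointDuration tuned to 15s):

  import org.apache.spark.streaming.{Seconds, Time}

  val zeroTime = Time(0L)
  val checkpointDuration = Seconds(15)

  // DoCheckpoint fires at 5s, 10s, 15s, ... but only multiples of
  // checkpointDuration pass the guard and reach checkpointWriter.write.
  Seq(5000L, 10000L, 15000L, 20000L, 30000L).map(Time(_)).foreach { t =>
    val writes = (t - zeroTime).isMultipleOf(checkpointDuration)
    println(s"$t -> write checkpoint? $writes") // true only at 15s and 30s
  }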

Before doCheckpoint asks checkpointWriter to write data to HDFS, it first runs ssc.graph.updateCheckpointData(time). This method updates the checkpointData attribute of every input and output stream in the DStreamGraph, via the call chain DStreamGraph.updateCheckpointData -> DStream.updateCheckpointData -> checkpointData.update:

  def updateCheckpointData(time: Time) {
    logInfo("Updating checkpoint data for time " + time)
    this.synchronized {
      outputStreams.foreach(_.updateCheckpointData(time))
    }
    logInfo("Updated checkpoint data for time " + time)
  }

  private[streaming] def updateCheckpointData(currentTime: Time) {
    logDebug(s"Updating checkpoint data for time $currentTime")
    checkpointData.update(currentTime)
    dependencies.foreach(_.updateCheckpointData(currentTime))
    logDebug(s"Updated checkpoint data for time $currentTime: $checkpointData")
  }

private[streaming]
class DirectKafkaInputDStreamCheckpointData extends DStreamCheckpointData(this) {
  def batchForTime: mutable.HashMap[Time, Array[(String, Int, Long, Long)]] = {
    data.asInstanceOf[mutable.HashMap[Time, Array[OffsetRange.OffsetRangeTuple]]]
  }

  override def update(time: Time): Unit = {
    batchForTime.clear()
    generatedRDDs.foreach { kv =>
      val a = kv._2.asInstanceOf[KafkaRDD[K, V]].offsetRanges.map(_.toTuple).toArray
      batchForTime += kv._1 -> a
    }
  }

  override def cleanup(time: Time): Unit = { }

  override def restore(): Unit = {
    batchForTime.toSeq.sortBy(_._1)(Time.ordering).foreach { case (t, b) =>
      logInfo(s"Restoring KafkaRDD for time $t ${b.mkString("[", ", ", "]")}")
      generatedRDDs += t -> new KafkaRDD[K, V](
        context.sparkContext,
        executorKafkaParams,
        b.map(OffsetRange(_)),
        getPreferredHosts,
        // during restore, it's possible same partition will be consumed from multiple
        // threads, so dont use cache
        false
      )
    }
  }
}

Taking DirectKafkaInputDStream as the example: it overrides checkpointData's update and related hooks, so before each checkpoint it stores the topic, partition, fromOffset and untilOffset of every running KafkaRDD into the data HashMap inside checkpointData, ready to be serialized when the checkpoint is written.
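Concretely, after update runs, data holds one entry per batch time, and each value is the serializable tuple form of an OffsetRange: (topic, partition, fromOffset, untilOffset). An illustrative snapshot of the batchForTime view (topic, times and offsets are invented):

  import scala.collection.mutable
  import org.apache.spark.streaming.Time

  // What checkpointData's batchForTime view might look like for two batches.
  val batchForTime = mutable.HashMap[Time, Array[(String, Int, Long, Long)]](
    Time(1512000000000L) -> Array(("orders", 0, 100L, 180L), ("orders", 1, 96L, 175L)),
    Time(1512000005000L) -> Array(("orders", 0, 180L, 260L), ("orders", 1, 175L, 251L))
  )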

A Checkpoint object contains the following fields:

class Checkpoint(ssc: StreamingContext, val checkpointTime: Time)
  extends Logging with Serializable {
  val master = ssc.sc.master
  val framework = ssc.sc.appName
  val jars = ssc.sc.jars
  val graph = ssc.graph
  val checkpointDir = ssc.checkpointDir
  val checkpointDuration = ssc.checkpointDuration
  val pendingTimes = ssc.scheduler.getPendingTimes().toArray
  val sparkConfPairs = ssc.conf.getAll

2. Recovering a service from a checkpoint

A Spark Streaming application that enables checkpointing must structure its StreamingContext exactly as the official demo does: all streaming logic lives in a createContext function that returns the StreamingContext, and the context is obtained through StreamingContext.getOrCreate. When CheckpointReader.read finds a checkpoint file and successfully deserializes a Checkpoint object, getOrCreate returns a StreamingContext built from that checkpoint and createContext is never invoked; conversely, on a first start the StreamingContext is initialized through createContext.
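A minimal sketch of that required structure, modeled on the official recoverable-application demo (the path, app name and batch interval are placeholders):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  val checkpointDir = "hdfs:///tmp/streaming-checkpoint" // illustrative path

  // ALL stream setup must live inside this factory. On a restart with a
  // valid checkpoint, getOrCreate deserializes the old graph and never
  // calls this function.
  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("recoverable-app")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint(checkpointDir)
    // define input DStreams and transformations here ...
    ssc
  }

  val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
  ssc.start()
  ssc.awaitTermination()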

  def getOrCreate(
      checkpointPath: String,
      creatingFunc: () => StreamingContext,
      hadoopConf: Configuration = SparkHadoopUtil.get.conf,
      createOnError: Boolean = false
    ): StreamingContext = {
    val checkpointOption = CheckpointReader.read(
      checkpointPath, new SparkConf(), hadoopConf, createOnError)
    checkpointOption.map(new StreamingContext(null, _, null)).getOrElse(creatingFunc())
  }

  def read(
      checkpointDir: String,
      conf: SparkConf,
      hadoopConf: Configuration,
      ignoreReadError: Boolean = false): Option[Checkpoint] = {
    val checkpointPath = new Path(checkpointDir)
    val fs = checkpointPath.getFileSystem(hadoopConf)

    // Try to find the checkpoint files
    val checkpointFiles = Checkpoint.getCheckpointFiles(checkpointDir, Some(fs)).reverse
    if (checkpointFiles.isEmpty) {
      return None
    }

    // Try to read the checkpoint files in the order
    logInfo(s"Checkpoint files found: ${checkpointFiles.mkString(",")}")
    var readError: Exception = null
    checkpointFiles.foreach { file =>
      logInfo(s"Attempting to load checkpoint from file $file")
      try {
        val fis = fs.open(file)
        val cp = Checkpoint.deserialize(fis, conf)
        logInfo(s"Checkpoint successfully loaded from file $file")
        logInfo(s"Checkpoint was generated at time ${cp.checkpointTime}")
        return Some(cp)
      } catch {
        case e: Exception =>
          readError = e
          logWarning(s"Error reading checkpoint from file $file", e)
      }
    }

    // If none of checkpoint files could be read, then throw exception
    if (!ignoreReadError) {
      throw new SparkException(
        s"Failed to read checkpoint from directory $checkpointPath", readError)
    }
    None
  }

During recovery, the DStreamGraph itself is restored from the checkpoint. The code below follows the call path StreamingContext.graph -> DStreamGraph.restoreCheckpointData -> DStream.restoreCheckpointData -> checkpointData.restore:

  private[streaming] val graph: DStreamGraph = {
    if (isCheckpointPresent) {
      _cp.graph.setContext(this)
      _cp.graph.restoreCheckpointData()
      _cp.graph
    } else {
      require(_batchDur != null, "Batch duration for StreamingContext cannot be null")
      val newGraph = new DStreamGraph()
      newGraph.setBatchDuration(_batchDur)
      newGraph
    }
  }

  def restoreCheckpointData() {
    logInfo("Restoring checkpoint data")
    this.synchronized {
      outputStreams.foreach(_.restoreCheckpointData())
    }
    logInfo("Restored checkpoint data")
  }

  private[streaming] def restoreCheckpointData() {
    if (!restoredFromCheckpointData) {
      // Create RDDs from the checkpoint data
      logInfo("Restoring checkpoint data")
      checkpointData.restore()
      dependencies.foreach(_.restoreCheckpointData())
      restoredFromCheckpointData = true
      logInfo("Restored checkpoint data")
    }
  }

  override def restore(): Unit = {
    batchForTime.toSeq.sortBy(_._1)(Time.ordering).foreach { case (t, b) =>
      logInfo(s"Restoring KafkaRDD for time $t ${b.mkString("[", ", ", "]")}")
      generatedRDDs += t -> new KafkaRDD[K, V](
        context.sparkContext,
        executorKafkaParams,
        b.map(OffsetRange(_)),
        getPreferredHosts,
        // during restore, it's possible same partition will be consumed from multiple
        // threads, so dont use cache
        false
      )
    }
  }

Again using DirectKafkaInputDStreamCheckpointData as the example, restore reads the per-batch information saved in checkpoint.data (described above) and rebuilds the KafkaRDDs that were in flight when the application stopped.
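For reference, each saved tuple maps straight back onto the public OffsetRange API when the KafkaRDD is rebuilt; a tiny sketch with invented values:

  import org.apache.spark.streaming.kafka010.OffsetRange

  // One checkpointed tuple: (topic, partition, fromOffset, untilOffset).
  val tuple = ("orders", 0, 100L, 180L)
  val range = OffsetRange(tuple._1, tuple._2, tuple._3, tuple._4)
  println(s"${range.topic}-${range.partition}: ${range.fromOffset}..${range.untilOffset}")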

  private def restart() {
    // If manual clock is being used for testing, then
    // either set the manual clock to the last checkpointed time,
    // or if the property is defined set it to that time
    if (clock.isInstanceOf[ManualClock]) {
      val lastTime = ssc.initialCheckpoint.checkpointTime.milliseconds
      val jumpTime = ssc.sc.conf.getLong("spark.streaming.manualClock.jump", 0)
      clock.asInstanceOf[ManualClock].setTime(lastTime + jumpTime)
    }

    val batchDuration = ssc.graph.batchDuration

    // Batches when the master was down, that is,
    // between the checkpoint and current restart time
    val checkpointTime = ssc.initialCheckpoint.checkpointTime
    val restartTime = new Time(timer.getRestartTime(graph.zeroTime.milliseconds))
    val downTimes = checkpointTime.until(restartTime, batchDuration)
    logInfo("Batches during down time (" + downTimes.size + " batches): "
      + downTimes.mkString(", "))

    // Batches that were unprocessed before failure
    val pendingTimes = ssc.initialCheckpoint.pendingTimes.sorted(Time.ordering)
    logInfo("Batches pending processing (" + pendingTimes.length + " batches): " +
      pendingTimes.mkString(", "))

    // Reschedule jobs for these times
    val timesToReschedule = (pendingTimes ++ downTimes).filter { _ < restartTime }
      .distinct.sorted(Time.ordering)
    logInfo("Batches to reschedule (" + timesToReschedule.length + " batches): " +
      timesToReschedule.mkString(", "))
    timesToReschedule.foreach { time =>
      // Allocate the related blocks when recovering from failure, because some blocks that were
      // added but not allocated, are dangling in the queue after recovering, we have to allocate
      // those blocks to the next batch, which is the batch they were supposed to go.
      jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
      jobScheduler.submitJobSet(JobSet(time, graph.generateJobs(time)))
    }

    // Restart the timer
    timer.start(restartTime.milliseconds)
    logInfo("Restarted JobGenerator at " + restartTime)
  }

Finally, during restart the JobGenerator uses the current time and the time the application last stopped to work out how many batches were never generated while it was down (downTimes), adds the batches whose jobs were still running when the previous run died (pendingTimes), and reschedules all of them so the missed work is re-run.
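As a worked illustration of that computation (all times invented): with 5-second batches, a checkpoint taken at t=100s and a computed restart time of t=120s, downTimes covers the batches at 100s through 115s, and any overlap with pendingTimes is removed by distinct:

  import org.apache.spark.streaming.{Seconds, Time}

  val batchDuration = Seconds(5)
  val checkpointTime = Time(100000L) // last checkpoint, t = 100s
  val restartTime = Time(120000L)    // next aligned batch after restart

  // Batches missed while the driver was down: 100s, 105s, 110s, 115s.
  val downTimes = checkpointTime.until(restartTime, batchDuration)

  // Batches submitted but unfinished before the crash, taken from the
  // checkpoint's pendingTimes; say the 95s and 100s batches.
  val pendingTimes = Seq(Time(95000L), Time(100000L))

  // 100s appears in both lists; distinct keeps it from running twice.
  val timesToReschedule =
    (pendingTimes ++ downTimes).filter(_ < restartTime).distinct.sorted(Time.ordering)
  // -> 95s, 100s, 105s, 110s, 115s, each resubmitted via submitJobSet.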

