5. Spark Streaming: Source Code Analysis of the Streaming Framework's Runtime Flow (Part 2)
object OnlineTheTop3ItemForEachCategory2DB {
  def main(args: Array[String]){
    val conf = new SparkConf() // create the SparkConf object
    // set the application name, shown on the monitoring UI while the program runs
    conf.setAppName("OnlineTheTop3ItemForEachCategory2DB")
    conf.setMaster("spark://Master:7077") // the program runs on a Spark cluster
    // set batchDuration to control how often Jobs are generated, and create the Spark Streaming entry point
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/root/Documents/SparkApps/checkpoint")
    val socketDStream = ssc.socketTextStream("Master", 9999)
    /// business logic .....
    ssc.start()
    ssc.awaitTermination()
  }
}
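The business logic is elided above. As a hedged illustration only, the "top-3 hottest items per category" computation might look like the sketch below; the input format `user item category` and all names here are assumptions, not taken from the original:

// Hypothetical sketch: top-3 items per category over a 60s window sliding every 20s.
// Assumes each input line is "user item category"; the real case writes results to a DB.
val categoryItemPairs = socketDStream.map { line =>
  val fields = line.split(" ")
  (fields(2) + "_" + fields(1), 1) // key: category_item
}
val itemCounts = categoryItemPairs.reduceByKeyAndWindow(
  _ + _, _ - _, Seconds(60), Seconds(20)) // requires the checkpoint directory set above

itemCounts.foreachRDD { rdd =>
  rdd.map { case (categoryItem, count) =>
    val Array(category, item) = categoryItem.split("_")
    (category, (item, count))
  }.groupByKey()
    .mapValues(_.toList.sortBy(-_._2).take(3)) // keep the top 3 per category
    .collect()
    .foreach(println) // stand-in for the database write
}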
2.1 Creating the StreamingContext
1) The StreamingContext constructor used in the example internally creates a new SparkContext:
/**
 * Create a StreamingContext by providing the configuration necessary for a new SparkContext.
 * @param conf a org.apache.spark.SparkConf object specifying Spark parameters
 * @param batchDuration the time interval at which streaming data will be divided into batches
 */
def this(conf: SparkConf, batchDuration: Duration) = {
  this(StreamingContext.createNewSparkContext(conf), null, batchDuration)
}
private[streaming] def createNewSparkContext(conf: SparkConf): SparkContext = {
  new SparkContext(conf)
}
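For reference, a StreamingContext can also wrap an existing SparkContext; both constructors converge on a SparkContext plus a batchDuration:

// Alternative construction from an already-created SparkContext
val sc = new SparkContext(conf)
val ssc = new StreamingContext(sc, Seconds(5))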
2) At the start of the example, an InputDStream must be created from the data source:
val socketDStream = ssc.socketTextStream("Master", 9999)
The socketTextStream method is defined as follows:
/**
 * Create a input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the receive bytes is interpreted as UTF8 encoded `\n` delimited
 * lines.
 * @param hostname Hostname to connect to for receiving data
 * @param port Port to connect to for receiving data
 * @param storageLevel Storage level to use for storing the received objects
 * (default: StorageLevel.MEMORY_AND_DISK_SER_2)
 */
def socketTextStream(
    hostname: String,
    port: Int,
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[String] = withNamedScope("socket text stream") {
  socketStream[String](hostname, port, SocketReceiver.bytesToLines, storageLevel)
}
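Since storageLevel has a default value, callers can override it; for instance, to keep received blocks in memory only without replication:

import org.apache.spark.storage.StorageLevel
val lines = ssc.socketTextStream("Master", 9999, StorageLevel.MEMORY_ONLY)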
3) Note that socketTextStream ends by delegating to socketStream:
/**
 * Create a input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the receive bytes it interepreted as object using the given
 * converter.
 * @param hostname Hostname to connect to for receiving data
 * @param port Port to connect to for receiving data
 * @param converter Function to convert the byte stream to objects
 * @param storageLevel Storage level to use for storing the received objects
 * @tparam T Type of the objects received (after converting bytes to objects)
 */
def socketStream[T: ClassTag](
    hostname: String,
    port: Int,
    converter: (InputStream) => Iterator[T],
    storageLevel: StorageLevel
  ): ReceiverInputDStream[T] = {
  new SocketInputDStream[T](this, hostname, port, converter, storageLevel)
}
4) This in turn instantiates a SocketInputDStream:
private[streaming]
class SocketInputDStream[T: ClassTag](
    ssc_ : StreamingContext,
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends ReceiverInputDStream[T](ssc_) {
  def getReceiver(): Receiver[T] = {
    new SocketReceiver(host, port, bytesToObjects, storageLevel)
  }
}
SocketInputDStream extends ReceiverInputDStream. To summarize the inheritance chain: SocketInputDStream -> ReceiverInputDStream -> InputDStream -> DStream. Every DStream keeps the RDDs it has generated, one per batch time:
// RDDs generated, marked as private[streaming] so that testsuites can access it
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]] ()
DStream's getOrCompute method:
/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {
      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)
        }
      }
      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}
Its job is to produce the RDD for the given batch time and cache it in the HashMap; the RDD generation process itself will be analyzed later.
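Stripped of the streaming specifics, getOrCompute is a per-batch memoization pattern. A minimal self-contained sketch of the same idea (hypothetical names, not Spark code):

import scala.collection.mutable

// Simplified model of DStream's per-time RDD cache
class MemoizedGenerator[T](generate: Long => Option[T]) {
  private val generated = new mutable.HashMap[Long, T]()

  def getOrCompute(time: Long): Option[T] =
    generated.get(time).orElse {
      val result = generate(time)            // compute only on a cache miss
      result.foreach(generated.put(time, _)) // cache for later lookups
      result
    }
}

// usage: the generator function runs once per distinct time
val gen = new MemoizedGenerator[String](t => Some(s"rdd-for-$t"))
assert(gen.getOrCompute(5000L) == gen.getOrCompute(5000L))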
2.2 Starting the StreamingContext
In the example, ssc.start() starts the StreamingContext; the main logic lives in this start method.

Internally, StreamingContext.start() invokes JobScheduler's start method, which kicks off the message loop. Inside JobScheduler.start(), a JobGenerator and a ReceiverTracker are constructed, and their start methods are called:

1. Once started, the JobGenerator keeps generating Jobs according to batchDuration. A Job here is not a Spark Core job: it is merely the RDD DAG derived from the DStreamGraph; in Java terms it is like a Runnable instance. To actually run, a Job must be submitted to the JobScheduler, which uses a thread pool to pick a dedicated thread that submits the Job to the cluster (the real job run is triggered inside that thread by an RDD action).

2. Once started, the ReceiverTracker launches the Receivers in the Spark cluster (more precisely, a ReceiverSupervisor is started first on each Executor). When a Receiver receives data, the ReceiverSupervisor stores it on the Executor and sends the data's metadata to the ReceiverTracker on the Driver, which manages the received metadata through a ReceivedBlockTracker.
/**
 * Start the execution of the streams.
 *
 * @throws IllegalStateException if the StreamingContext is already stopped.
 */
def start(): Unit = synchronized {
  state match {
    case INITIALIZED =>
      startSite.set(DStream.getCreationSite())
      StreamingContext.ACTIVATION_LOCK.synchronized {
        StreamingContext.assertNoOtherContextIsActive()
        try {
          validate()
          // Start the streaming scheduler in a new thread, so that thread local properties
          // like call sites and job groups can be reset without affecting those of the
          // current thread.
          // Thread-local storage: each thread has its own private properties,
          // so setting them here does not affect other threads.
          ThreadUtils.runInNewThread("streaming-start") {
            sparkContext.setCallSite(startSite.get)
            sparkContext.clearJobGroup()
            sparkContext.setLocalProperty(SparkContext.SPARK_JOB_INTERRUPT_ON_CANCEL, "false")
            // start the JobScheduler
            scheduler.start()
          }
          state = StreamingContextState.ACTIVE
        } catch {
          case NonFatal(e) =>
            logError("Error starting the context, marking it as stopped", e)
            scheduler.stop(false)
            state = StreamingContextState.STOPPED
            throw e
        }
        StreamingContext.setActiveContext(this)
      }
      shutdownHookRef = ShutdownHookManager.addShutdownHook(
        StreamingContext.SHUTDOWN_HOOK_PRIORITY)(stopOnShutdown)
      // Registering Streaming Metrics at the start of the StreamingContext
      assert(env.metricsSystem != null)
      env.metricsSystem.registerSource(streamingSource)
      uiTab.foreach(_.attach())
      logInfo("StreamingContext started")
    case ACTIVE =>
      logWarning("StreamingContext has already been started")
    case STOPPED =>
      throw new IllegalStateException("StreamingContext has already been stopped")
  }
}
scheduler.start() is JobScheduler's start method:
def start(): Unit = synchronized {
  if (eventLoop != null) return // scheduler has already been started
  logDebug("Starting JobScheduler")
  eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
    override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)
    override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
  }
  // Start the message-loop thread that handles the JobScheduler's events.
  eventLoop.start()
  // attach rate controllers of input streams to receive batch completion updates
  for {
    inputDStream <- ssc.graph.getInputStreams
    // the rateController throttles the input rate
    rateController <- inputDStream.rateController
  } ssc.addStreamingListener(rateController)
  // Start the listener bus, which updates the Streaming tab in the Spark UI.
  listenerBus.start(ssc.sparkContext)
  receiverTracker = new ReceiverTracker(ssc)
  // Create the InputInfoTracker, which manages all input streams and their input
  // statistics; the information is exposed through StreamingListener.
  inputInfoTracker = new InputInfoTracker(ssc)
  // Start the ReceiverTracker, which handles data reception, caching and block generation.
  receiverTracker.start()
  // Start the JobGenerator, which initializes the DStreamGraph, converts DStreams
  // to RDDs, generates Jobs and submits them for execution.
  jobGenerator.start()
  logInfo("Started JobScheduler")
}
processEvent dispatches the JobScheduler events:
private def processEvent(event: JobSchedulerEvent) {
  try {
    event match {
      case JobStarted(job, startTime) => handleJobStart(job, startTime)
      case JobCompleted(job, completedTime) => handleJobCompletion(job, completedTime)
      case ErrorReported(m, e) => handleError(m, e)
    }
  } catch {
    case e: Throwable =>
      reportError("Error in job scheduler", e)
  }
}
Inside EventLoop, a daemon thread keeps draining a blocking event queue:
private val eventQueue: BlockingQueue[E] = new LinkedBlockingDeque[E]()
private val eventThread = new Thread(name) {
  setDaemon(true)
  override def run(): Unit = {
    try {
      while (!stopped.get) {
        val event = eventQueue.take()
        try {
          onReceive(event)
        } catch {
          case NonFatal(e) => {
            try {
              onError(e)
            } catch {
              case NonFatal(e) => logError("Unexpected error in " + name, e)
            }
          }
        }
      }
    } catch {
      case ie: InterruptedException => // exit even if eventQueue is not empty
      case NonFatal(e) => logError("Unexpected error in " + name, e)
    }
  }
}
The onReceive and onError methods run by this thread were defined when the EventLoop was instantiated inside JobScheduler.
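To make the pattern concrete, here is a minimal hedged sketch of the same queue-plus-daemon-thread event loop, independent of Spark's EventLoop class (all names hypothetical):

import java.util.concurrent.LinkedBlockingQueue

// Minimal event loop: a daemon thread draining a blocking queue (illustrative only)
class DemoEventLoop[E](name: String)(handler: E => Unit) {
  private val eventQueue = new LinkedBlockingQueue[E]()
  @volatile private var stopped = false

  private val eventThread = new Thread(name) {
    setDaemon(true)
    override def run(): Unit = {
      try {
        while (!stopped) {
          handler(eventQueue.take()) // blocks until an event arrives
        }
      } catch {
        case _: InterruptedException => // exit on stop()
      }
    }
  }

  def start(): Unit = eventThread.start()
  def post(event: E): Unit = eventQueue.put(event)
  def stop(): Unit = { stopped = true; eventThread.interrupt() }
}

// usage
val loop = new DemoEventLoop[String]("demo-loop")(e => println(s"handling $e"))
loop.start()
loop.post("JobStarted")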
ReceiverTracker's start method sets up its RPC endpoint and launches the receivers:
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }
  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}
/**
* Get the receivers from the ReceiverInputDStreams, distributes them to the
* worker nodes as a parallel collection, and runs them.
*/
private def launchReceivers(): Unit = {
val receivers = receiverInputStreams.map(nis => {
val rcvr = nis.getReceiver()
rcvr.setReceiverId(nis.id)
rcvr
})
runDummySparkJob()
logInfo("Starting " + receivers.length + " receivers")
endpoint.send(StartAllReceivers(receivers))
}
/**
* Run the dummy Spark job to ensure that all slaves have registered. This avoids all the
* receivers to be scheduled on the same node.
*
* TODO Should poll the executor number and wait for executors according to
* "spark.scheduler.minRegisteredResourcesRatio" and
* "spark.scheduler.maxRegisteredResourcesWaitingTime" rather than running a dummy job.
*/
private def runDummySparkJob(): Unit = {
if (!ssc.sparkContext.isLocal) {
ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
}
assert(getExecutors.nonEmpty)
}
ReceiverTracker.launchReceivers() then calls endpoint.send(StartAllReceivers(receivers)), sending the StartAllReceivers message through the RPC endpoint.
When ReceiverTrackerEndpoint receives this message, it first asks the scheduling policy which Executors each Receiver should run on, and then calls startReceiver(receiver, executors) to start them:
override def receive: PartialFunction[Any, Unit] = {
// Local messages
case StartAllReceivers(receivers) =>
val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
for (receiver <- receivers) {
val executors = scheduledLocations(receiver.streamId)
updateReceiverScheduledExecutors(receiver.streamId, executors)
receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
startReceiver(receiver, executors)
}
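The actual ReceiverSchedulingPolicy balances load and honors preferred locations; as a rough illustration only, a naive round-robin assignment would look like this:

// Naive round-robin receiver-to-executor assignment (not Spark's actual policy)
def scheduleRoundRobin(streamIds: Seq[Int], executors: Seq[String]): Map[Int, Seq[String]] = {
  require(executors.nonEmpty, "need at least one executor")
  streamIds.zipWithIndex.map { case (id, i) =>
    id -> Seq(executors(i % executors.length))
  }.toMap
}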
/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // ... irrelevant code omitted ...
  }
  // ... irrelevant code omitted ...

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        // instantiate the receiver's supervisor
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))
  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD,
    startReceiverFunc, // startReceiverFunc is passed in because it must run on the Executor
    Seq(0), (_, _) => Unit, ())
  // keep restarting the receiver job until the ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
supervisor.start() then runs on the Executor:
/** Start the supervisor */
def start() {
  onStart()
  startReceiver()
}
override protected def onStart() {
  registeredBlockGenerators.foreach { _.start() }
}
/** Start receiver */
def startReceiver(): Unit = synchronized {
try {
if (onReceiverStart()) {
logInfo("Starting receiver")
receiverState = Started
receiver.onStart()
logInfo("Called receiver onStart")
} else {
// The driver refused us
stop("Registered unsuccessfully because Driver refused to start receiver " + streamId, None)
}
} catch {
case NonFatal(t) =>
stop("Error starting receiver " + streamId, Some(t))
}
}
Before receiver.onStart() is called, startReceiver first invokes onReceiverStart, which registers the receiver with the Driver's ReceiverTracker over RPC:
override protected def onReceiverStart(): Boolean = {
  val msg = RegisterReceiver(
    streamId, receiver.getClass.getSimpleName, host, executorId, endpoint)
  trackerEndpoint.askWithRetry[Boolean](msg)
}
On the Driver side, ReceiverTrackerEndpoint answers the registration request:
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  // Remote messages
  case RegisterReceiver(streamId, typ, host, executorId, receiverEndpoint) =>
    val successful =
      registerReceiver(streamId, typ, host, executorId, receiverEndpoint, context.senderAddress)
    context.reply(successful)
/** Register a receiver */
private def registerReceiver(
streamId: Int,
typ: String,
host: String,
executorId: String,
receiverEndpoint: RpcEndpointRef,
senderAddress: RpcAddress
): Boolean = {
if (!receiverInputStreamIds.contains(streamId)) {
throw new SparkException("Register received for unexpected id " + streamId)
}
// ... irrelevant code omitted ...
if (!isAcceptable) {
// Refuse it since it's scheduled to a wrong executor
false
} else {
val name = s"${typ}-${streamId}"
val receiverTrackingInfo = ReceiverTrackingInfo(
streamId,
ReceiverState.ACTIVE,
scheduledLocations = None,
runningExecutor = Some(ExecutorCacheTaskLocation(host, executorId)),
name = Some(name),
endpoint = Some(receiverEndpoint))
receiverTrackingInfos.put(streamId, receiverTrackingInfo)
listenerBus.post(StreamingListenerReceiverStarted(receiverTrackingInfo.toReceiverInfo))
logInfo("Registered receiver for stream " + streamId + " from " + senderAddress)
true
}
}
When registration succeeds, the supervisor calls receiver.onStart(). In this example the receiver is SocketReceiver:
private[streaming]
class SocketReceiver[T: ClassTag](
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends Receiver[T](storageLevel) with Logging {

  def onStart() {
    // Start the thread that receives data over a connection
    new Thread("Socket Receiver") {
      setDaemon(true)
      override def run() { receive() }
    }.start()
  }

  /** Create a socket connection and receive data until receiver is stopped */
  def receive() {
    var socket: Socket = null
    try {
      logInfo("Connecting to " + host + ":" + port)
      socket = new Socket(host, port)
      logInfo("Connected to " + host + ":" + port)
      val iterator = bytesToObjects(socket.getInputStream())
      while(!isStopped && iterator.hasNext) {
        store(iterator.next)
      }
      if (!isStopped()) {
        restart("Socket data stream had no more data")
      } else {
        logInfo("Stopped receiving")
      }
    } catch {
      // ... irrelevant code omitted ...
    }
  }
}
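A user-defined receiver follows the same contract as SocketReceiver: implement onStart/onStop and hand each record to store. A minimal hedged sketch (hypothetical class, mirroring the pattern above):

import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Illustrative custom receiver: reads text lines from a socket
class DemoLineReceiver(host: String, port: Int)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    // Receive on a separate daemon thread so onStart returns immediately
    new Thread("Demo Receiver") {
      setDaemon(true)
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = { } // the receiving thread checks isStopped() and exits

  private def receive(): Unit = {
    try {
      val socket = new Socket(host, port)
      val reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
      var line = reader.readLine()
      while (!isStopped && line != null) {
        store(line) // hand each record to the ReceiverSupervisor
        line = reader.readLine()
      }
      reader.close()
      socket.close()
      restart("Trying to connect again")
    } catch {
      case e: Throwable => restart("Error receiving data", e)
    }
  }
}

Such a receiver would be plugged in with ssc.receiverStream(new DemoLineReceiver("Master", 9999)) instead of socketTextStream.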
With the receiving side up, turn to job generation. The JobGenerator owns a RecurringTimer that, based on the batchDuration passed when the StreamingContext was created, periodically posts GenerateJobs messages:
private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")
JobGenerator's start() method:
/** Start generation of jobs */
def start(): Unit = synchronized {
  if (eventLoop != null) return // generator has already been started

  // Call checkpointWriter here to initialize it before eventLoop uses it to avoid a deadlock.
  // See SPARK-10125
  checkpointWriter

  eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
    override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)
    override protected def onError(e: Throwable): Unit = {
      jobScheduler.reportError("Error in job generator", e)
    }
  }
  // start the message-loop thread
  eventLoop.start()

  if (ssc.isCheckpointPresent) {
    restart()
  } else {
    // start the timer that periodically generates Jobs
    startFirstTime()
  }
}
/** Starts the generator for the first time */
private def startFirstTime() {
val startTime = new Time(timer.getStartTime())
graph.start(startTime - graph.batchDuration)
timer.start(startTime.milliseconds)
logInfo("Started JobGenerator at " + startTime)
}
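timer.getStartTime() aligns the first batch to the next multiple of batchDuration, so batch boundaries land on clean multiples of the interval. A small sketch of that alignment arithmetic (assumed from RecurringTimer's behavior, not quoted from it):

// First firing time: the next multiple of `period` after `now`
def alignedStartTime(now: Long, period: Long): Long =
  (math.floor(now.toDouble / period).toLong + 1) * period

// e.g. now = 12345 ms, period = 5000 ms => the first batch fires at 15000 ms
assert(alignedStartTime(12345L, 5000L) == 15000L)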
/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    case ClearMetadata(time) => clearMetadata(time)
    case DoCheckpoint(time, clearCheckpointDataLater) =>
      doCheckpoint(time, clearCheckpointDataLater)
    case ClearCheckpointData(time) => clearCheckpointData(time)
  }
}
The definition of generateJobs:
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    // retrieve the concrete data for this batch time
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    // call DStreamGraph's generateJobs to generate the Jobs
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
DStreamGraph keeps the registered output streams and generates one Job per output operation:
// output streams: the output operations triggered by actions
private val outputStreams = new ArrayBuffer[DStream[_]]()

def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
Each output DStream's generateJob wraps the materialization of its RDD for the given time:
/**
 * Generate a SparkStreaming job for the given time. This is an internal method that
 * should not be called directly. This default implementation creates a job
 * that materializes the corresponding RDD. Subclasses of DStream may override this
 * to generate their own jobs.
 */
private[streaming] def generateJob(time: Time): Option[Job] = {
  getOrCompute(time) match {
    case Some(rdd) => {
      val jobFunc = () => {
        val emptyFunc = { (iterator: Iterator[T]) => {} }
        context.sparkContext.runJob(rdd, emptyFunc)
      }
      Some(new Job(time, jobFunc))
    }
    case None => None
  }
}
Next, look at JobScheduler's submitJobSet method, which submits a JobHandler to the thread pool for every Job. JobHandler implements the Runnable interface and ultimately calls job.run(). In the Job class definition below, the func invoked by run is the jobFunc passed in when the Job was constructed; it contains the context.sparkContext.runJob(rdd, emptyFunc) call, which finally submits the Spark job.
def submitJobSet(jobSet: JobSet) {
if (jobSet.jobs.isEmpty) {
logInfo("No jobs added for time " + jobSet.time)
} else {
listenerBus.post(StreamingListenerBatchSubmitted(jobSet.toBatchInfo))
jobSets.put(jobSet.time, jobSet)
jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
logInfo("Added jobs for time " + jobSet.time)
}
}
private class JobHandler(job: Job) extends Runnable with Logging {
  import JobScheduler._

  def run() {
    try {
      // ... irrelevant code omitted ...

      // We need to assign `eventLoop` to a temp variable. Otherwise, because
      // `JobScheduler.stop(false)` may set `eventLoop` to null when this method is running, then
      // it's possible that when `post` is called, `eventLoop` happens to null.
      var _eventLoop = eventLoop
      if (_eventLoop != null) {
        _eventLoop.post(JobStarted(job, clock.getTimeMillis()))
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          job.run()
        }
        _eventLoop = eventLoop
        if (_eventLoop != null) {
          _eventLoop.post(JobCompleted(job, clock.getTimeMillis()))
        }
      } else {
        // JobScheduler has been stopped.
      }
    } finally {
      ssc.sc.setLocalProperty(JobScheduler.BATCH_TIME_PROPERTY_KEY, null)
      ssc.sc.setLocalProperty(JobScheduler.OUTPUT_OP_ID_PROPERTY_KEY, null)
    }
  }
}
private[streaming]
class Job(val time: Time, func: () => _) {
  private var _id: String = _
  private var _outputOpId: Int = _
  private var isSet = false
  private var _result: Try[_] = null
  private var _callSite: CallSite = null
  private var _startTime: Option[Long] = None
  private var _endTime: Option[Long] = None

  def run() {
    _result = Try(func())
  }
  // ... remaining members omitted ...
}
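Putting it all together, the driver-side skeleton is: a timer posts a batch event, the generator turns the DStream graph into Jobs, and a thread pool runs each Job, whose body fires the real RDD action. A toy end-to-end model of that skeleton (illustrative names only, not Spark code):

import java.util.concurrent.{Executors, TimeUnit}
import scala.util.Try

// Toy model of the driver loop: timer -> generate jobs -> thread pool -> run
class ToyJob(val time: Long, func: () => Unit) {
  def run(): Unit = Try(func()).failed.foreach(e => println(s"job at $time failed: $e"))
}

object ToyStreamingDriver extends App {
  val batchMillis = 1000L
  val jobExecutor = Executors.newFixedThreadPool(2)   // stands in for JobScheduler's pool
  val timer = Executors.newSingleThreadScheduledExecutor()

  def generateJobs(time: Long): Seq[ToyJob] =         // stands in for DStreamGraph.generateJobs
    Seq(new ToyJob(time, () => println(s"running batch $time")))

  timer.scheduleAtFixedRate(new Runnable {
    override def run(): Unit = {
      val time = System.currentTimeMillis()
      generateJobs(time).foreach(job => jobExecutor.execute(new Runnable {
        override def run(): Unit = job.run()          // JobHandler.run -> job.run
      }))
    }
  }, 0, batchMillis, TimeUnit.MILLISECONDS)

  Thread.sleep(5 * batchMillis)                       // let a few batches fire
  timer.shutdown(); jobExecutor.shutdown()
}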