Spark Analysis: The Standalone Mode Execution Process
1. Cluster Startup -- Starting the Master
$SPARK_HOME/sbin/start-master.sh
The key line in start-master.sh:
spark-daemon.sh start org.apache.spark.deploy.master.Master 1 --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT
Log output ($SPARK_HOME/logs/):
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@hadoop000:7077]
// :: INFO master.Master: Starting Spark master at spark://hadoop000:7077
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:
// :: INFO ui.MasterWebUI: Started MasterWebUI at http://hadoop000:8080
// :: INFO master.Master: I have been elected leader! New state: ALIVE
2. Cluster Startup -- Starting the Workers
$SPARK_HOME/sbin/start-slaves.sh
The key line in start-slaves.sh:
spark-daemon.sh start org.apache.spark.deploy.worker.Worker master-spark-URL
When a Worker starts, it must register with the specified master URL -- here, spark://hadoop000:7077.
After startup, the Worker does two main things:
1) it registers itself with the Master (RegisterWorker);
2) it periodically sends heartbeat messages to the Master.
The Worker sends its registration request to the Master:
Worker.scala
==>preStart
==>registerWithMaster
==>tryRegisterAllMasters
==> actor ! RegisterWorker(workerId, host, port, cores, memory, webUi.boundPort, publicAddress)
On the Master side, the RegisterWorker message is handled as follows:
Master.scala
==>case RegisterWorker(id, workerHost, workerPort, cores, memory, workerUiPort, publicAddress) => {
val worker = new WorkerInfo(id, workerHost, workerPort, cores, memory,
sender, workerUiPort, publicAddress)
if (registerWorker(worker)) {
persistenceEngine.addWorker(worker)
sender ! RegisteredWorker(masterUrl, masterWebUiUrl) // after a successful registration, acknowledge the Worker
schedule()
}
}
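To make the message flow above easier to follow, here is a small, self-contained sketch that models the registration handshake in plain Scala. The message and field names mirror the excerpts, but the real code exchanges these messages between Akka actors, so treat this only as an illustration:
object RegistrationSketch {
  case class RegisterWorker(id: String, host: String, port: Int, cores: Int, memory: Int)
  case class RegisteredWorker(masterUrl: String, masterWebUiUrl: String)

  // Master-side state: worker id -> registered worker info
  private val idToWorker = scala.collection.mutable.Map[String, RegisterWorker]()

  // Returns the reply the Master would send back, or None if the id is already registered.
  def handleRegisterWorker(msg: RegisterWorker): Option[RegisteredWorker] = {
    if (idToWorker.contains(msg.id)) {
      None                                  // duplicate registrations are rejected
    } else {
      idToWorker(msg.id) = msg              // record the worker, then acknowledge it
      Some(RegisteredWorker("spark://hadoop000:7077", "http://hadoop000:8080"))
    }
  }

  def main(args: Array[String]): Unit = {
    val req = RegisterWorker("worker-20140722134135-hadoop000-48343", "hadoop000", 48343, 1, 2048)
    println(handleRegisterWorker(req))      // Some(RegisteredWorker(...)) on first registration
    println(handleRegisterWorker(req))      // None on a duplicate
  }
}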
After receiving the registration acknowledgment from the Master, the Worker starts sending periodic heartbeats to the Master:
Worker.scala
==>case SendHeartbeat =>
masterLock.synchronized {if (connected) { master ! Heartbeat(workerId) }
}
When the Master receives a heartbeat from a Worker, it updates that Worker's last-heartbeat timestamp:
Master.scala
==>case Heartbeat(workerId) => {
idToWorker.get(workerId) match {
case Some(workerInfo) =>
workerInfo.lastHeartbeat = System.currentTimeMillis()
}
}
The Master periodically removes Workers that have not sent a heartbeat within the timeout:
Master.scala
==>preStart
==>CheckForWorkerTimeOut
==>case CheckForWorkerTimeOut => {timeOutDeadWorkers()} //Check for, and remove, any timed-out workers
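Putting the heartbeat pieces together, here is a minimal sketch of the bookkeeping described above: the Master keeps a last-heartbeat timestamp per Worker and periodically drops Workers whose timestamp is older than a timeout. Names such as lastHeartbeat and timeOutDeadWorkers follow the excerpts; the timeout value and data structures here are simplified assumptions, not the real implementation:
object HeartbeatSketch {
  final case class WorkerInfo(id: String, var lastHeartbeat: Long)

  private val idToWorker = scala.collection.mutable.Map[String, WorkerInfo]()
  private val workerTimeoutMs = 60 * 1000L   // assumed timeout; the real value is configurable

  def handleHeartbeat(workerId: String): Unit =
    idToWorker.get(workerId).foreach(_.lastHeartbeat = System.currentTimeMillis())

  def timeOutDeadWorkers(): Unit = {
    val deadline = System.currentTimeMillis() - workerTimeoutMs
    val dead = idToWorker.values.filter(_.lastHeartbeat < deadline).toList
    dead.foreach { w =>
      println(s"Removing worker ${w.id}: no heartbeat in ${workerTimeoutMs / 1000} seconds")
      idToWorker -= w.id
    }
  }

  def main(args: Array[String]): Unit = {
    idToWorker("worker-1") = WorkerInfo("worker-1", System.currentTimeMillis() - 2 * workerTimeoutMs)
    idToWorker("worker-2") = WorkerInfo("worker-2", System.currentTimeMillis())
    handleHeartbeat("worker-2")   // refreshes worker-2's timestamp
    timeOutDeadWorkers()          // only worker-1 is removed
    println(idToWorker.keys)      // Set(worker-2)
  }
}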
Log output ($SPARK_HOME/logs/):
Excerpt from the Master log:
14/07/22 13:41:36 INFO master.Master: Registering worker hadoop000:48343 with 1 cores, 2.0 GB RAM
Excerpt from the Worker log:
14/07/22 13:41:35 INFO Worker: Starting Spark worker hadoop000:48343 with 1 cores, 2.0 GB RAM
14/07/22 13:41:35 INFO Worker: Spark home: /home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0
14/07/22 13:41:35 INFO WorkerWebUI: Started WorkerWebUI at http://hadoop000:8081
14/07/22 13:41:35 INFO Worker: Connecting to master spark://hadoop000:7077...
14/07/22 13:41:36 INFO Worker: Successfully registered with master spark://hadoop000:7077
3. Application Submission Process
A. Submitting an application
Run spark-shell: $SPARK_HOME/bin/spark-shell --master spark://hadoop000:7077
Log output: $SPARK_HOME/work
spark-shell is itself an application. During SparkContext startup, createTaskScheduler builds a SparkDeploySchedulerBackend, and in the course of setting it up the following AppClient is created and started:
client = new AppClient(sc.env.actorSystem, masters, appDesc, this, conf)
client.start()
The AppClient then sends a RegisterApplication request to the Master:
AppClient.scala
==>preStart
==>registerWithMaster
==>tryRegisterAllMasters
==>actor ! RegisterApplication(appDescription)
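spark-shell is only a convenience wrapper: any program that builds a SparkContext against the same master URL goes through the same RegisterApplication path. A minimal example (the object name and the job itself are illustrative):
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Simple App")
      .setMaster("spark://hadoop000:7077")   // the same master URL the Workers registered with
    val sc = new SparkContext(conf)          // creating the context triggers RegisterApplication

    // a trivial job, just to have something run on the executors
    val counts = sc.parallelize(1 to 100).map(_ % 10).countByValue()
    println(counts)

    sc.stop()
  }
}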
B. The Master handles the RegisterApplication request
On the Master side this request is handled by the RegisterApplication case. After registering the application, the Master runs schedule(): if Workers have already registered, it sends a LaunchExecutor command to the chosen Worker(s).
Master.scala
==>case RegisterApplication(description) => {
logInfo("Registering app " + description.name)
val app = createApplication(description, sender)
registerApplication(app)
logInfo("Registered app " + description.name + " with ID " + app.id)
persistenceEngine.addApplication(app)
sender ! RegisteredApplication(app.id, masterUrl)
schedule()
}
==>schedule
==>launchExecutor(worker, exec)
==> worker.addExecutor(exec)
worker.actor ! LaunchExecutor(masterUrl,exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)
exec.application.driver ! ExecutorAdded(exec.id, worker.id, worker.hostPort, exec.cores, exec.memory)
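For intuition, here is a simplified sketch of the decision schedule() makes: spread the cores an application still needs across the registered Workers that have free cores, round robin. The real schedule() also honors spark.deploy.spreadOut, memory requirements, and per-application limits, all omitted here, so treat this as an illustration of the idea rather than the actual algorithm:
object ScheduleSketch {
  case class Worker(id: String, totalCores: Int, var coresUsed: Int = 0) {
    def coresFree: Int = totalCores - coresUsed
  }

  // Returns how many cores end up assigned on each worker for one application.
  def assignCores(coresWanted: Int, workers: Seq[Worker]): Map[String, Int] = {
    val assigned = scala.collection.mutable.Map[String, Int]().withDefaultValue(0)
    val usable = workers.filter(_.coresFree > 0)
    var remaining = coresWanted
    var pos = 0
    while (remaining > 0 && usable.exists(_.coresFree > 0)) {
      val w = usable(pos % usable.size)     // round-robin over the usable workers
      if (w.coresFree > 0) {
        w.coresUsed += 1
        assigned(w.id) += 1
        remaining -= 1
      }
      pos += 1
    }
    assigned.toMap
  }

  def main(args: Array[String]): Unit = {
    val workers = Seq(Worker("worker-a", 4), Worker("worker-b", 2))
    println(assignCores(5, workers))        // e.g. Map(worker-a -> 3, worker-b -> 2)
  }
}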
C. Launching the Executor
Upon receiving the LaunchExecutor command, the Worker starts an Executor process:
Worker.scala
==>case LaunchExecutor(masterUrl, appId, execId, appDesc, cores_, memory_) =>
logInfo("Asked to launch executor %s/%d for %s".format(appId, execId, appDesc.name))
val manager = new ExecutorRunner(appId, execId, appDesc, cores_, memory_,
self, workerId, host,
appDesc.sparkHome.map(userSparkHome => new File(userSparkHome)).getOrElse(sparkHome),
workDir, akkaUrl, ExecutorState.RUNNING)
executors(appId + "/" + execId) = manager
manager.start()
coresUsed += cores_
memoryUsed += memory_
masterLock.synchronized {master ! ExecutorStateChanged(appId, execId, manager.state, None, None)}
}
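The Worker log further below shows the java command that ExecutorRunner eventually runs. As a rough sketch of how such a command line is assembled from the application description and worker settings (the paths, JVM options, and argument list here are illustrative, not the exact ones ExecutorRunner produces):
object LaunchCommandSketch {
  def buildCommand(classpath: String, memoryMb: Int, driverUrl: String, execId: Int,
                   hostname: String, cores: Int, workerUrl: String, appId: String): Seq[String] =
    Seq(
      "java", "-cp", classpath,
      s"-Xms${memoryMb}M", s"-Xmx${memoryMb}M",
      "org.apache.spark.executor.CoarseGrainedExecutorBackend",
      driverUrl, execId.toString, hostname, cores.toString, workerUrl, appId)

  def main(args: Array[String]): Unit =
    println(buildCommand("/path/to/spark-assembly.jar", 1024,
      "akka.tcp://spark@hadoop000:50515/user/CoarseGrainedScheduler",
      0, "hadoop000", 1,
      "akka.tcp://sparkWorker@hadoop000:48343/user/Worker",
      "app-20140722152527-0001").mkString(" "))
}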
D. Registering the Executor
Using the arguments it was started with, the new Executor process registers itself with the SchedulerBackend in the Driver:
SparkDeploySchedulerBackend.scala
==>preStart (CoarseGrainedSchedulerBackend)
==> case RegisterExecutor(executorId, hostPort, cores) =>
logInfo("Registered executor: " + sender + " with ID " + executorId)
sender ! RegisteredExecutor(sparkProperties)
executorActor(executorId) = sender
executorHost(executorId) = Utils.parseHostPort(hostPort)._1
totalCores(executorId) = cores
freeCores(executorId) = cores
executorAddress(executorId) = sender.path.address
addressToExecutorId(sender.path.address) = executorId
totalCoreCount.addAndGet(cores)
makeOffers()
CoarseGrainedExecutorBackend.scala
case RegisteredExecutor(sparkProperties) =>
logInfo("Successfully registered with driver")
executor = new Executor(executorId, Utils.parseHostPort(hostPort)._1, sparkProperties, false)
Executor log locations: the console and $SPARK_HOME/logs
E. Running Tasks
Example code:
sc.textFile("hdfs://hadoop000:8020/hello.txt").flatMap(_.split('\t')).map((_,1)).reduceByKey(_+_).collect
After the SchedulerBackend receives the Executor's registration, it breaks the submitted Spark job down into concrete Tasks and distributes them to the Executors via LaunchTask commands, where they actually run.
CoarseGrainedSchedulerBackend.scala
def makeOffers() {
launchTasks(scheduler.resourceOffers(
executorHost.toArray.map {case (id, host) => new WorkerOffer(id, host, freeCores(id))}))
}
==>executorActor(task.executorId) ! LaunchTask(new SerializableBuffer(serializedTask))
==>CoarseGrainedExecutorBackend case LaunchTask(data) =>
if (executor == null) {
logError("Received LaunchTask command but executor was null")
System.exit(1)
} else {
val ser = SparkEnv.get.closureSerializer.newInstance()
val taskDesc = ser.deserialize[TaskDescription](data.value)
logInfo("Got assigned task " + taskDesc.taskId)
executor.launchTask(this, taskDesc.taskId, taskDesc.serializedTask)
}
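To summarize the offer loop driven by makeOffers() above: each registered Executor is turned into an offer of its free cores, and pending Tasks are matched against those offers. The sketch below only illustrates that shape under simplified assumptions (one core per task, offers filled in order); task contents, locality preferences, and serialization are elided:
object ResourceOfferSketch {
  case class WorkerOffer(executorId: String, host: String, cores: Int)
  case class TaskDescription(taskId: Long, executorId: String)

  // Assign one core per task, filling each offer before moving on to the next one.
  def resourceOffers(offers: Seq[WorkerOffer], pendingTasks: Seq[Long]): Seq[TaskDescription] = {
    val slots = offers.flatMap(o => Seq.fill(o.cores)(o.executorId))
    pendingTasks.zip(slots).map { case (taskId, execId) => TaskDescription(taskId, execId) }
  }

  def main(args: Array[String]): Unit = {
    val offers = Seq(WorkerOffer("0", "hadoop000", 1), WorkerOffer("1", "hadoop001", 2))
    println(resourceOffers(offers, pendingTasks = Seq(0L, 1L, 2L)))
    // List(TaskDescription(0,0), TaskDescription(1,1), TaskDescription(2,1))
  }
}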
Excerpt from the Master log:
14/07/22 15:25:27 INFO master.Master: Registering app Spark shell
14/07/22 15:25:27 INFO master.Master: Registered app Spark shell with ID app-20140722152527-0001
14/07/22 15:25:27 INFO master.Master: Launching executor app-20140722152527-0001/0 on worker worker-20140722134135-hadoop000-48343
Excerpt from the Worker log:
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/22 15:25:27 INFO Worker: Asked to launch executor app-20140722152527-0001/0 for Spark shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/22 15:25:28 INFO ExecutorRunner: Launch command: "java" "-cp" "::/home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0/conf:/home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0/lib/spark-assembly-1.0.1-hadoop2.3.0-cdh5.0.0.jar:/home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0/lib/datanucleus-rdbms-3.2.1.jar:/home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0/lib/datanucleus-core-3.2.2.jar:/home/spark/app/spark-1.0.1-bin-2.3.0-cdh5.0.0/lib/datanucleus-api-jdo-3.2.1.jar" "-XX:MaxPermSize=128m" "-Xms1024M" "-Xmx1024M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@hadoop000:50515/user/CoarseGrainedScheduler" "0" "hadoop000" "1" "akka.tcp://sparkWorker@hadoop000:48343/user/Worker" "app-20140722152527-0001"
Excerpt from the console (driver) log:
14/07/22 15:25:31 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@hadoop000:45150/user/Executor#-791712793] with ID 0
14/07/22 15:25:31 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
Whenever a new application registers with the Master, the Master calls schedule() to assign the application to suitable Workers; each chosen Worker launches the corresponding ExecutorBackend, and the Tasks ultimately run inside those ExecutorBackends.