Spark --- Startup, Execution, and Shutdown Process
// scalastyle:off println
package org.apache.spark.examples

import scala.math.random

import org.apache.spark._

/** Computes an approximation to pi */
object SparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Pi")
    val spark = new SparkContext(conf)
    // number of partitions; defaults to 2 if not given on the command line
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      // sample a random point in the square [-1, 1] x [-1, 1]
      val x = random * 2 - 1
      val y = random * 2 - 1
      // count the point if it falls inside the unit circle
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
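Note that SparkPi sets only an application name and leaves the master URL to the launcher: run-example and spark-submit inject it at submission time. A minimal sketch of running the same computation self-contained in local mode, assuming Spark 1.6 on the classpath (the local[2] master, object name, and sample count here are illustrative assumptions, not part of the original example):

import scala.math.random
import org.apache.spark.{SparkConf, SparkContext}

object LocalPi {
  def main(args: Array[String]) {
    // local[2]: driver plus two worker threads in one JVM, so no cluster
    // and no external launcher are needed
    val sc = new SparkContext(
      new SparkConf().setAppName("Local Pi").setMaster("local[2]"))
    val count = sc.parallelize(1 until 100000, 2).map { _ =>
      val (x, y) = (random * 2 - 1, random * 2 - 1)
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / 100000)
    sc.stop()
  }
}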
Launching the example with run-example produces the following log; the # comments annotate each phase.

[abc@search-engine---dev4 spark]$ ./bin/run-example SparkPi
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
// :: INFO SparkContext: Running Spark version 1.6.1
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
# ACL-based user permission checks
// :: INFO SecurityManager: Changing view acls to: abc
// :: INFO SecurityManager: Changing modify acls to: abc
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(abc); users with modify permissions: Set(abc)
// :: INFO Utils: Successfully started service 'sparkDriver' on port .
// :: INFO Slf4jLogger: Slf4jLogger started
# Start the remote listening service on port 36739; Spark's communication here is handled by Akka
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@127.0.0.1:36739]
// :: INFO Utils: Successfully started service 'sparkDriverActorSystem' on port .
# Register the MapOutputTracker, BlockManagerMaster, and BlockManager
// :: INFO SparkEnv: Registering MapOutputTracker
// :: INFO SparkEnv: Registering BlockManagerMaster
# Allocate storage space, both on disk and in memory
// :: INFO DiskBlockManager: Created local directory at /tmp/blockmgr-8a68c39e-40e5-43ca-b21e-081ef8d278e2
// :: INFO MemoryStore: MemoryStore started with capacity 511.1 MB
// :: INFO SparkEnv: Registering OutputCommitCoordinator
// :: INFO Utils: Successfully started service 'SparkUI' on port .
// :: INFO SparkUI: Started SparkUI at http://127.0.0.1:4040
// :: INFO HttpFileServer: HTTP File server directory is /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c---f4aa7d32bb04
# Start the HTTP server; service and task status can be inspected through the web UI
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP file server' on port .
# SparkContext starts and serves the locally built jar at http://127.0.0.1:54315
// :: INFO SparkContext: Added JAR file:/usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966
// :: INFO Executor: Starting executor ID driver on host localhost
// :: INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO NettyBlockTransferService: Server created on
// :: INFO BlockManagerMaster: Trying to register BlockManager
// :: INFO BlockManagerMasterEndpoint: Registering block manager localhost: with 511.1 MB RAM, BlockManagerId(driver, localhost, )
// :: INFO BlockManagerMaster: Registered BlockManager
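Each service in the startup log above corresponds to a SparkConf setting. A hedged sketch of the standard Spark 1.6 configuration keys behind these lines (the values are illustrative; by default the ports are chosen automatically):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("Spark Pi")
  .set("spark.driver.port", "36739")   // the 'sparkDriver'/Akka remoting port
  .set("spark.ui.port", "4040")        // SparkUI ("Started SparkUI at ...")
  .set("spark.authenticate", "false")  // SecurityManager: authentication disabled
  .set("spark.local.dir", "/tmp")      // parent of the blockmgr-*/spark-* scratch dirs
// Creating the context is what triggers SparkEnv registration, BlockManager
// startup, the web UI, and the HTTP file server seen in the log.
val sc = new SparkContext(conf)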
# Spark submits a job to the DAGScheduler
// :: INFO SparkContext: Starting job: reduce at SparkPi.scala:
# The DAGScheduler receives job 0, which has 2 output partitions
// :: INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:) with 2 output partitions
# The job is turned into stage 0
// :: INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:)
# Before submitting a stage, the DAGScheduler first looks up the stage's parents. If no parents are missing, the stage is submitted directly;
# otherwise the parent stages are submitted recursively. Stage 0 is then split into 2 tasks, which are handed to TaskScheduler.submitTasks.
# A simple job with no dependencies and only a single partition can be run on a local thread rather than being submitted to the TaskScheduler.
// :: INFO DAGScheduler: Parents of final stage: List()
// :: INFO DAGScheduler: Missing parents: List()
// :: INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[] at map at SparkPi.scala:), which has no missing parents
// :: INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
// :: INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1218.0 B, free 3.0 KB)
// :: INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost: (size: 1218.0 B, free: 511.1 MB)
// :: INFO SparkContext: Created broadcast from broadcast at DAGScheduler.scala:
// :: INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[] at map at SparkPi.scala:)
# TaskSchedulerImpl, the implementation of TaskScheduler, receives the 2 tasks submitted by the DAGScheduler
// :: INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
// :: INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID , localhost, partition ,PROCESS_LOCAL, bytes)
// :: INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID , localhost, partition ,PROCESS_LOCAL, bytes)
# After accepting its tasks, the executor fetches the jar from the remote server to local disk, runs the computation, and reports each task's status
// :: INFO Executor: Running task 1.0 in stage 0.0 (TID )
// :: INFO Executor: Running task 0.0 in stage 0.0 (TID )
// :: INFO Executor: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966
// :: INFO Utils: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/fetchFileTemp4760324069006875921.tmp
// :: INFO Executor: Adding file:/tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
// :: INFO Executor: Finished task 1.0 in stage 0.0 (TID ). bytes result sent to driver
// :: INFO Executor: Finished task 0.0 in stage 0.0 (TID ). bytes result sent to driver
# TaskSetManager and SparkContext each receive the task completion reports
// :: INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID ) in ms on localhost (/)
// :: INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID ) in ms on localhost (/)
// :: INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:) finished in 2.217 s
// :: INFO DAGScheduler: Job finished: reduce at SparkPi.scala:, took 2.877995 s
# The program's result is printed
Pi is roughly 3.14282
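The "Parents of final stage: List()" and "Missing parents: List()" lines follow from SparkPi's lineage: parallelize, map, reduce contains no shuffle, so the whole job is a single ResultStage. For contrast, a small sketch (sc is a SparkContext; the data is made up for illustration) of a job where the DAGScheduler must first submit a parent ShuffleMapStage:

// SparkPi's pattern: a narrow lineage, one ResultStage, tasks = partitions.
val pi = sc.parallelize(1 until 100000, 2).map(_ => 1).reduce(_ + _)

// reduceByKey introduces a shuffle dependency, so the DAGScheduler submits a
// ShuffleMapStage first and only then the final ResultStage over its output.
val counts = sc.parallelize(Seq("a", "b", "a"), 2)
  .map(word => (word, 1))
  .reduceByKey(_ + _)   // shuffle boundary => parent stage
  .collect()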
# Spark services shut down
// :: INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
// :: INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO MemoryStore: MemoryStore cleared
// :: INFO BlockManager: BlockManager stopped
// :: INFO BlockManagerMaster: BlockManagerMaster stopped
// :: INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
// :: INFO SparkContext: Successfully stopped SparkContext
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
// :: INFO ShutdownHookManager: Shutdown hook called
// :: INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c---f4aa7d32bb04
// :: INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7
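This teardown is driven by the spark.stop() call at the end of main; the ShutdownHookManager pass at the end then deletes the scratch directories. A common defensive pattern, shown here as a sketch rather than as part of the original example, is to guarantee the stop in a finally block so resources are released even when the job throws:

val sc = new SparkContext(new SparkConf().setAppName("Spark Pi"))
try {
  // job body: builds RDDs and runs actions
  val count = sc.parallelize(1 to 100, 2).map(_ * 2).reduce(_ + _)
  println(count)
} finally {
  // releases the executors, web UI, and BlockManager storage; the shutdown
  // hook seen in the log is the safety net, not the intended cleanup path
  sc.stop()
}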