Spark Error Log Summary
1. Error when running spark-submit
The job was submitted as follows:
# ./spark-submit --class org.apache.spark.examples.SparkPi /hadoop/spark/examples/jars/spark-examples_2.11-2.4.0.jar 100
The error output was as follows:
2019-02-22 09:56:26 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/1 is now RUNNING
2019-02-22 09:56:26 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, kvm-test, 36768, None)
2019-02-22 09:56:26 INFO BlockManagerMasterEndpoint:54 - Registering block manager kvm-test:36768 with 366.3 MB RAM, BlockManagerId(driver, kvm-test, 36768, None)
2019-02-22 09:56:26 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, kvm-test, 36768, None)
2019-02-22 09:56:26 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, kvm-test, 36768, None)
2019-02-22 09:56:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5aae8eb5{/metrics/json,null,AVAILABLE,@Spark}
2019-02-22 09:56:27 INFO EventLoggingListener:54 - Logging events to hdfs://hadoop-cluster/spark/eventLog/app-20190222015626-0020.snappy
2019-02-22 09:56:27 INFO StandaloneSchedulerBackend:54 - SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
2019-02-22 09:56:28 INFO SparkContext:54 - Starting job: reduce at SparkPi.scala:38
2019-02-22 09:56:28 INFO DAGScheduler:54 - Got job 0 (reduce at SparkPi.scala:38) with 100 output partitions
2019-02-22 09:56:28 INFO DAGScheduler:54 - Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
2019-02-22 09:56:28 INFO DAGScheduler:54 - Parents of final stage: List()
2019-02-22 09:56:28 INFO DAGScheduler:54 - Missing parents: List()
2019-02-22 09:56:28 INFO DAGScheduler:54 - Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
2019-02-22 09:56:28 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/1 is now EXITED (Command exited with code 1)
2019-02-22 09:56:28 INFO StandaloneSchedulerBackend:54 - Executor app-20190222015626-0020/1 removed: Command exited with code 1
2019-02-22 09:56:28 INFO StandaloneAppClient$ClientEndpoint:54 - Executor added: app-20190222015626-0020/2 on worker-20190111083714-172.20.1.1-45882 (172.20.1.1:45882) with 1 core(s)
2019-02-22 09:56:28 INFO StandaloneSchedulerBackend:54 - Granted executor ID app-20190222015626-0020/2 on hostPort 172.20.1.1:45882 with 1 core(s), 512.0 MB RAM
2019-02-22 09:56:28 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/2 is now RUNNING
2019-02-22 09:56:28 INFO BlockManagerMaster:54 - Removal of executor 1 requested
2019-02-22 09:56:28 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:54 - Asked to remove non-existent executor 1
2019-02-22 09:56:28 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 1 from BlockManagerMaster.
2019-02-22 09:56:28 INFO MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 1936.0 B, free 366.3 MB)
2019-02-22 09:56:28 INFO MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 1236.0 B, free 366.3 MB)
2019-02-22 09:56:28 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on kvm-test:36768 (size: 1236.0 B, free: 366.3 MB)
2019-02-22 09:56:28 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2019-02-22 09:56:28 INFO DAGScheduler:54 - Submitting 100 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
2019-02-22 09:56:28 INFO TaskSchedulerImpl:54 - Adding task set 0.0 with 100 tasks
2019-02-22 09:56:29 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/2 is now EXITED (Command exited with code 1)
2019-02-22 09:56:29 INFO StandaloneSchedulerBackend:54 - Executor app-20190222015626-0020/2 removed: Command exited with code 1
2019-02-22 09:56:29 INFO StandaloneAppClient$ClientEndpoint:54 - Executor added: app-20190222015626-0020/3 on worker-20190111083714-172.20.1.1-45882 (172.20.1.1:45882) with 1 core(s)
2019-02-22 09:56:29 INFO BlockManagerMaster:54 - Removal of executor 2 requested
2019-02-22 09:56:29 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:54 - Asked to remove non-existent executor 2
2019-02-22 09:56:29 INFO StandaloneSchedulerBackend:54 - Granted executor ID app-20190222015626-0020/3 on hostPort 172.20.1.1:45882 with 1 core(s), 512.0 MB RAM
2019-02-22 09:56:29 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 2 from BlockManagerMaster.
2019-02-22 09:56:29 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/3 is now RUNNING
2019-02-22 09:56:31 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/3 is now EXITED (Command exited with code 1)
2019-02-22 09:56:31 INFO StandaloneSchedulerBackend:54 - Executor app-20190222015626-0020/3 removed: Command exited with code 1
2019-02-22 09:56:31 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 3 from BlockManagerMaster.
2019-02-22 09:56:31 INFO BlockManagerMaster:54 - Removal of executor 3 requested
2019-02-22 09:56:31 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:54 - Asked to remove non-existent executor 3
2019-02-22 09:56:31 INFO StandaloneAppClient$ClientEndpoint:54 - Executor added: app-20190222015626-0020/4 on worker-20190111083714-172.20.1.1-45882 (172.20.1.1:45882) with 1 core(s)
2019-02-22 09:56:31 INFO StandaloneSchedulerBackend:54 - Granted executor ID app-20190222015626-0020/4 on hostPort 172.20.1.1:45882 with 1 core(s), 512.0 MB RAM
2019-02-22 09:56:31 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/4 is now RUNNING
2019-02-22 09:56:33 INFO StandaloneAppClient$ClientEndpoint:54 - Executor updated: app-20190222015626-0020/4 is now EXITED (Command exited with code 1)
2019-02-22 09:56:33 INFO StandaloneSchedulerBackend:54 - Executor app-20190222015626-0020/4 removed: Command exited with code 1
2019-02-22 09:56:33 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 4 from BlockManagerMaster.
2019-02-22 09:56:33 INFO BlockManagerMaster:54 - Removal of executor 4 requested
From the log we can see that the application keeps requesting new executors, but each one exits unexpectedly. Further down, the log also contains a warning:
2019-02-22 09:42:58 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Analysis: this message shows that the tasks never acquired any resources.
Solution:
Case 1: insufficient resources (possibly CPU, possibly memory). In this case, adjust the memory (driver or executor) or the number of CPU cores.
For example, adjust the submission as follows; in many cases the executor memory is simply set larger than the memory actually available:
# ./spark-submit --class org.apache.spark.examples.SparkPi --executor-memory 512M --total-executor-cores 2 --driver-memory 512M /hadoop/spark/examples/jars/spark-examples_2.11-2.4.0.jar 100
Case 2: the one I actually hit. I have a Spark cluster plus a separate Spark client machine. Jobs submitted from inside the cluster ran fine, but the same job failed when submitted from the client, even though the machine had plenty of memory and CPU. The cause, surprisingly, was a bad hostname-to-IP mapping:
the Spark cluster was set up a while ago, and when I set up a new Spark client today I forgot to add the client's hostname and IP mapping on the cluster nodes. After adding the mapping, the job ran fine.
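As a reference, a minimal sketch of the fix; the hostname and IP below are hypothetical, substitute your own. Append the client's mapping to /etc/hosts on every cluster node (and make sure the client can resolve the cluster nodes as well):
# echo "172.20.1.2 spark-client" >> /etc/hosts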
Summary:
Problems like this usually come down to a few possible causes; check and rule them out one by one (a quick verification sketch follows the list):
(1) The node submitting the job cannot communicate with the worker nodes. After submission, a driver process starts on the submitting node and serves the job progress UI (by default on port 4040); the workers need to report progress back to the driver, so if the hostname or IP is configured incorrectly in the hosts file, this communication fails. Check that hostnames and IPs are mapped correctly.
(2) It may also simply be insufficient memory; adjust the memory settings as appropriate. In addition, check the web UI to make sure the worker nodes are in the ALIVE state.
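A quick way to check both points from the submitting node; the worker and master hostnames are hypothetical, and 8080 is the default web UI port of a standalone Master:
# ping -c 1 spark-worker1
# curl -s http://spark-master:8080 | grep -i alive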
2. The error log is as follows
19/04/08 23:47:19 ERROR ContextCleaner: Error cleaning broadcast 11700946
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:148)
at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:321)
at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:66)
at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:238)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$1.apply(ContextCleaner.scala:194)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$1.apply(ContextCleaner.scala:185)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:185)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:178)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:73)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
Alongside this, the log also reported java.lang.OutOfMemoryError: Java heap space.
Analysis: the log indicates Spark ran out of memory and fell into heavy GC; during a long GC pause the executor cannot respond, so the RPC between executor and driver times out.
Solution:
(1) Add hardware resources and increase executor memory;
(2) Increase the job's parallelism;
(3) In spark-defaults.conf, raise the executor heartbeat interval spark.executor.heartbeatInterval (it must remain well below spark.network.timeout), as sketched below.
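A minimal spark-defaults.conf sketch covering the three points above; the values are illustrative assumptions, tune them for your cluster:
spark.executor.memory 4g
spark.default.parallelism 200
spark.network.timeout 600s
spark.executor.heartbeatInterval 60s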