Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor  exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce

The error above was raised while running Hive on Spark.

Solutions
a. Increase spark.yarn.executor.memoryOverhead (e.g. set spark.yarn.executor.memoryOverhead=512, in MB; the original post's "512G" appears to be a typo). This is a stopgap: executor memory + memoryOverhead must not exceed the memory available on the cluster. A sketch of the session settings follows this list.
b. The root cause is OS-level virtual memory allocation: actual physical memory usage is low, but the virtual-memory check reports an OOM. The problem can therefore be worked around by disabling that check: set yarn.nodemanager.vmem-check-enabled=false.
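As a hedged illustration of fix (a): with Hive on Spark, spark.* properties can be set per-session from beeline before rerunning the failing statement. The values below are assumptions for the sketch, not recommendations; pick them so executor memory plus overhead stays inside YARN's per-container limit.

    -- Sketch only: raise the off-heap overhead YARN charges to the container.
    -- The legacy property takes a value in MB; Spark 2.3+ renames it
    -- spark.executor.memoryOverhead.
    set spark.executor.memory=8g;
    set spark.yarn.executor.memoryOverhead=1024;

For fix (b), yarn.nodemanager.vmem-check-enabled is a NodeManager setting, so it belongs in yarn-site.xml on every NodeManager, followed by a NodeManager restart. A minimal sketch:

    <!-- yarn-site.xml: stop the NodeManager from killing containers on the
         virtual-memory check (the physical-memory check stays active) -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>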

More related articles on "Hive - Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.":

  1. Container killed by YARN for exceeding memory limits (ERROR cluster.YarnScheduler: Lost executor 5 on worker01.hadoop.mobile.cn)

  2. [Repost] Memory Limits for Windows and Windows Server Releases

  3. Tuning YARN memory allocation for Hadoop jobs: Container [pid=108284,containerID=container_e19_1533108188813_12125_01_000002] is running beyond virtual memory limits. Current usage: 653.1 MB of 2 GB physical memory used. The fixes reported there: raise yarn.nodemanager.vmem-pmem-ratio (Hadoop's default is 2.1), or resize the map/reduce tasks; see the sketch after this list.

  4. Hive: "Error during job, obtaining debugging information" and "beyond physical memory limits" during INSERT

  5. Kafka: ZK+Kafka+Spark Streaming cluster setup (part 13): a packaged kafka + spark streaming job fails on submit with insufficient virtual memory (Container is running beyond virtual memory limits. Current usage: 119.5 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used)

  6. Spark job error: Container [...] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 5.0 GB of 6.3 GB virtual memory used. Killing container. (Spark 1.6.0, Scala 2.10)

  7. [hadoop] - Container [xxxx] is running beyond physical/virtual memory limits: by default, the virtual-memory limit is 2.1 times physical memory.

  8. Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits (Spark client mode, during spark-submit)

  9. Spark: yarn-client in Spark's YARN mode fails to initialize SparkContext, "Over usage of virtual memory"
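Several of the related posts above hit the virtual-memory variant of this error. The alternative they suggest to disabling the check outright is raising the virtual-to-physical ratio, yarn.nodemanager.vmem-pmem-ratio (Hadoop's default is 2.1). A minimal yarn-site.xml sketch; the 4.0 value is only an illustrative bump:

    <!-- yarn-site.xml: allow each container more virtual memory per unit of
         physical memory before the vmem check kills it (default ratio: 2.1) -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>4.0</value>
    </property>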
