Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task  in stage 0.0 failed  times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor  exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce

This error occurred while running Hive on Spark.

Solutions
a. Increase spark.yarn.executor.memoryOverhead, e.g. set spark.yarn.executor.memoryOverhead=512G (a stopgap); note that executor-memory + memoryOverhead must not exceed the memory available on the cluster.
b. The underlying cause is OS-level virtual-memory allocation: not much physical memory is actually in use, but the virtual-memory check reports OOM. The problem can therefore be worked around by disabling that check, i.e. setting yarn.nodemanager.vmem-check-enabled=false (a configuration sketch for both fixes follows below).
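Roughly what the two fixes look like in practice. The value 2048 below is an illustrative assumption (the property is interpreted in MB), not a number taken from the error above. For fix (a), the overhead can usually be raised for the current session from beeline before re-running the query:

    set spark.yarn.executor.memoryOverhead=2048;

For fix (b), the virtual-memory check is turned off in yarn-site.xml on every NodeManager, after which the NodeManagers need to be restarted:

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

Note that fix (a) directly addresses the physical-memory limit reported in the error, whereas fix (b) only matters when YARN kills the container for exceeding its virtual-memory allowance.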
