Spark source code analysis:

https://yq.aliyun.com/articles/28400?utm_campaign=wenzhang&utm_medium=article&utm_source=QQ-qun&utm_content=m_11999

Spark shuffle:

http://blog.csdn.net/johnny_lee/article/details/22619585

Spark java.lang.OutOfMemoryError: Java heap space

My cluster: 1 master, 11 slaves, each node has 6 GB memory.

My settings:
spark.executor.memory=4g, -Dspark.akka.frameSize=512
Here is the problem:

First, I read some data (2.19 GB) from HDFS into an RDD:
val imageBundleRDD = sc.newAPIHadoopFile(...)
Second, do something on this RDD:

val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})
Last, output to HDFS:

res.saveAsNewAPIHadoopFile(...)
When I run my program it shows:

.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]

I have a few suggestions (a combined configuration sketch follows the list):

  • If your nodes are configured to have 6g maximum for Spark (and are leaving a little for other processes), then use 6g rather than 4g (spark.executor.memory=6g). Make sure you're using as much memory as possible by checking the UI (it will say how much memory you're using).
  • Try using more partitions; you should have 2 - 4 per CPU. IME increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need far more than 4 per CPU; I've had to use 8000 partitions in some cases!
  • Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME reducing the mem frac often makes OOMs go away. UPDATE: From Spark 1.6 we apparently no longer need to play with these values; Spark will determine them automatically.
  • Similar to the above, but for the shuffle memory fraction (spark.shuffle.memoryFraction). If your job doesn't need much shuffle memory then set it to a lower value (this might cause your shuffles to spill to disk, which can have a catastrophic impact on speed). Sometimes, when it's a shuffle operation that's OOMing, you need to do the opposite, i.e. set it to something large like 0.8, or make sure you allow your shuffles to spill to disk (the default since 1.0.0).
  • Watch out for memory leaks; these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose this is to look out for the "task serialized as XXX bytes" messages in the logs; if XXX is larger than a few KB or more than a MB, you may have a memory leak. See http://stackoverflow.com/a/25270600/1586965
  • Related to the above: use broadcast variables if you really do need large objects.
  • If you are caching large RDDs and can sacrifice some access time, consider serialising the RDD (http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage), or even caching it on disk (which sometimes isn't that bad if using SSDs).
  • (Advanced) Related to the above, avoid String and heavily nested structures (like Map and nested case classes). If possible, try to use only primitive types and index all non-primitives, especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll your own serialisation - YOU will have the most information about how to efficiently pack your data into bytes, USE IT!
  • (Bit hacky) Again when caching, consider using a Dataset to cache your structure, as it will use more efficient serialisation. This should be regarded as a hack compared to the previous bullet point. Building your domain knowledge into your algo/serialisation can reduce memory/cache space by 100x or 1000x, whereas all a Dataset will likely give you is 2x - 5x in memory and 10x compressed (Parquet) on disk.
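
To make these suggestions concrete, here is a minimal sketch of how several of them could be wired together for a pre-1.6 cluster like the one in the question. It is not the asker's actual job: the app name, HDFS paths, the lookup map, the assumed 4 cores per worker, and the exact numbers are placeholders, and spark.storage.memoryFraction / spark.shuffle.memoryFraction only apply before Spark 1.6's unified memory manager.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object OomTuningSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("oom-tuning-sketch")                  // placeholder name
      .set("spark.executor.memory", "6g")               // use the full 6g if nothing else needs it
      .set("spark.storage.memoryFraction", "0.1")       // pre-1.6 knob; shrink it if you never cache
      .set("spark.shuffle.memoryFraction", "0.5")       // pre-1.6 knob; raise it if shuffles OOM
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    // 2 - 4 partitions per CPU: 11 workers x 4 cores (assumed) x 4 = 176 is a plausible start
    val input = sc.textFile("hdfs:///some/input", 176)

    // Broadcast a large read-only object instead of closing over it in every lambda
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    val result = input.map(line => (line, lookup.value.getOrElse(line, 0)))

    // If the result must be cached, prefer the serialised storage level to save heap
    result.persist(StorageLevel.MEMORY_ONLY_SER)
    result.saveAsTextFile("hdfs:///some/output")
    sc.stop()
  }
}

The point is less the specific numbers than the shape: give executors the memory the nodes can actually spare, split the work into many small partitions, keep big read-only data in broadcast variables, and cache in serialised form only when you must.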

http://spark.apache.org/docs/1.2.1/configuration.html

EDIT: (so I can find this by googling later) The following is also indicative of this problem:

java.lang.OutOfMemoryError : GC overhead limit exceeded

Answer 2:

Have a look at the startup scripts: a Java heap size is set there, and it looks like you're not setting this before running the Spark worker.

# Set SPARK_MEM if it isn't already set since we also use it for this process
SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM

# Set JAVA_OPTS to be able to load native libraries and to set heap size
JAVA_OPTS="$OUR_JAVA_OPTS"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"

You can find the documentation for the deploy scripts here.
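
If you are on the legacy standalone scripts quoted above, one way to raise that heap is to export the variable before the worker starts, typically in conf/spark-env.sh. The lines below are only a sketch for that old setup; on newer releases SPARK_WORKER_MEMORY and spark.executor.memory are the knobs to use instead of SPARK_MEM.

# conf/spark-env.sh (legacy standalone setup; adjust to what the node can spare)
export SPARK_MEM=4g              # becomes -Xms/-Xmx for the worker JVM in the script above
export SPARK_WORKER_MEMORY=4g    # memory the standalone worker may hand out to executors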

 
