Advanced Spark
Spark source code analysis:
https://yq.aliyun.com/articles/28400?utm_campaign=wenzhang&utm_medium=article&utm_source=QQ-qun&utm_content=m_11999
Spark shuffle:
http://blog.csdn.net/johnny_lee/article/details/22619585
Spark java.lang.OutOfMemoryError: Java heap space
My cluster: 1 master, 11 slaves, each node has 6 GB memory.
My settings:
spark.executor.memory=4g, -Dspark.akka.frameSize=512
Here is the problem:
First, I read some data (2.19 GB) from HDFS into an RDD:
val imageBundleRDD = sc.newAPIHadoopFile(...)
Second, do something on this RDD:
val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})
Last, output to HDFS:
res.saveAsNewAPIHadoopFile(...)
When I run my program it shows:
.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]
Answer 1:
I have a few suggestions:
- If your nodes are configured to have 6g maximum for Spark (and are leaving a little for other processes), then use 6g rather than 4g:
spark.executor.memory=6g
Make sure you're using as much memory as possible by checking the UI (it will say how much memory you're using).
- Try using more partitions; you should have 2-4 per CPU. IME, increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need far more than 4 per CPU; I've had to use 8,000 partitions in some cases! See the sketch just below.
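A minimal sketch of raising the partition count; the core count and the partitions-per-core multiplier are assumptions for illustration, not figures from the question:
// 11 workers, assuming ~4 cores each and ~4 partitions per core
val numPartitions = 11 * 4 * 4
val repartitioned = imageBundleRDD.repartition(numPartitions)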
- Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME, reducing the memory fraction often makes OOMs go away. UPDATE: From Spark 1.6 we apparently no longer need to play with these values; Spark will determine them automatically.
- Similar to the above, but for the shuffle memory fraction. If your job doesn't need much shuffle memory, set it to a lower value (this might cause your shuffles to spill to disk, which can have a catastrophic impact on speed). Sometimes, when it's a shuffle operation that's OOMing, you need to do the opposite, i.e. set it to something large like 0.8, or make sure you allow your shuffles to spill to disk (the default since 1.0.0). A sketch of both settings follows.
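A sketch of both legacy (pre-1.6) settings; the values here are illustrative only:
// Static memory fractions; superseded by unified memory management in Spark 1.6+.
val conf = new org.apache.spark.SparkConf()
  .set("spark.storage.memoryFraction", "0.1") // low when you never cache()/persist()
  .set("spark.shuffle.memoryFraction", "0.5") // raise this if the shuffle itself OOMs
val sc = new org.apache.spark.SparkContext(conf)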
- Watch out for memory leaks; these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose this is to look for "task serialized as XXX bytes" in the logs; if XXX is larger than a few KB or more than an MB, you may have a memory leak. See http://stackoverflow.com/a/25270600/1586965
- Related to the above: use broadcast variables if you really do need large objects. The sketch below shows both the leak and the fix.
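A sketch with a hypothetical large driver-side table (rdd stands for any RDD[String]; loadBigTable is a made-up helper):
// Bad: `lookup` is captured by the closure and serialized with every task -
// exactly what inflates the "Serialized task ... bytes" figures in the logs above.
val lookup: Map[String, Int] = loadBigTable()
val bad = rdd.map(x => lookup.getOrElse(x, 0))
// Better: ship it once per executor as a broadcast variable.
val lookupBc = sc.broadcast(lookup)
val good = rdd.map(x => lookupBc.value.getOrElse(x, 0))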
- If you are caching large RDDs and can sacrifice some access time, consider serialising the RDD: http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage. Or even cache them on disk (which sometimes isn't that bad if you're using SSDs); see the snippet below.
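For instance, with the res RDD from the question (MEMORY_ONLY_SER and DISK_ONLY are standard Spark storage levels):
import org.apache.spark.storage.StorageLevel
// Serialized in-memory storage: slower to access, much more compact.
res.persist(StorageLevel.MEMORY_ONLY_SER)
// Or disk-backed caching, often acceptable on SSDs:
// res.persist(StorageLevel.DISK_ONLY)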
- (Advanced) Related to the above: avoid String and heavily nested structures (like Map and nested case classes). If possible, try to use only primitive types and index all non-primitives, especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll your own serialisation - YOU will have the most information about how to efficiently pack your data into bytes, USE IT! A sketch of the indexing idea follows below.
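Here is a sketch of the indexing idea with a hypothetical schema (Verbose, Compact, and labelIndex are made up for illustration):
// Verbose: String-heavy and nested - expensive to cache.
case class Verbose(label: String, coords: Map[String, Double])
// Compact: duplicated labels become one Int each; the Map becomes a
// primitive array in a fixed key order.
val labelIndex: Map[String, Int] = Map("cat" -> 0, "dog" -> 1) // built once, up front
case class Compact(labelId: Int, coords: Array[Double])
def compact(v: Verbose, keys: Seq[String]): Compact =
  Compact(labelIndex(v.label), keys.map(k => v.coords.getOrElse(k, 0.0)).toArray)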
- (Bit hacky) Again when caching, consider using a Dataset to cache your structure, as it will use more efficient serialisation. This should be regarded as a hack compared to the previous bullet point. Building your domain knowledge into your algo/serialisation can minimise memory/cache-space by 100x or 1000x, whereas all a Dataset will likely give you is 2x-5x in memory and 10x compressed (Parquet) on disk. See the example below.
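A minimal sketch using the Spark 2.x Dataset API (newer than the version in the question; Record is a made-up type):
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("cache-demo").getOrCreate()
import spark.implicits._
// Datasets cache in Spark's compact encoded form rather than as Java objects.
case class Record(id: Long, value: Double)
val ds = spark.createDataset(Seq(Record(1L, 0.5), Record(2L, 1.5)))
ds.cache()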
http://spark.apache.org/docs/1.2.1/configuration.html
EDIT: (So I can google myself easier) The following is also indicative of this problem:
java.lang.OutOfMemoryError : GC overhead limit exceeded
Answer 2:
Have a look at the startup scripts; a Java heap size is set there. It looks like you're not setting this before running the Spark worker.
# Set SPARK_MEM if it isn't already set since we also use it for this process
SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM
# Set JAVA_OPTS to be able to load native libraries and to set heap size
JAVA_OPTS="$OUR_JAVA_OPTS"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"
You can find the documentation for the deploy scripts here.
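For example, to override the 512m default from the script above before launching a worker (the exact launch command varies across Spark versions):
export SPARK_MEM=4g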