Spark Tuning (reposted)
Original English article: Tuning Spark
Because of the in-memory nature of most Spark computations, Spark programs can be bottlenecked by any resource in the cluster: CPU, network bandwidth, or memory. Most often, if the data fits in memory, the bottleneck is network bandwidth, but sometimes, you also need to do some tuning, such as storing RDDs in serialized form, to decrease memory usage. This guide will cover two main topics: data serialization, which is crucial for good network performance and can also reduce memory use, and memory tuning. We also sketch several smaller topics.
Translator's version: Because most Spark programs are "in-memory" by nature, any resource in the cluster (CPU, network bandwidth, or memory) can become the bottleneck of a Spark program. Usually, if the data fits entirely in memory, network bandwidth becomes the bottleneck, but you still need to tune your program, for example by storing RDD (Resilient Distributed Dataset) data in serialized form to reduce memory usage. This article covers two main topics: data serialization and memory tuning; data serialization not only improves network performance but also reduces memory usage. We also discuss several smaller topics.
Data Serialization
Serialization plays an important role in the performance of any distributed application. Formats that are slow to serialize objects into, or consume a large number of bytes, will greatly slow down the computation. Often, this will be the first thing you should tune to optimize a Spark application. Spark aims to strike a balance between convenience (allowing you to work with any Java type in your operations) and performance. It provides two serialization libraries: Java serialization, the default, which works with any class that implements java.io.Serializable but is relatively slow and produces large serialized output, and Kryo serialization, which is significantly faster and more compact but does not support all Serializable types and requires you to register in advance the classes you will use.
Translator's version:
Data Serialization. Serialization plays a very important role in the performance of distributed programs. A poor serialization scheme (one that serializes very slowly, or whose serialized output is very large) will greatly slow down the computation. In many cases this is the first thing to tune in a Spark application. Spark tries to strike a balance between convenience and performance, and provides two serialization libraries: standard Java serialization and Kryo.
You can switch to using Kryo by calling System.setProperty("spark.serializer", "spark.KryoSerializer") before creating your SparkContext. The only reason it is not the default is because of the custom registration requirement, but we recommend trying it in any network-intensive application.
Finally, to register your classes with Kryo, create a public class that extends spark.KryoRegistrator and set the spark.kryo.registrator system property to point to it, as follows:

    import com.esotericsoftware.kryo.Kryo

    class MyRegistrator extends spark.KryoRegistrator {
      override def registerClasses(kryo: Kryo) { kryo.register(classOf[MyClass1]) }
    }

The Kryo documentation describes more advanced registration options, such as adding custom serialization code. If your objects are large, you may also need to increase the spark.kryoserializer.buffer.mb system property. The default is 32, but this value needs to be large enough to hold the largest object you will serialize. Finally, if you don't register your classes, Kryo will still work, but it will have to store the full class name with each object, which is wasteful.
Translator's version:
You can switch to Kryo by calling System.setProperty("spark.serializer", "spark.KryoSerializer") before creating your SparkContext. The only reason Kryo is not the default is that it requires user registration; however, for any network-intensive application we recommend using it. Finally, to register your classes with Kryo, extend spark.KryoRegistrator and set the system property spark.kryo.registrator to point to that class, as shown above. The Kryo documentation describes many advanced registration options, such as adding user-defined serialization code. If your objects are very large, you may also need to increase the spark.kryoserializer.buffer.mb property. Its default value is 32, but it must be large enough to hold the largest object you will serialize. Finally, if you do not register your classes, Kryo will still work, but it will have to store the full class name with every object, which is very wasteful.
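To make the registration flow above concrete, here is a minimal, self-contained sketch. It assumes the pre-1.0 package layout used throughout this article (classes under the spark package); the MyClass1 case class, the application name, and the local master URL are placeholders for the example, not part of Spark's API.

    import com.esotericsoftware.kryo.Kryo
    import spark.{KryoRegistrator, SparkContext}

    // Placeholder data type we want Kryo to serialize efficiently.
    case class MyClass1(id: Int, name: String)

    // Registers our classes with Kryo so it does not have to write
    // the full class name alongside every serialized object.
    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo) {
        kryo.register(classOf[MyClass1])
      }
    }

    object KryoSetupExample {
      def main(args: Array[String]) {
        // Both properties must be set before the SparkContext is created.
        System.setProperty("spark.serializer", "spark.KryoSerializer")
        System.setProperty("spark.kryo.registrator", "MyRegistrator") // fully-qualified name if the class lives in a package
        val sc = new SparkContext("local", "KryoSetupExample")

        val data = sc.parallelize(1 to 1000).map(i => MyClass1(i, "name-" + i))
        data.cache()
        println(data.count())
      }
    }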
Memory Tuning
There are three considerations in tuning memory usage: the amount of memory used by your objects (you may want your entire dataset to fit in memory), the cost of accessing those objects, and the overhead of garbage collection (if you have high turnover in terms of objects). By default, Java objects are fast to access, but can easily consume a factor of 2-5x more space than the "raw" data inside their fields. This is due to several reasons: each distinct Java object carries an object header of roughly 16 bytes, Java Strings add bookkeeping overhead on top of the character data, common collection classes use wrapper objects (such as linked-list nodes or map entries) for every element, and collections of primitive types store their elements as boxed objects.
This section will discuss how to determine the memory usage of your objects, and how to improve it, either by changing your data structures or by storing data in a serialized format. We will then cover tuning Spark's cache size and the Java garbage collector.
Translator's version:
Memory Tuning. There are three considerations in memory tuning: the memory taken up by your objects (you may want all of your data to fit in memory), the cost of accessing those objects, and the overhead of garbage collection. In general, Java objects are fast to access, but they usually occupy 2-5x more space than the raw data inside their fields, mainly for the reasons described above.
This section discusses how to estimate the memory used by your objects and how to improve on it, either by changing your data structures or by storing data in serialized form. We then discuss how to tune Spark's cache size and the Java garbage collector.
Determining Memory Consumption
The best way to size the amount of memory consumption your dataset will require is to create an RDD, put it into cache, and look at the SparkContext logs on your driver program. The logs will tell you how much memory each partition is consuming, which you can aggregate to get the total size of the RDD. You will see messages like this:

    INFO BlockManagerMasterActor: Added rdd_0_1 in memory on mbk.local:50311 (size: 717.5 KB, free: 332.3 MB)

This means that partition 1 of RDD 0 consumed 717.5 KB.
Translator's version:
Determining Memory Consumption. The best way to determine how much memory your dataset needs is to create an RDD, put it into the cache, and then read the SparkContext logs of your driver program. The logs report how much memory each partition consumes; you can aggregate this information to get the total size of the RDD. The log messages look like this:

    INFO BlockManagerMasterActor: Added rdd_0_1 in memory on mbk.local:50311 (size: 717.5 KB, free: 332.3 MB)

This message means that partition 1 of RDD 0 consumed 717.5 KB of memory.
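As a sketch of the procedure described above (assuming an existing SparkContext named sc; the input path and partition count are placeholders):

    // Cache an RDD and force it to be computed, so the BlockManager logs one
    // "Added rdd_x_y in memory ... (size: ...)" line per cached partition.
    val lines = sc.textFile("/tmp/sample.txt", 4)
    lines.cache()
    lines.count()  // materializes the cache; then inspect the driver logs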
Tuning Data Structures
The first way to reduce memory consumption is to avoid the Java features that add overhead, such as pointer-based data structures and wrapper objects. There are several ways to do this: design your data structures to prefer arrays of objects and primitive types instead of the standard Java or Scala collection classes; avoid nested structures with lots of small objects and pointers where possible; consider using numeric IDs or enumeration objects instead of strings for keys; and if you have less than 32 GB of RAM, set the JVM flag -XX:+UseCompressedOops to make pointers four bytes instead of eight.
Translator's version:
Tuning Data Structures. The first way to reduce memory usage is to avoid Java features that add extra overhead, such as pointer-based data structures and the re-wrapping of objects. There are several ways to do this, as listed above.
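As an illustration of the kind of restructuring this section suggests, here is a small sketch (the field names and values are invented for the example) that replaces String keys and boxed, pointer-heavy collections with numeric IDs and primitive arrays:

    // Instead of something like Map[String, java.util.LinkedList[java.lang.Integer]]
    // (String keys, boxed values, and one small wrapper object per list node),
    // keep numeric IDs and primitive arrays, which store the data contiguously.
    val userIds: Array[Long] = Array(1L, 2L, 3L)
    val scoresByUser: Array[Array[Int]] = Array(
      Array(90, 85),    // scores for userIds(0)
      Array(70),        // scores for userIds(1)
      Array(100, 60)    // scores for userIds(2)
    )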
Serialized RDD Storage
When your objects are still too large to efficiently store despite this tuning, a much simpler way to reduce memory usage is to store them in serialized form, using the serialized StorageLevels in the RDD persistence API, such as MEMORY_ONLY_SER. Spark will then store each RDD partition as one large byte array. The only downside of storing data in serialized form is slower access times, due to having to deserialize each object on the fly. We highly recommend using Kryo if you want to cache data in serialized form, as it leads to much smaller sizes than Java serialization (and certainly than raw Java objects).
Translator's version:
Serialized RDD Storage. If, after the tuning above, your objects are still too large to store efficiently, a much simpler way to reduce memory usage is serialization: use the serialized StorageLevels of the RDD persistence API, such as MEMORY_ONLY_SER. Spark then stores each RDD partition as one large byte array. The only downside of serialization is slower access, because each object must be deserialized on the fly. If you want to cache data in serialized form, we strongly recommend Kryo; Kryo's serialized output is much smaller than standard Java serialization (and certainly smaller than the raw Java objects).
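A minimal sketch of caching in serialized form (assuming an existing SparkContext named sc and the pre-1.0 spark.storage package layout this article is based on; the input path is a placeholder):

    import spark.storage.StorageLevel

    // Each partition is stored as one serialized byte array rather than as
    // deserialized Java objects: slower to access, but far more compact.
    val words = sc.textFile("/tmp/sample.txt").flatMap(_.split(" "))
    words.persist(StorageLevel.MEMORY_ONLY_SER)
    words.count()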
Garbage Collection Tuning
JVM garbage collection can be a problem when you have large "churn" in terms of the RDDs stored by your program. (It is usually not a problem in programs that just read an RDD once and then run many operations on it.) When Java needs to evict old objects to make room for new ones, it will need to trace through all your Java objects and find the unused ones. The main point to remember here is that the cost of garbage collection is proportional to the number of Java objects, so using data structures with fewer objects (e.g. an array of Ints instead of a LinkedList) greatly lowers this cost. An even better method is to persist objects in serialized form, as described above: now there will be only one object (a byte array) per RDD partition. Before trying other techniques, the first thing to try if GC is a problem is to use serialized caching.
Translator's version:
Garbage Collection Tuning. JVM garbage collection can become a problem when your program keeps "churning" through the RDDs it stores (it is usually not a problem if you read an RDD once and then run many operations on it). When the JVM needs to evict old objects to make room for new ones, it must trace through all Java objects to find the ones that are no longer needed. The key point to remember is that the cost of garbage collection is proportional to the number of Java objects, so using data structures with fewer objects (for example an array of Ints instead of a LinkedList) significantly reduces this cost. An even better approach is to persist objects in serialized form, as described above: each RDD partition is then stored as a single object (a byte array). If garbage collection is a problem, the first thing to try, before any other technique, is serialized caching.
GC can also be a problem due to interference between your tasks' working memory (the amount of space needed to run the task) and the RDDs cached on your nodes. We will discuss how to control the space allocated to the RDD cache to mitigate this.

Measuring the Impact of GC
The first step in GC tuning is to collect statistics on how frequently garbage collection occurs and the amount of time spent on GC. This can be done by adding -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps to your SPARK_JAVA_OPTS environment variable. Next time your Spark job is run, you will see messages printed in the worker's logs each time a garbage collection occurs. Note these logs will be on your cluster's worker nodes (in the stdout files in their work directories), not on your driver program.
Translator's version:
The working memory of each task (the space needed to run the task) and the RDDs cached on a node can interfere with each other, and this interference can also cause garbage collection problems. Below we discuss how to control the space allocated to the RDD cache in order to mitigate this.
Measuring the Impact of GC. The first step in GC tuning is to collect statistics, including how often garbage collection occurs and how much time it takes. To collect them, add -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps to the SPARK_JAVA_OPTS environment variable. Once this is set, when the Spark job runs you will see a message in the logs each time a garbage collection occurs. Note that these logs live on the cluster's worker nodes, not in your driver program.
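For reference, the flags would typically be added before launching the workers, along these lines (a sketch; how you export environment variables depends on your deployment):

    # Enable GC logging in every Spark JVM started with these options.
    export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"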
Cache Size Tuning
One important configuration parameter for GC is the amount of memory that should be used for caching RDDs. By default, Spark uses 66% of the configured executor memory (spark.executor.memory or SPARK_MEM) to cache RDDs. This means that 33% of memory is available for any objects created during task execution. In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call System.setProperty("spark.storage.memoryFraction", "0.5"). Combined with the use of serialized caching, using a smaller cache should be sufficient to mitigate most of the garbage collection problems. In case you are interested in further tuning the Java GC, continue reading below.
Translator's version:
Cache Size Tuning. How much memory to use for caching RDDs is a very important configuration parameter for garbage collection. By default, Spark uses 66% of the configured executor memory (spark.executor.memory or SPARK_MEM) to cache RDDs, which means 33% of the memory remains available for objects created during task execution.
If your tasks slow down and the JVM is garbage-collecting frequently or running out of memory, lowering this value helps reduce memory consumption. To change it to, say, 50%, call System.setProperty("spark.storage.memoryFraction", "0.5"). Combined with serialized caching, a smaller cache is usually enough to mitigate most garbage collection problems. If you are interested in tuning the Java garbage collector further, keep reading below.
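Putting the knobs from this section together, a minimal sketch (the 0.5 fraction is only an example value, not a recommendation):

    // Both properties must be set before the SparkContext is created.
    System.setProperty("spark.storage.memoryFraction", "0.5")       // cap the RDD cache at 50% of executor memory
    System.setProperty("spark.serializer", "spark.KryoSerializer")  // cache blocks in compact serialized form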
Advanced GC Tuning
To further tune garbage collection, we first need to understand some basic information about memory management in the JVM: the Java heap is divided into a Young generation, which holds short-lived objects, and an Old generation, which holds objects with longer lifetimes. The Young generation is further divided into an Eden region and two Survivor regions. In simplified terms, when Eden fills up, a minor GC runs and the objects still alive are copied into a Survivor region; objects that survive long enough, or that do not fit in the Survivor space, are promoted to the Old generation, and when the Old generation is close to full a much more expensive full GC is triggered.
Translator's version:
Advanced GC Tuning. To tune garbage collection further, we need to understand some basics of how the JVM manages memory, as summarized above.
The goal of GC tuning in Spark is to ensure that only long-lived RDDs are stored in the Old generation and that the Young generation is sufficiently sized to store short-lived objects. This will help avoid full GCs to collect temporary objects created during task execution. Some steps which may be useful are: first, check the GC statistics for tasks that run multiple full GCs before completing, which indicates there is not enough memory for executing tasks. If the Old generation is close to full, reduce the amount of memory used for caching by lowering spark.storage.memoryFraction; it is better to cache fewer objects than to slow down task execution. If there are many minor collections but not many major GCs, allocating more memory for Eden will help: estimate how much memory each task needs (call it E) and set the Young generation size with -Xmn to roughly 4/3*E, the extra third accounting for the Survivor regions. Finally, monitor how the frequency and time taken by garbage collection change with each new setting.
Our experience suggests that the effect of GC tuning depends on your application and the amount of memory available. There are many more tuning options described online, but at a high level, managing how frequently full GC takes place can help in reducing the overhead.
Translator's version:
The goal of GC tuning in Spark is to ensure that only long-lived RDDs end up in the Old generation, while the Young generation is large enough to hold short-lived objects; this way, full GCs triggered by temporary objects created during task execution can be avoided. The steps that may help are outlined above.
Our experience shows that the effect of GC tuning depends on your application and the amount of memory available. Many more tuning options are described online, but at a high level, keeping the frequency of full GC under control is what helps most in reducing overhead.
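As a concrete illustration of the Eden-sizing advice, a hedged example of what one might add to SPARK_JAVA_OPTS (the 2g figure is purely illustrative; derive your own value from the estimated per-task memory requirement):

    # Example only: if each task is estimated to need about 1.5 GB of Eden,
    # size the Young generation at roughly 4/3 of that, i.e. about 2 GB.
    export SPARK_JAVA_OPTS="$SPARK_JAVA_OPTS -Xmn2g"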
Other Considerations

Level of Parallelism
Clusters will not be fully utilized unless you set the level of parallelism for each operation high enough. Spark automatically sets the number of "map" tasks to run on each file according to its size (though you can control it through optional parameters to SparkContext.textFile, etc), and for distributed "reduce" operations, such as groupByKey and reduceByKey, it uses the largest parent RDD's number of partitions. You can pass the level of parallelism as a second argument (see the spark.PairRDDFunctions documentation), or set the system property spark.default.parallelism to change the default. In general, we recommend 2-3 tasks per CPU core in your cluster.
Translator's version:
Other Considerations. Level of Parallelism. A cluster will not be fully utilized unless the level of parallelism of each operation is set high enough. Spark automatically sets the number of "map" tasks to run on each file according to its size (you can also control this through optional parameters to SparkContext.textFile and similar methods); for distributed "reduce" operations such as groupByKey and reduceByKey, it uses the number of partitions of the largest parent RDD. You can pass the level of parallelism as a second argument (see the spark.PairRDDFunctions documentation) or set the system property spark.default.parallelism to change the default. In general, we recommend 2-3 tasks per CPU core in the cluster.
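A small sketch of the two ways to raise parallelism mentioned above (assuming an existing SparkContext named sc; the partition count of 48 is only an illustration, roughly 3 tasks per core on a 16-core cluster):

    // Option 1: change the cluster-wide default (set before creating the SparkContext).
    System.setProperty("spark.default.parallelism", "48")

    // Option 2: pass the number of partitions directly to a shuffle operation.
    val words = sc.textFile("/tmp/words.txt").flatMap(_.split(" "))
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _, 48)  // run this reduce with 48 tasks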
Memory Usage of Reduce Tasks
Sometimes, you will get an OutOfMemoryError not because your RDDs don't fit in memory, but because the working set of one of your tasks, such as one of the reduce tasks in groupByKey, was too large. Spark's shuffle operations (sortByKey, groupByKey, reduceByKey, join, etc) build a hash table within each task to perform the grouping, which can often be large. The simplest fix here is to increase the level of parallelism, so that each task's input set is smaller. Spark can efficiently support tasks as short as 200 ms, because it reuses one worker JVM across all tasks and it has a low task launching cost, so you can safely increase the level of parallelism to more than the number of cores in your cluster.
Translator's version:
Memory Usage of Reduce Tasks. Sometimes you will get an OutOfMemory error, not because your RDDs do not fit in memory, but because the working set of one of your tasks is too large, for example a reduce task running groupByKey. Spark's "shuffle" operations (sortByKey, groupByKey, reduceByKey, join, etc.) build a hash table within each task to perform the grouping, and that table can be very large. The simplest fix is to increase the level of parallelism so that each task's input becomes smaller. Spark supports short tasks (around 200 ms) very efficiently, because it reuses a single JVM for all tasks on a worker, which keeps the task launch cost low; so you can safely increase the parallelism to far more than the number of CPU cores in the cluster.
Broadcasting Large Variables
Using the broadcast functionality available in SparkContext can greatly reduce the size of each serialized task, and the cost of launching a job over a cluster. If your tasks use any large object from the driver program inside of them (e.g. a static lookup table), consider turning it into a broadcast variable. Spark prints the serialized size of each task on the master, so you can look at that to decide whether your tasks are too large; in general tasks larger than about 20 KB are probably worth optimizing.

Summary
This has been a short guide to point out the main concerns you should know about when tuning a Spark application: most importantly, data serialization and memory tuning. For most programs, switching to Kryo serialization and persisting data in serialized form will solve most common performance issues. Feel free to ask on the Spark mailing list about other tuning best practices.
Translator's version:
Broadcasting Large Variables. Using the broadcast functionality of SparkContext can greatly reduce the size of each serialized task and the cost of launching a job on the cluster. If your tasks use a large object from the driver program (for example a static lookup table), consider turning it into a broadcast variable. Spark prints the serialized size of each task on the master, so you can use that to check whether your tasks are too large; in general, tasks larger than about 20 KB are probably worth optimizing.
Summary. This article has pointed out the main issues to pay attention to when tuning a Spark program, above all data serialization and memory tuning. For most programs, switching to Kryo serialization and persisting data in serialized form will solve the most common performance problems. Feel free to ask about other tuning best practices on the Spark mailing list.
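Finally, a minimal sketch of the broadcast pattern described above (assuming an existing SparkContext named sc; the lookup table contents are placeholders):

    // Ship a read-only lookup table to each worker once, instead of
    // serializing it into the closure of every single task.
    val countryNames = Map("CN" -> "China", "US" -> "United States")
    val lookup = sc.broadcast(countryNames)

    val codes = sc.parallelize(Seq("CN", "US", "FR"))
    val resolved = codes.map(code => lookup.value.getOrElse(code, "unknown"))
    println(resolved.collect().mkString(", "))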