https://flink.apache.org/news/2015/09/16/off-heap-memory.html

 

Running data-intensive code in the JVM and making it well-behaved is tricky. Systems that put billions of data objects naively onto the JVM heap face unpredictable OutOfMemoryErrors and Garbage Collection stalls. Of course, you still want to keep your data in memory as much as possible, for speed and responsiveness of the processing applications. In that context, “off-heap” has become almost something like a magic word to solve these problems.

 

In this blog post, we will look at how Flink exploits off-heap memory.
The feature is part of the upcoming release, but you can try it out with the latest nightly builds. We will also give a few interesting insights into the behavior of Java’s JIT compiler for highly optimized methods and loops.

 

Why actually bother with off-heap memory?

Given that Flink already has sophisticated on-heap memory management, why do we even bother with off-heap memory? It is true that “out of memory” has been much less of a problem for Flink because of its heap memory management techniques. Nonetheless, there are a few good reasons to offer the possibility to move Flink’s managed memory out of the JVM heap:

  • Very large JVMs (100s of GBytes heap memory) tend to be tricky. It takes a long time to start them (allocate and initialize the heap), and garbage collection stalls can be huge (minutes). While newer incremental garbage collectors (like G1) mitigate this problem to some extent, an even better solution is to just make the heap much smaller and allocate Flink’s managed memory chunks outside the heap.

  • I/O and network efficiency: In many cases, we write MemorySegments to disk (spilling) or to the network (data transfer). Off-heap memory can be written/transferred with zero copies, while heap memory always incurs an additional memory copy.

  • Off-heap memory can actually be owned by other processes. That way, cached data survives process crashes (due to user code exceptions) and can be used for recovery. Flink does not exploit that, yet, but it is interesting future work.

Flink's traditional on-heap memory management already solves most of Java's “out of memory” and GC problems, so why would we still want the off-heap approach?

1. Very large JVMs take a long time to start, and garbage collection on them can be very costly.
2. Writing heap memory to disk or to the network requires at least one copy, while off-heap memory allows zero-copy transfers.
3. Off-heap memory can be shared across processes, so data is not lost when the JVM process crashes.

 

The opposite question is also valid. Why should Flink ever not use off-heap memory?

  • On-heap is easier and interplays better with tools. Some container environments and monitoring tools get confused when the monitored heap size does not remotely reflect the amount of memory used by the process.

  • Short lived memory segments are cheaper on the heap. Flink sometimes needs to allocate some short lived buffers, which is cheaper to do on the heap than off-heap.

  • Some operations are actually a bit faster on heap memory (or the JIT compiler understands them better).

Why doesn't Flink simply use off-heap memory everywhere?

The more powerful a mechanism is, the more trouble it tends to bring, so in the common case on-heap memory is good enough.

 

The off-heap Memory Implementation

Given that all memory intensive internal algorithms are already implemented against the MemorySegment, our implementation to switch to off-heap memory is actually trivial.
You can compare it to replacing all ByteBuffer.allocate(numBytes) calls with ByteBuffer.allocateDirect(numBytes).
In Flink’s case it meant that we made the MemorySegment abstract and added the HeapMemorySegment and OffHeapMemorySegment subclasses.
The OffHeapMemorySegment takes the off-heap memory pointer from a java.nio.DirectByteBuffer and implements its specialized access methods using sun.misc.Unsafe.
We also made a few adjustments to the startup scripts and the deployment code to make sure that the JVM is permitted enough off-heap memory (direct memory, -XX:MaxDirectMemorySize).

In terms of the memory management mechanism, using off-heap memory is not very different from using on-heap memory.

By analogy with NIO: ByteBuffer.allocate(numBytes) allocates heap memory, while ByteBuffer.allocateDirect(numBytes) allocates off-heap memory.

Flink makes MemorySegment abstract and derives two subclasses from it, HeapMemorySegment and OffHeapMemorySegment.

OffHeapMemorySegment uses off-heap memory in the form of a java.nio.DirectByteBuffer and manipulates that memory through the sun.misc.Unsafe interface.
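To make this concrete, here is a minimal sketch of the idea, not Flink’s actual MemorySegment code: an abstract segment with a heap-backed and a DirectByteBuffer-backed subclass, both accessed through sun.misc.Unsafe. It assumes a JDK 8-style environment where sun.misc.Unsafe and sun.nio.ch.DirectBuffer are still accessible, and the class and method names (SegmentSketch, HeapSegment, OffHeapSegment) are made up for illustration.

import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

// Illustrative sketch only -- not Flink's real MemorySegment hierarchy.
public abstract class SegmentSketch {

    static final Unsafe UNSAFE = loadUnsafe();

    public abstract long getLong(int offset);
    public abstract void putLong(int offset, long value);

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (Exception e) {
            throw new RuntimeException("sun.misc.Unsafe not accessible", e);
        }
    }

    // Heap variant: a byte[] addressed relative to the array base offset.
    static final class HeapSegment extends SegmentSketch {
        private final byte[] memory;

        HeapSegment(int size) {
            this.memory = new byte[size];
        }

        public long getLong(int offset) {
            return UNSAFE.getLong(memory, (long) Unsafe.ARRAY_BYTE_BASE_OFFSET + offset);
        }

        public void putLong(int offset, long value) {
            UNSAFE.putLong(memory, (long) Unsafe.ARRAY_BYTE_BASE_OFFSET + offset, value);
        }
    }

    // Off-heap variant: the raw address behind a DirectByteBuffer.
    static final class OffHeapSegment extends SegmentSketch {
        private final ByteBuffer buffer; // reference keeps the direct memory alive
        private final long address;

        OffHeapSegment(int size) {
            this.buffer = ByteBuffer.allocateDirect(size);
            this.address = ((sun.nio.ch.DirectBuffer) buffer).address();
        }

        public long getLong(int offset) {
            return UNSAFE.getLong(address + offset);
        }

        public void putLong(int offset, long value) {
            UNSAFE.putLong(address + offset, value);
        }
    }
}

As noted above, the off-heap variant only works if the JVM is granted enough direct memory, e.g. via -XX:MaxDirectMemorySize.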

 

Understanding the JIT and tuning the implementation

The MemorySegment was (before our change) a standalone class: it was final and had no subclasses. Via Class Hierarchy Analysis (CHA), the JIT compiler was able to determine that all of the accessor method calls go to one specific implementation. That way, all method calls can be perfectly de-virtualized and inlined, which is essential to performance, and the basis for all further optimizations (like vectorization of the calling loop).

With two different memory segments loaded at the same time, the JIT compiler can no longer perform the same level of optimization, which results in a noticeable difference in performance: a slowdown of about 2.7x in the benchmark from the original post.

 

This part is about performance.

The point is: if MemorySegment is a standalone class with no subclasses, access is efficient, because the JIT compiler knows exactly which method implementation every call targets and can optimize ahead of time (de-virtualize and inline). With two subclasses, the actual target is only known at run time, so those optimizations no longer apply.

Measured in practice, the performance gap is around 2.7x.
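The effect can be illustrated with a toy example like the one below (this is not the benchmark from the original post, and it deliberately skips JMH and warm-up handling, so the numbers are only indicative). As long as only one Segment implementation has ever been used, the JIT can treat s.get(i) as a monomorphic call and inline it; once a second implementation is exercised, that assumption is invalidated. All class names here are made up for illustration.

// Toy illustration of the de-virtualization effect (not the original post's
// benchmark): the hot loop calls an abstract accessor, and inlining depends on
// how many implementations the JIT has observed.
abstract class Segment {
    abstract long get(int i);
}

final class ArraySegment extends Segment {
    private final long[] data;

    ArraySegment(int size) {
        this.data = new long[size];
    }

    long get(int i) {
        return data[i];
    }
}

final class BufferSegment extends Segment {
    private final java.nio.ByteBuffer buf;

    BufferSegment(int size) {
        this.buf = java.nio.ByteBuffer.allocateDirect(size * 8);
    }

    long get(int i) {
        return buf.getLong(i * 8);
    }
}

public class DevirtualizationDemo {

    // Hot loop: whether get() can be de-virtualized and inlined depends on the
    // set of Segment implementations seen at this call site.
    static long sum(Segment s, int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += s.get(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        final int n = 1_000_000;
        Segment onHeap = new ArraySegment(n);

        // Phase 1: only one implementation has ever been used -- monomorphic call.
        long t0 = System.nanoTime();
        long r1 = sum(onHeap, n);
        System.out.printf("one implementation:  %d ms (sum=%d)%n",
                (System.nanoTime() - t0) / 1_000_000, r1);

        // Phase 2: a second implementation is exercised at the same call site,
        // so the single-implementation assumption no longer holds.
        Segment offHeap = new BufferSegment(n);
        t0 = System.nanoTime();
        long r2 = sum(onHeap, n) + sum(offHeap, n);
        System.out.printf("two implementations: %d ms (sum=%d)%n",
                (System.nanoTime() - t0) / 1_000_000, r2);
    }
}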

Solutions:

Approach 1: Make sure that only one memory segment implementation is ever loaded.

We re-structured the code a bit to make sure that all places that produce long-lived and short-lived memory segments instantiate the same MemorySegment subclass (Heap- or Off-Heap segment). Because the memory segment classes are instantiated through factories rather than directly, this was straightforward.

If only one of the subclasses is ever instantiated in the code and the other is never loaded, the JIT notices this and can still optimize; instantiating the segments through factories makes this easy to enforce.
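A possible shape for such a factory, reusing the illustrative SegmentSketch classes from the earlier sketch (the names SegmentFactory, MemoryManagerSketch, and allocate are again made up, not Flink’s real API): the heap/off-heap decision is taken once at startup, so the rest of the code only ever sees the factory interface and only one segment subclass is ever instantiated.

// Hypothetical factory sketch (not Flink's real API), reusing the SegmentSketch
// classes from the sketch above. The mode is decided once at startup, so only
// one segment subclass is ever instantiated.
interface SegmentFactory {
    SegmentSketch allocate(int size);
}

final class HeapSegmentFactory implements SegmentFactory {
    public SegmentSketch allocate(int size) {
        return new SegmentSketch.HeapSegment(size);
    }
}

final class OffHeapSegmentFactory implements SegmentFactory {
    public SegmentSketch allocate(int size) {
        return new SegmentSketch.OffHeapSegment(size);
    }
}

final class MemoryManagerSketch {
    private final SegmentFactory factory;

    MemoryManagerSketch(boolean useOffHeap) {
        // Chosen once; the factory of the other mode is never instantiated, so
        // its segment subclass is never created and the JIT keeps seeing a
        // single MemorySegment implementation.
        this.factory = useOffHeap ? new OffHeapSegmentFactory() : new HeapSegmentFactory();
    }

    SegmentSketch allocateSegment(int size) {
        return factory.allocate(size);
    }
}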

Approach 2: Write one segment that handles both heap and off-heap memory

We created a class HybridMemorySegment which handles transparently both heap- and off-heap memory. It can be initialized either with a byte array (heap memory), or with a pointer to a memory region outside the heap (off-heap memory).

The second approach is a HybridMemorySegment that handles both heap and off-heap memory, so no subclasses are needed.

There is a trick that makes it possible to handle both kinds of memory transparently.

See the original post for the details.
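The details are in the original post; the sketch below only illustrates one plausible way to get this transparency, based on the fact that sun.misc.Unsafe accessors take an (object, offset) pair: for heap memory the object is the backing byte[] and the offset is relative to the array base, for off-heap memory the object is null and the offset is the absolute address. The class HybridSegmentSketch and its methods are made up for illustration and are not Flink’s actual HybridMemorySegment.

import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

// Illustrative sketch of the hybrid idea: one class, one code path for both
// heap and off-heap memory.
public final class HybridSegmentSketch {

    private static final Unsafe UNSAFE = loadUnsafe();
    private static final long BYTE_ARRAY_BASE = Unsafe.ARRAY_BYTE_BASE_OFFSET;

    // For heap segments: the backing array; for off-heap segments: null.
    private final byte[] heapMemory;
    // For heap segments: the array base offset; for off-heap: the absolute address.
    private final long baseAddress;

    // Heap-backed segment.
    public HybridSegmentSketch(byte[] memory) {
        this.heapMemory = memory;
        this.baseAddress = BYTE_ARRAY_BASE;
    }

    // Off-heap segment backed by a direct buffer.
    public HybridSegmentSketch(ByteBuffer directBuffer) {
        this.heapMemory = null;
        this.baseAddress = ((sun.nio.ch.DirectBuffer) directBuffer).address();
    }

    // The same access code serves both cases: Unsafe interprets (null, absolute
    // address) as off-heap memory and (array, relative offset) as heap memory.
    public long getLong(int index) {
        return UNSAFE.getLong(heapMemory, baseAddress + index);
    }

    public void putLong(int index, long value) {
        UNSAFE.putLong(heapMemory, baseAddress + index, value);
    }

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (Exception e) {
            throw new RuntimeException("sun.misc.Unsafe not accessible", e);
        }
    }
}

Because only one segment class is loaded in this approach, the JIT’s class-hierarchy assumptions still hold, while each instance decides at construction time whether it is backed by heap or off-heap memory.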
