Reading the Spark source: network (1)
Spark 1.6 replaces Akka with an RPC framework for the whole cluster built on Netty; Netty's memory management and NIO support should noticeably improve a Spark cluster's network transport. To make sense of this code I picked up two books, Netty in Action and 《Netty权威指南》 (Netty: The Definitive Guide), and read them alongside the Spark source, learning Netty while working through the Netty-related parts of Spark. That code mixes in a lot of Netty internals, so it is still fairly heavy reading.
The buffer module

The network module's buffer abstraction lives in org.apache.spark.network.buffer. ManagedBuffer provides an immutable byte-level view over data, and each subclass specifies where that data actually lives: a segment of a file, a Netty ByteBuf, or an NIO ByteBuffer.
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public abstract class ManagedBuffer {
  /** Number of bytes of the data. */
  public abstract long size();

  /**
   * Exposes this buffer's data as an NIO ByteBuffer. Changing the position and limit of the
   * returned ByteBuffer should not affect the content of this buffer.
   */
  // TODO: Deprecate this, usage may require expensive memory mapping or allocation.
  public abstract ByteBuffer nioByteBuffer() throws IOException;

  /**
   * Exposes this buffer's data as an InputStream. The underlying implementation does not
   * necessarily check for the length of bytes read, so the caller is responsible for making sure
   * it does not go over the limit.
   */
  public abstract InputStream createInputStream() throws IOException;

  /**
   * Increment the reference count by one if applicable.
   */
  public abstract ManagedBuffer retain();

  /**
   * If applicable, decrement the reference count by one and deallocate the buffer if the
   * reference count reaches zero.
   */
  public abstract ManagedBuffer release();

  /**
   * Convert the buffer into a Netty object, used to write the data out.
   */
  public abstract Object convertToNetty() throws IOException;
}
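The contract is deliberately small: a size, two read paths (NIO buffer or stream), plus optional reference counting. As a rough illustration of how a consumer is expected to use it (a hypothetical sketch, not code from Spark; readFully is a made-up helper, and it assumes the buffer fits in a byte[]):

import java.io.IOException;
import java.io.InputStream;

public final class ManagedBufferExample {
  // Hypothetical consumer: pins the buffer with retain(), reads it fully, then releases it.
  static byte[] readFully(ManagedBuffer buffer) throws IOException {
    buffer.retain();
    try (InputStream in = buffer.createInputStream()) {
      byte[] out = new byte[(int) buffer.size()];
      int pos = 0;
      while (pos < out.length) {
        int n = in.read(out, pos, out.length - pos);
        if (n == -1) {
          throw new IOException("Stream ended before " + out.length + " bytes were read");
        }
        pos += n;
      }
      return out;
    } finally {
      buffer.release();
    }
  }
}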
FileSegmentManagedBuffer exposes a segment (offset plus length) of a file:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

import com.google.common.base.Objects;
import com.google.common.io.ByteStreams;
import io.netty.channel.DefaultFileRegion;

import org.apache.spark.network.util.JavaUtils;
import org.apache.spark.network.util.LimitedInputStream;
import org.apache.spark.network.util.TransportConf;

public final class FileSegmentManagedBuffer extends ManagedBuffer {
  private final TransportConf conf;
  private final File file;
  private final long offset;
  private final long length;

  public FileSegmentManagedBuffer(TransportConf conf, File file, long offset, long length) {
    this.conf = conf;
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public long size() {
    return length;
  }

  @Override
  public ByteBuffer nioByteBuffer() throws IOException {
    FileChannel channel = null;
    try {
      channel = new RandomAccessFile(file, "r").getChannel();
      // Just copy the buffer if it's sufficiently small, as memory mapping has a high overhead.
      if (length < conf.memoryMapBytes()) {
        ByteBuffer buf = ByteBuffer.allocate((int) length);
        channel.position(offset);
        while (buf.remaining() != 0) {
          if (channel.read(buf) == -1) {
            throw new IOException(String.format("Reached EOF before filling buffer\n" +
              "offset=%s\nfile=%s\nbuf.remaining=%s",
              offset, file.getAbsoluteFile(), buf.remaining()));
          }
        }
        buf.flip();
        return buf;
      } else {
        return channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
      }
    } catch (IOException e) {
      try {
        if (channel != null) {
          long size = channel.size();
          throw new IOException("Error in reading " + this + " (actual file length " + size + ")",
            e);
        }
      } catch (IOException ignored) {
        // ignore
      }
      throw new IOException("Error in opening " + this, e);
    } finally {
      JavaUtils.closeQuietly(channel);
    }
  }

  @Override
  public InputStream createInputStream() throws IOException {
    FileInputStream is = null;
    try {
      is = new FileInputStream(file);
      ByteStreams.skipFully(is, offset);
      return new LimitedInputStream(is, length);
    } catch (IOException e) {
      try {
        if (is != null) {
          long size = file.length();
          throw new IOException("Error in reading " + this + " (actual file length " + size + ")",
            e);
        }
      } catch (IOException ignored) {
        // ignore
      } finally {
        JavaUtils.closeQuietly(is);
      }
      throw new IOException("Error in opening " + this, e);
    } catch (RuntimeException e) {
      JavaUtils.closeQuietly(is);
      throw e;
    }
  }

  @Override
  public ManagedBuffer retain() {
    return this;
  }

  @Override
  public ManagedBuffer release() {
    return this;
  }

  @Override
  public Object convertToNetty() throws IOException {
    if (conf.lazyFileDescriptor()) {
      return new LazyFileRegion(file, offset, length);
    } else {
      FileChannel fileChannel = new FileInputStream(file).getChannel();
      return new DefaultFileRegion(fileChannel, offset, length);
    }
  }

  public File getFile() { return file; }

  public long getOffset() { return offset; }

  public long getLength() { return length; }

  @Override
  public String toString() {
    return Objects.toStringHelper(this)
      .add("file", file)
      .add("offset", offset)
      .add("length", length)
      .toString();
  }
}
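Two details are worth calling out. First, nioByteBuffer() only memory-maps segments at or above conf.memoryMapBytes(); smaller segments are copied into a heap buffer, because setting up a mapping has a fixed cost that dwarfs a small read. A standalone sketch of the same decision, with a hypothetical hard-coded threshold standing in for the config value:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public final class ReadSegment {
  // Hypothetical threshold; in Spark this comes from conf.memoryMapBytes().
  private static final long MMAP_THRESHOLD = 2 * 1024 * 1024;

  /** Read [offset, offset + length) of a file: copy small segments, map large ones. */
  static ByteBuffer read(String path, long offset, long length) throws IOException {
    try (FileChannel channel = new RandomAccessFile(path, "r").getChannel()) {
      if (length < MMAP_THRESHOLD) {
        ByteBuffer buf = ByteBuffer.allocate((int) length);
        channel.position(offset);
        while (buf.remaining() > 0) {
          if (channel.read(buf) == -1) {
            throw new IOException("EOF before reading " + length + " bytes");
          }
        }
        buf.flip();
        return buf;
      } else {
        // A read-only mapping stays valid after the channel is closed.
        return channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
      }
    }
  }
}

Second, convertToNetty() hands the transport a FileRegion, which lets Netty write the segment with zero-copy transferTo() instead of pulling the bytes through user space; LazyFileRegion additionally postpones opening the file until the data is actually transferred. retain() and release() are no-ops here because the underlying file's lifetime is not reference counted.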
NettyManagedBuffer wraps a Netty ByteBuf and delegates reference counting to it:

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

import com.google.common.base.Objects;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;

public final class NettyManagedBuffer extends ManagedBuffer {
  private final ByteBuf buf;

  public NettyManagedBuffer(ByteBuf buf) {
    this.buf = buf;
  }

  @Override
  public long size() {
    return buf.readableBytes();
  }

  @Override
  public ByteBuffer nioByteBuffer() throws IOException {
    return buf.nioBuffer();
  }

  @Override
  public InputStream createInputStream() throws IOException {
    return new ByteBufInputStream(buf);
  }

  @Override
  public ManagedBuffer retain() {
    buf.retain();
    return this;
  }

  @Override
  public ManagedBuffer release() {
    buf.release();
    return this;
  }

  @Override
  public Object convertToNetty() throws IOException {
    return buf.duplicate();
  }

  @Override
  public String toString() {
    return Objects.toStringHelper(this)
      .add("buf", buf)
      .toString();
  }
}
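Here retain() and release() actually matter: the wrapped ByteBuf may come from Netty's pooled allocator, and releasing the last reference frees its memory. A small standalone demonstration of the delegation (plain Netty usage, not Spark code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class RefCountDemo {
  public static void main(String[] args) {
    ByteBuf buf = Unpooled.copiedBuffer("hello".getBytes());
    NettyManagedBuffer managed = new NettyManagedBuffer(buf);

    managed.retain();                  // refCnt: 1 -> 2
    System.out.println(buf.refCnt());  // 2
    managed.release();                 // refCnt: 2 -> 1
    managed.release();                 // refCnt: 1 -> 0, memory is freed
    System.out.println(buf.refCnt());  // 0
  }
}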
NioManagedBuffer wraps a java.nio.ByteBuffer; since NIO buffers are not reference counted, retain() and release() are no-ops:

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

import com.google.common.base.Objects;
import io.netty.buffer.ByteBufInputStream;
import io.netty.buffer.Unpooled;

public final class NioManagedBuffer extends ManagedBuffer {
  private final ByteBuffer buf;

  public NioManagedBuffer(ByteBuffer buf) {
    this.buf = buf;
  }

  @Override
  public long size() {
    return buf.remaining();
  }

  @Override
  public ByteBuffer nioByteBuffer() throws IOException {
    return buf.duplicate();
  }

  @Override
  public InputStream createInputStream() throws IOException {
    return new ByteBufInputStream(Unpooled.wrappedBuffer(buf));
  }

  @Override
  public ManagedBuffer retain() {
    return this;
  }

  @Override
  public ManagedBuffer release() {
    return this;
  }

  @Override
  public Object convertToNetty() throws IOException {
    return Unpooled.wrappedBuffer(buf);
  }

  @Override
  public String toString() {
    return Objects.toStringHelper(this)
      .add("buf", buf)
      .toString();
  }
}
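Note that nioByteBuffer() returns buf.duplicate() rather than buf itself, so a caller can consume the returned view without disturbing the wrapped buffer's position and limit, exactly as the ManagedBuffer Javadoc demands. A quick check of that behavior (standalone, not Spark code):

import java.io.IOException;
import java.nio.ByteBuffer;

public final class DuplicateDemo {
  public static void main(String[] args) throws IOException {
    NioManagedBuffer managed = new NioManagedBuffer(ByteBuffer.wrap(new byte[]{1, 2, 3, 4}));

    ByteBuffer view = managed.nioByteBuffer();
    view.get();                              // advance the duplicate's position
    view.get();

    System.out.println(view.remaining());    // 2: the view has been consumed
    System.out.println(managed.size());      // 4: the wrapped buffer is untouched
  }
}

The duplicate still shares the underlying bytes with the original, so the returned view should be treated as read-only.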