BlockTransferService Implementation
Spark's block management relies on the methods defined by BlockTransferService to fetch blocks from remote nodes and to store blocks onto remote nodes. The BlockTransferService is brought in when the ShuffleClient is created (it extends ShuffleClient itself).
The class is defined as follows.
It declares the host name and port the service listens on, asynchronous batch fetching of blocks, asynchronous upload of a single block, synchronous (blocking) fetch and upload of a single block, and methods for initializing and shutting down the service.
package org.apache.spark.network

import java.io.Closeable
import java.nio.ByteBuffer

import scala.concurrent.{Future, Promise}
import scala.concurrent.duration.Duration
import scala.reflect.ClassTag

import org.apache.spark.internal.Logging
import org.apache.spark.network.buffer.{ManagedBuffer, NioManagedBuffer}
import org.apache.spark.network.shuffle.{BlockFetchingListener, ShuffleClient}
import org.apache.spark.storage.{BlockId, StorageLevel}
import org.apache.spark.util.ThreadUtils

private[spark]
abstract class BlockTransferService extends ShuffleClient with Closeable with Logging {

  /**
   * Initialize the transfer service by giving it the BlockDataManager that can be used to fetch
   * local blocks or put local blocks.
   */
  def init(blockDataManager: BlockDataManager): Unit

  /**
   * Tear down the transfer service.
   */
  def close(): Unit

  /**
   * Port number the service is listening on, available only after [[init]] is invoked.
   */
  def port: Int

  /**
   * Host name the service is listening on, available only after [[init]] is invoked.
   */
  def hostName: String

  /**
   * Fetch a sequence of blocks from a remote node asynchronously,
   * available only after [[init]] is invoked.
   *
   * Note that this API takes a sequence so the implementation can batch requests, and does not
   * return a future so the underlying implementation can invoke onBlockFetchSuccess as soon as
   * the data of a block is fetched, rather than waiting for all blocks to be fetched.
   */
  override def fetchBlocks(
      host: String,
      port: Int,
      execId: String,
      blockIds: Array[String],
      listener: BlockFetchingListener): Unit

  /**
   * Upload a single block to a remote node, available only after [[init]] is invoked.
   */
  def uploadBlock(
      hostname: String,
      port: Int,
      execId: String,
      blockId: BlockId,
      blockData: ManagedBuffer,
      level: StorageLevel,
      classTag: ClassTag[_]): Future[Unit]

  /**
   * A special case of [[fetchBlocks]], as it fetches only one block and is blocking.
   *
   * It is also only available after [[init]] is invoked.
   */
  def fetchBlockSync(host: String, port: Int, execId: String, blockId: String): ManagedBuffer = {
    // A monitor for the thread to wait on.
    val result = Promise[ManagedBuffer]()
    fetchBlocks(host, port, execId, Array(blockId),
      new BlockFetchingListener {
        override def onBlockFetchFailure(blockId: String, exception: Throwable): Unit = {
          result.failure(exception)
        }

        override def onBlockFetchSuccess(blockId: String, data: ManagedBuffer): Unit = {
          val ret = ByteBuffer.allocate(data.size.toInt)
          ret.put(data.nioByteBuffer())
          ret.flip()
          result.success(new NioManagedBuffer(ret))
        }
      })
    ThreadUtils.awaitResult(result.future, Duration.Inf)
  }

  /**
   * Upload a single block to a remote node, available only after [[init]] is invoked.
   *
   * This method is similar to [[uploadBlock]], except this one blocks the thread
   * until the upload finishes.
   */
  def uploadBlockSync(
      hostname: String,
      port: Int,
      execId: String,
      blockId: BlockId,
      blockData: ManagedBuffer,
      level: StorageLevel,
      classTag: ClassTag[_]): Unit = {
    val future = uploadBlock(hostname, port, execId, blockId, blockData, level, classTag)
    ThreadUtils.awaitResult(future, Duration.Inf)
  }
}
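To see how the synchronous helper gets used in practice: a typical in-Spark caller (BlockManager.getRemoteBytes follows roughly this pattern) walks the known locations of a block and calls fetchBlockSync against each until one succeeds. The sketch below only illustrates that pattern; it is not the actual BlockManager code, and the object and method names are made up. Because BlockTransferService is private[spark], such code can only live inside Spark's own packages.

import org.apache.spark.network.BlockTransferService
import org.apache.spark.network.buffer.ManagedBuffer
import org.apache.spark.storage.BlockManagerId

object RemoteBlockFetchExample {
  // Simplified: try each known replica location in turn and return the first successful fetch.
  def fetchFromAnyLocation(
      transferService: BlockTransferService,
      blockId: String,
      locations: Seq[BlockManagerId]): Option[ManagedBuffer] = {
    locations.iterator.map { loc =>
      try {
        // Blocks until this single block has been fetched from the chosen location.
        Some(transferService.fetchBlockSync(loc.host, loc.port, loc.executorId, blockId))
      } catch {
        case _: Exception => None // this replica failed, fall through to the next location
      }
    }.collectFirst { case Some(buf) => buf }
  }
}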
The default BlockTransferService is NettyBlockTransferService, which is built on the Netty network framework and provides the actual network connections.
Its two key methods are fetchBlocks and uploadBlock, i.e. fetching block data from a remote node and pushing block data to a remote node.
fetchBlocks:

  // Fetch block data from a remote node, given the host name, port, executor id and block ids.
  override def fetchBlocks(
      host: String,
      port: Int,
      execId: String,
      blockIds: Array[String],
      listener: BlockFetchingListener): Unit = {
    logTrace(s"Fetch blocks from $host:$port (executor id $execId)")
    try {
      val blockFetchStarter = new RetryingBlockFetcher.BlockFetchStarter {
        override def createAndStart(blockIds: Array[String], listener: BlockFetchingListener) {
          // clientFactory keeps a pool of clients per (host, port): it reuses an active
          // connection to the target host and port, or creates a new one if needed.
          val client = clientFactory.createClient(host, port)
          new OneForOneBlockFetcher(client, appId, execId, blockIds.toArray, listener).start()
        }
      }

      val maxRetries = transportConf.maxIORetries()
      if (maxRetries > 0) {
        // Note this Fetcher will correctly handle maxRetries == 0; we avoid it just in case there's
        // a bug in this code. We should remove the if statement once we're sure of the stability.
        new RetryingBlockFetcher(transportConf, blockFetchStarter, blockIds, listener).start()
      } else {
        blockFetchStarter.createAndStart(blockIds, listener)
      }
    } catch {
      case e: Exception =>
        logError("Exception while beginning fetchBlocks", e)
        blockIds.foreach(listener.onBlockFetchFailure(_, e))
    }
  }
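fetchBlocks is callback-driven: the caller hands in a BlockFetchingListener and is notified once per block as the data arrives. A minimal sketch of a caller-side listener that turns those callbacks into one Future per block id is shown below; the helper name and wrapping object are hypothetical, not Spark source.

import java.nio.ByteBuffer

import scala.concurrent.{Future, Promise}

import org.apache.spark.network.BlockTransferService
import org.apache.spark.network.buffer.{ManagedBuffer, NioManagedBuffer}
import org.apache.spark.network.shuffle.BlockFetchingListener

object BlockFetchFuturesExample {
  // Hypothetical helper: start an async fetch and expose one Future per requested block id.
  def fetchAsFutures(
      transferService: BlockTransferService,
      host: String,
      port: Int,
      execId: String,
      blockIds: Array[String]): Map[String, Future[ManagedBuffer]] = {
    val promises = blockIds.map(id => id -> Promise[ManagedBuffer]()).toMap
    transferService.fetchBlocks(host, port, execId, blockIds,
      new BlockFetchingListener {
        // Invoked as soon as the data for one block arrives; no waiting for the other blocks.
        override def onBlockFetchSuccess(blockId: String, data: ManagedBuffer): Unit = {
          // Copy the bytes out, since the ManagedBuffer may be released after this callback.
          val copy = ByteBuffer.allocate(data.size.toInt)
          copy.put(data.nioByteBuffer())
          copy.flip()
          promises(blockId).success(new NioManagedBuffer(copy))
        }

        // Invoked once per block that could not be fetched.
        override def onBlockFetchFailure(blockId: String, exception: Throwable): Unit = {
          promises(blockId).failure(exception)
        }
      })
    promises.map { case (id, p) => id -> p.future }
  }
}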
The fetch client is created through TransportClientFactory.createClient, whose relevant part is shown below:
public TransportClient createClient(String remoteHost, int remotePort)
    throws IOException, InterruptedException {
  // Get connection from the connection pool first.
  // If it is not found or not active, create a new one.
  // Use unresolved address here to avoid DNS resolution each time we creates a client.
  final InetSocketAddress unresolvedAddress =
    InetSocketAddress.createUnresolved(remoteHost, remotePort);

  // Create the ClientPool if we don't have it yet.
  ClientPool clientPool = connectionPool.get(unresolvedAddress);
  if (clientPool == null) {
    connectionPool.putIfAbsent(unresolvedAddress, new ClientPool(numConnectionsPerPeer));
    clientPool = connectionPool.get(unresolvedAddress);
  }

  int clientIndex = rand.nextInt(numConnectionsPerPeer);
  TransportClient cachedClient = clientPool.clients[clientIndex];

  if (cachedClient != null && cachedClient.isActive()) {
    // Make sure that the channel will not timeout by updating the last use time of the
    // handler. Then check that the client is still alive, in case it timed out before
    // this code was able to update things.
    TransportChannelHandler handler = cachedClient.getChannel().pipeline()
      .get(TransportChannelHandler.class);
    synchronized (handler) {
      handler.getResponseHandler().updateTimeOfLastRequest();
    }

    if (cachedClient.isActive()) {
      logger.trace("Returning cached connection to {}: {}",
        cachedClient.getSocketAddress(), cachedClient);
      return cachedClient;
    }
  }
In other words, the factory maintains a small array of clients per peer: when the required client does not exist (or is no longer active), a new network connection is created and stored back into that array. Before opening a new connection, the address is resolved and a warning is logged if DNS resolution is slow:
  final long preResolveHost = System.nanoTime();
  final InetSocketAddress resolvedAddress = new InetSocketAddress(remoteHost, remotePort);
  final long hostResolveTimeMs = (System.nanoTime() - preResolveHost) / 1000000;
  if (hostResolveTimeMs > 2000) {
    logger.warn("DNS resolution for {} took {} ms", resolvedAddress, hostResolveTimeMs);
  } else {
    logger.trace("DNS resolution for {} took {} ms", resolvedAddress, hostResolveTimeMs);
  }
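The pattern here is a small per-peer connection pool: each remote (host, port) maps to numConnectionsPerPeer slots, one slot is picked at random, and the cached client is reused only while it is still active; otherwise a new connection is created and cached. The excerpt above stops before the code that actually builds and stores the new client. The following self-contained sketch reproduces the pooling idea only, with hypothetical types rather than the real TransportClientFactory internals:

import java.util.concurrent.ConcurrentHashMap

import scala.util.Random

// Hypothetical stand-in for TransportClient: just enough to show the pooling logic.
trait PooledClient { def isActive: Boolean }

class SimplePeerPool(numConnectionsPerPeer: Int, connect: (String, Int) => PooledClient) {
  // One fixed-size slot array per (host, port), mirroring TransportClientFactory's ClientPool.
  private val pools = new ConcurrentHashMap[(String, Int), Array[PooledClient]]()

  def getClient(host: String, port: Int): PooledClient = {
    val key = (host, port)
    // Create the per-peer slot array on first use (putIfAbsent keeps the race benign).
    if (!pools.containsKey(key)) {
      pools.putIfAbsent(key, new Array[PooledClient](numConnectionsPerPeer))
    }
    val slots = pools.get(key)
    val idx = Random.nextInt(numConnectionsPerPeer)
    slots.synchronized {
      val cached = slots(idx)
      if (cached != null && cached.isActive) {
        cached // reuse an existing, still-active connection
      } else {
        val fresh = connect(host, port) // otherwise open a new connection...
        slots(idx) = fresh // ...and cache it in the chosen slot
        fresh
      }
    }
  }
}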
uploadBlock:
  override def uploadBlock(
      hostname: String,
      port: Int,
      execId: String,
      blockId: BlockId,
      blockData: ManagedBuffer,
      level: StorageLevel,
      classTag: ClassTag[_]): Future[Unit] = {
    val result = Promise[Unit]()
    val client = clientFactory.createClient(hostname, port)

    // StorageLevel and ClassTag are serialized as bytes using our JavaSerializer.
    // Everything else is encoded using our binary protocol.
    val metadata = JavaUtils.bufferToArray(serializer.newInstance().serialize((level, classTag)))

    // Convert or copy nio buffer into array in order to serialize it.
    val array = JavaUtils.bufferToArray(blockData.nioByteBuffer())

    // Send the message over Netty; the UploadBlock message serialized via toByteBuffer is the
    // RPC payload.
    client.sendRpc(new UploadBlock(appId, execId, blockId.toString, metadata, array).toByteBuffer,
      new RpcResponseCallback {
        override def onSuccess(response: ByteBuffer): Unit = {
          logTrace(s"Successfully uploaded block $blockId")
          result.success((): Unit)
        }

        override def onFailure(e: Throwable): Unit = {
          logError(s"Error while uploading block $blockId", e)
          result.failure(e)
        }
      })

    result.future
  }
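From the caller's point of view, uploadBlock returns a Future[Unit] that completes once the remote node acknowledges the RPC; uploadBlockSync simply awaits it. A minimal usage sketch follows, assuming an already-initialized transferService; the target host, port, executor id and block id are placeholders, and since BlockTransferService is private[spark] such code can only live inside Spark's own packages.

import java.nio.ByteBuffer

import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.reflect.ClassTag

import org.apache.spark.network.BlockTransferService
import org.apache.spark.network.buffer.NioManagedBuffer
import org.apache.spark.storage.{RDDBlockId, StorageLevel}

object UploadBlockExample {
  // Hypothetical caller: push one in-memory block to a peer executor and wait for the ack.
  def replicateBlock(transferService: BlockTransferService): Unit = {
    val payload = new NioManagedBuffer(ByteBuffer.wrap(Array[Byte](1, 2, 3)))

    val upload = transferService.uploadBlock(
      hostname = "remote-host",            // placeholder target host
      port = 7337,                         // placeholder target port
      execId = "1",                        // placeholder executor id
      blockId = RDDBlockId(0, 0),          // placeholder block id
      blockData = payload,
      level = StorageLevel.MEMORY_ONLY,
      classTag = ClassTag(classOf[Array[Byte]]))

    // Block until the remote node acknowledges the upload (what uploadBlockSync does internally).
    Await.result(upload, Duration.Inf)
  }
}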
Back in uploadBlock itself: as with fetchBlocks, the client is obtained from TransportClientFactory, and the data is then pushed to the remote node with client.sendRpc. Its first argument, new UploadBlock(appId, execId, blockId.toString, metadata, array).toByteBuffer, is the message body; the RpcResponseCallback passed as the second argument is the callback that completes the Promise with success or failure once the remote node responds.
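For completeness, on the receiving side this RPC is handled by NettyBlockRpcServer, which deserializes the (StorageLevel, ClassTag) metadata, wraps the block bytes back into a buffer and hands everything to BlockDataManager.putBlockData. The sketch below is written from that description rather than copied from the Spark source, so details may differ:

import java.nio.ByteBuffer

import scala.reflect.ClassTag

import org.apache.spark.network.BlockDataManager
import org.apache.spark.network.buffer.NioManagedBuffer
import org.apache.spark.serializer.Serializer
import org.apache.spark.storage.{BlockId, StorageLevel}

object UploadBlockHandlerSketch {
  // Sketch of the server-side counterpart to uploadBlock (simplified; error handling omitted).
  // Returns true if the local BlockDataManager stored the block.
  def handleUploadBlock(
      serializer: Serializer,
      blockManager: BlockDataManager,
      blockIdStr: String,
      metadata: Array[Byte],
      blockBytes: Array[Byte]): Boolean = {
    // Recover the StorageLevel and ClassTag that uploadBlock serialized with the JavaSerializer.
    val (level, classTag) = serializer.newInstance()
      .deserialize(ByteBuffer.wrap(metadata))
      .asInstanceOf[(StorageLevel, ClassTag[_])]

    // Wrap the raw bytes back into a ManagedBuffer and store the block locally.
    val data = new NioManagedBuffer(ByteBuffer.wrap(blockBytes))
    blockManager.putBlockData(BlockId(blockIdStr), data, level, classTag)
  }
}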