The Storage Module

The abstraction most often mentioned in Spark is the RDD, but the data an RDD works with is implemented and managed by the Storage module.

Overall Architecture of the Storage Module

1. The Storage Layer

On a single node, Spark manages storage in units of blocks. Each block can live in memory or on disk; the BlockManager manages both the in-memory store and the on-disk store, and each stored item is identified by its block ID.
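As a concrete example, the sketch below (app name and numbers are arbitrary, not from the text) caches an RDD so that each cached partition becomes a block in the local BlockManager, with the storage level deciding whether blocks go to memory, disk, or both:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object StorageLevelDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("storage-demo"))
    // Each cached partition is stored as a block (named rdd_<rddId>_<partition>)
    // by the local BlockManager; MEMORY_AND_DISK_SER lets blocks spill to disk
    // when they do not fit in memory.
    val rdd = sc.parallelize(1 to 1000).persist(StorageLevel.MEMORY_AND_DISK_SER)
    rdd.count() // materializes and stores the blocks
    sc.stop()
  }
}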
 

2. Architecture in a Cluster

2.1 Architecture

In a cluster, Spark manages blocks with a master-slave architecture:
  • Master: holds the metadata for every block (local and on slave nodes)
  • Slave: queries the master for block information and reports its own state back to the master
Note that this master is not the Master that assigns tasks in a Spark cluster; it is the Driver, the client that submits tasks. There is no standby design here: the driver client is a single point, and if it crashes the computation produces no result anyway. In Storage's cluster management the master role is simply played by the driver.
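Before turning to Spark's real code, here is a minimal self-contained sketch of the bookkeeping this master-slave split implies; the class names are illustrative, not Spark's actual ones:

import scala.collection.mutable

case class BlockManagerId(executorId: String, host: String, port: Int)

class BlockLocationRegistry {
  private val managers  = mutable.Set.empty[BlockManagerId]
  private val locations = mutable.Map.empty[String, mutable.Set[BlockManagerId]]

  // A slave (or the driver) registers its BlockManager with the master.
  def registerBlockManager(bm: BlockManagerId): Unit = managers += bm

  // A slave reports that it now holds a copy of a block.
  def updateBlockInfo(blockId: String, bm: BlockManagerId): Unit =
    locations.getOrElseUpdate(blockId, mutable.Set.empty) += bm

  // Anyone can ask the master where a block lives.
  def getLocations(blockId: String): Seq[BlockManagerId] =
    locations.getOrElse(blockId, mutable.Set.empty).toSeq
}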
 
When an executor runs a task, it fetches local blocks through its BlockManager; if a block is not found locally, it asks the master for the block's remote locations and fetches it from there. The following loop (from TorrentBroadcast.readBlocks) shows this pattern for broadcast pieces:
for (pid <- Random.shuffle(Seq.range(0, numBlocks))) {
  val pieceId = BroadcastBlockId(id, "piece" + pid)
  logDebug(s"Reading piece $pieceId of $broadcastId")
  // First try getLocalBytes because there is a chance that previous attempts to fetch the
  // broadcast blocks have already fetched some of the blocks. In that case, some blocks
  // would be available locally (on this executor).
  bm.getLocalBytes(pieceId) match {
    case Some(block) =>
      blocks(pid) = block
      releaseLock(pieceId)
    case None =>
      bm.getRemoteBytes(pieceId) match {
        case Some(b) =>
          if (checksumEnabled) {
            val sum = calcChecksum(b.chunks(0))
            if (sum != checksums(pid)) {
              throw new SparkException(s"corrupt remote block $pieceId of $broadcastId:" +
                s" $sum != ${checksums(pid)}")
            }
          }
          // We found the block from remote executors/driver's BlockManager, so put the block
          // in this executor's BlockManager.
          if (!bm.putBytes(pieceId, b, StorageLevel.MEMORY_AND_DISK_SER, tellMaster = true)) {
            throw new SparkException(
              s"Failed to store $pieceId of $broadcastId in local BlockManager")
          }
          blocks(pid) = b
        case None =>
          throw new SparkException(s"Failed to get $pieceId of $broadcastId")
      }
  }
}

2.2 How the Executor Locates a Block

 

The unique block ID:

broadcast_0_piece0
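This string is simply the name of a BroadcastBlockId (broadcast variable 0, torrent piece 0); a quick check using Spark's public storage classes:

import org.apache.spark.storage.BroadcastBlockId

val pieceId = BroadcastBlockId(0, "piece0")
assert(pieceId.name == "broadcast_0_piece0") // matches the ID shown above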
The executor asks the master for this block ID's Location, i.e. the set of BlockManagerIds that hold it:
/** Get locations of the blockId from the driver */
def getLocations(blockId: BlockId): Seq[BlockManagerId] = {
  driverEndpoint.askWithRetry[Seq[BlockManagerId]](GetLocations(blockId))
}
The unique BlockManagerId:

BlockManagerId(driver, 192.168.121.101, 55153, None)

  • Executor ID: the executor's ID; for the driver this is simply driver
  • Host: the executor's/driver's IP
  • Port: the executor's/driver's port

Every executor, and the driver, generates its own unique BlockManagerId.

2.3 How the Executor Fetches a Block's Contents

def getRemoteBytes(blockId: BlockId): Option[ChunkedByteBuffer] = {
  logDebug(s"Getting remote block $blockId")
  require(blockId != null, "BlockId is null")
  var runningFailureCount = 0
  var totalFailureCount = 0
  val locations = getLocations(blockId)
  val maxFetchFailures = locations.size
  var locationIterator = locations.iterator
  while (locationIterator.hasNext) {
    val loc = locationIterator.next()
    logDebug(s"Getting remote block $blockId from $loc")
    val data = try {
      blockTransferService.fetchBlockSync(
        loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer()
    } catch {
      case NonFatal(e) =>
        runningFailureCount += 1
        totalFailureCount += 1

        if (totalFailureCount >= maxFetchFailures) {
          // Give up trying anymore locations. Either we've tried all of the original locations,
          // or we've refreshed the list of locations from the master, and have still
          // hit failures after trying locations from the refreshed list.
          logWarning(s"Failed to fetch block after $totalFailureCount fetch failures. " +
            s"Most recent failure cause:", e)
          return None
        }

        logWarning(s"Failed to fetch remote block $blockId " +
          s"from $loc (failed attempt $runningFailureCount)", e)

        // If there is a large number of executors then locations list can contain a
        // large number of stale entries causing a large number of retries that may
        // take a significant amount of time. To get rid of these stale entries
        // we refresh the block locations after a certain number of fetch failures
        if (runningFailureCount >= maxFailuresBeforeLocationRefresh) {
          locationIterator = getLocations(blockId).iterator
          logDebug(s"Refreshed locations from the driver " +
            s"after ${runningFailureCount} fetch failures.")
          runningFailureCount = 0
        }

        // This location failed, so we retry fetch from a different one by returning null here
        null
    }

    if (data != null) {
      return Some(new ChunkedByteBuffer(data))
    }
    logDebug(s"The value of block $blockId is null")
  }
  logDebug(s"Block $blockId not found")
  None
}

From the returned list of BlockManagerIds, the executor takes servers that hold the block one at a time, in order, and calls

blockTransferService.fetchBlockSync(
  loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer()

to fetch the block's contents synchronously. If the block cannot be fetched from one server, it falls back to the next server that holds the block.
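The retry pattern itself can be distilled into a few lines; a generic sketch, with names that are illustrative rather than Spark's:

import scala.util.{Success, Try}

// Try each location in order and return the first successful fetch;
// a failed fetch (exception) just moves on to the next location.
def fetchFirstAvailable[Loc, A](locations: Seq[Loc])(fetch: Loc => A): Option[A] =
  locations.iterator
    .map(loc => Try(fetch(loc)))
    .collectFirst { case Success(data) => data }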

2.4 Registering the BlockManager

When the driver initializes the SparkContext, BlockManager.initialize is invoked:
val idFromMaster = master.registerBlockManager(
  id,
  maxMemory,
  slaveEndpoint)

which registers the BlockManager through the master:

def registerBlockManager(
    blockManagerId: BlockManagerId,
    maxMemSize: Long,
    slaveEndpoint: RpcEndpointRef): BlockManagerId = {
  logInfo(s"Registering BlockManager $blockManagerId")
  val updatedId = driverEndpoint.askWithRetry[BlockManagerId](
    RegisterBlockManager(blockManagerId, maxMemSize, slaveEndpoint))
  logInfo(s"Registered BlockManager $updatedId")
  updatedId
}
In BlockManagerMaster we can see that the endpoint is hard-wired to the driver; in other words, the driver is the master by default.
Both the driver and the executors send a registration message to the driver master after initializing their BlockManager. The only difference is the ID used: the driver identifies itself as driver, while an executor identifies itself by its executor ID.
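Using the sketch classes from 2.1 for illustration (the addresses are examples, not real ones), the difference amounts to nothing more than the executorId field:

val registry = new BlockLocationRegistry
// The driver labels itself "driver"...
registry.registerBlockManager(BlockManagerId("driver", "192.168.121.101", 55153))
// ...while executors use their executor IDs.
registry.registerBlockManager(BlockManagerId("0", "192.168.121.102", 56324))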

2.5 The Driver Master's Endpoint

As described in the previous subsection, both the driver and the executors send messages to the master on the driver. When SparkEnv.create runs on the driver and on each executor, it initializes a BlockManagerMaster:
val blockManagerMaster = new BlockManagerMaster(registerOrLookupEndpoint(
  BlockManagerMaster.DRIVER_ENDPOINT_NAME,
  new BlockManagerMasterEndpoint(rpcEnv, isLocal, conf, listenerBus)),
  conf, isDriver)

which registers or looks up an endpoint:

def registerOrLookupEndpoint(
    name: String, endpointCreator: => RpcEndpoint):
  RpcEndpointRef = {
  if (isDriver) {
    logInfo("Registering " + name)
    rpcEnv.setupEndpoint(name, endpointCreator)
  } else {
    RpcUtils.makeDriverRef(name, conf, rpcEnv)
  }
}

As the code shows, an RPC endpoint is only set up when isDriver is true; the default RPC environment is Netty, and the endpoint is named BlockManagerMaster:

spark://BlockManagerMaster@192.168.121.101:40978  
Every driver and executor sends its messages to the master at port 40978.

2.6 Message Formats Between the Master and Executors

Each case in the code below is one of the message types exchanged between the master and the executors:
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  case RegisterBlockManager(blockManagerId, maxMemSize, slaveEndpoint) =>
    context.reply(register(blockManagerId, maxMemSize, slaveEndpoint))

  case _updateBlockInfo @
      UpdateBlockInfo(blockManagerId, blockId, storageLevel, deserializedSize, size) =>
    context.reply(updateBlockInfo(blockManagerId, blockId, storageLevel, deserializedSize, size))
    listenerBus.post(SparkListenerBlockUpdated(BlockUpdatedInfo(_updateBlockInfo)))

  case GetLocations(blockId) =>
    context.reply(getLocations(blockId))

  case GetLocationsMultipleBlockIds(blockIds) =>
    context.reply(getLocationsMultipleBlockIds(blockIds))

  case GetPeers(blockManagerId) =>
    context.reply(getPeers(blockManagerId))

  case GetExecutorEndpointRef(executorId) =>
    context.reply(getExecutorEndpointRef(executorId))

  case GetMemoryStatus =>
    context.reply(memoryStatus)

  case GetStorageStatus =>
    context.reply(storageStatus)

  case GetBlockStatus(blockId, askSlaves) =>
    context.reply(blockStatus(blockId, askSlaves))

  case GetMatchingBlockIds(filter, askSlaves) =>
    context.reply(getMatchingBlockIds(filter, askSlaves))

  case RemoveRdd(rddId) =>
    context.reply(removeRdd(rddId))

  case RemoveShuffle(shuffleId) =>
    context.reply(removeShuffle(shuffleId))

  case RemoveBroadcast(broadcastId, removeFromDriver) =>
    context.reply(removeBroadcast(broadcastId, removeFromDriver))

  case RemoveBlock(blockId) =>
    removeBlockFromWorkers(blockId)
    context.reply(true)

  case RemoveExecutor(execId) =>
    removeExecutor(execId)
    context.reply(true)

  case StopBlockManagerMaster =>
    context.reply(true)
    stop()

  case BlockManagerHeartbeat(blockManagerId) =>
    context.reply(heartbeatReceived(blockManagerId))

  case HasCachedBlocks(executorId) =>
    blockManagerIdByExecutor.get(executorId) match {
      case Some(bm) =>
        if (blockManagerInfo.contains(bm)) {
          val bmInfo = blockManagerInfo(bm)
          context.reply(bmInfo.cachedBlocks.nonEmpty)
        } else {
          context.reply(false)
        }
      case None => context.reply(false)
    }
}

2.7 Master Data Structures

The master records, for every executor, its BlockManagerId and a BlockManagerInfo; the BlockManagerInfo in turn tracks the status of each block that manager holds.
Executors proactively report their state through heartbeats, and the master updates the executor state held by its endpoint.
Block status changes inside an executor are also reported to the master; they only update the master's state and are not pushed to other executors.
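Sketching the two master-side maps named in the code above (blockManagerIdByExecutor and blockManagerInfo); BlockManagerInfo is simplified here, the real class lives in org.apache.spark.storage:

import scala.collection.mutable

// Simplified stand-ins for the master's bookkeeping structures.
case class BlockStatus(storageLevel: String, memSize: Long, diskSize: Long)

class BlockManagerInfo {
  // blockId -> status of that block on this BlockManager
  val blocks = mutable.Map.empty[String, BlockStatus]
  def cachedBlocks: collection.Set[String] = blocks.keySet
}

class MasterState {
  // executor ID -> the BlockManagerId it registered with
  val blockManagerIdByExecutor = mutable.Map.empty[String, BlockManagerId]
  // BlockManagerId -> info about every block that manager holds
  val blockManagerInfo = mutable.Map.empty[BlockManagerId, BlockManagerInfo]
}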
 
In the interaction between executors and the master, it is the executors that actively push and pull data; the master only tracks executor state plus the locations and status of blocks on the driver and executors, so its load is light. No availability guarantees were designed for the master: it normally runs on the node of the driver that submitted the job.
 
