Spark Source Code Analysis - The Storage Module
Original post: http://jerryshao.me/architecture/2013/10/08/spark-storage-module-analysis/
Background
I've been tied up with all sorts of chores lately and haven't had time to tend to my blog, so this Spark source code analysis series was put on hold halfway through. Having previously covered the deploy and scheduler modules, this post looks at another major module in Spark - the storage module.
When writing Spark programs we constantly deal with RDDs (Resilient Distributed Datasets), building applications on top of the transformation and action interfaces they provide. RDDs raise the level of abstraction and cleanly separate interface from implementation, so users need not care about what lies underneath. But an RDD only gives us the "form": where does the data we operate on actually live, and how is it read and written? What does its "body" look like? That is implemented and managed by the storage module, which we will now dissect.
Overall Architecture of the Storage Module
The storage module is divided into two layers:
- Communication layer: the storage module uses a master-slave structure for its communication layer; control messages and status messages between master and slaves all travel through this layer.
- Storage layer: the storage module has to store data on disk or in memory, and possibly replicate it to remote nodes; the storage layer implements all of this and exposes the corresponding interfaces.
Other modules that want to interact with the storage module go through a single unified class, BlockManager; any external class works with the storage module by calling the corresponding BlockManager interfaces.
Storage Module Communication Layer
Let's first look at the UML class diagram of the communication layer:
Next, let's look at the different roles these classes play on the master and the slaves:
The way BlockManager is created differs between master and slave:

- Master (client driver): BlockManagerMaster holds the actor of BlockManagerMasterActor and refs to all the BlockManagerSlaveActors.
- Slave (executor): BlockManagerMaster holds a ref to BlockManagerMasterActor and the actor of its own BlockManagerSlaveActor.

In both cases - BlockManagerMasterActor and BlockManagerSlaveActor - communication takes place between refs and their actor.
actor and ref:

actor and ref are two different kinds of actor references in Akka, created by actorOf and actorFor respectively. An actor is like the server side of a network service: it holds all the state, receives requests from clients, executes them, and returns the results. A ref is like the client side: it obtains results by sending requests to the server.
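To make the actor/ref distinction concrete, here is a minimal Akka-2.x-era sketch (not Spark code - the actor, URL, and message are made up for illustration, and the remote actorFor lookup additionally assumes remoting is configured):

import akka.actor.{Actor, ActorSystem, Props}

// A toy "server" actor that holds state and answers requests,
// playing the role BlockManagerMasterActor plays in Spark.
class EchoActor extends Actor {
  def receive = {
    case msg => sender ! ("echoed: " + msg)
  }
}

object ActorRefDemo {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("spark")
    // Server side: actorOf creates and starts the actor itself.
    val actor = system.actorOf(Props(new EchoActor), name = "EchoActor")
    // Client side: actorFor only looks up a reference to an existing actor,
    // by the same kind of URL that registerOrLookup() builds below.
    val ref = system.actorFor("akka://spark@127.0.0.1:7077/user/EchoActor")
    ref ! "hello" // the ref sends requests to the actor, like a client to a server
  }
}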
BlockManager wraps BlockManagerMaster and communicates through it. Spark creates a BlockManager on the client driver and on each executor, and all operations on the storage module go through these BlockManager instances.
The BlockManager object is created in SparkEnv; the creation process looks like this:
def registerOrLookup(name: String, newActor: => Actor): ActorRef = {
  if (isDriver) {
    logInfo("Registering " + name)
    actorSystem.actorOf(Props(newActor), name = name)
  } else {
    val driverHost: String = System.getProperty("spark.driver.host", "localhost")
    val driverPort: Int = System.getProperty("spark.driver.port", "7077").toInt
    Utils.checkHost(driverHost, "Expected hostname")
    val url = "akka://spark@%s:%s/user/%s".format(driverHost, driverPort, name)
    logInfo("Connecting to " + name + ": " + url)
    actorSystem.actorFor(url)
  }
}

val blockManagerMaster = new BlockManagerMaster(registerOrLookup(
  "BlockManagerMaster",
  new BlockManagerMasterActor(isLocal)))
val blockManager = new BlockManager(executorId, actorSystem, blockManagerMaster, serializer)
We can see that Spark creates the BlockManagerMasterActor actor on the client driver and a ref to it on each executor, and that each gets wrapped (via BlockManagerMaster) into the BlockManager on its side.
Messages in the Communication Layer
BlockManagerMasterActor

executor to client driver:

- RegisterBlockManager (after creating its BlockManager, an executor sends this to register itself with the client driver)
- HeartBeat
- UpdateBlockInfo (update information about a block)
- GetPeers (request the ids of other BlockManagers)
- GetLocations (get the ids of the BlockManagers holding a given block)
- GetLocationsMultipleBlockIds (get the BlockManager ids for a group of blocks)

client driver to client driver:

- GetLocations (get the ids of the BlockManagers holding a given block)
- GetLocationsMultipleBlockIds (get the BlockManager ids for a group of blocks)
- RemoveExecutor (remove the stored BlockManager of an executor that has died)
- StopBlockManagerMaster (stop the BlockManagerMasterActor on the client driver)
Some messages, such as GetLocations, are sent to the actor from both the executor side and the client driver side; others, such as RegisterBlockManager, are only sent from an executor-side ref to the actor on the client driver; meanwhile messages such as RemoveExecutor are only sent from the client-driver-side ref to the client-driver-side actor. For the details of where each message is sent, received, and handled, please read the code; I won't go through them one by one here.
BlockManagerSlaveActor

client driver to executor:

- RemoveBlock (remove a block)
- RemoveRdd (remove an RDD)
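These are ordinary Akka messages. As a hedged sketch of how a few of them might be declared (the field names are my assumption, inferred from the register() call shown later; consult the storage package of your Spark version for the real definitions):

import akka.actor.ActorRef

// Sketch only - BlockManagerId is Spark's own identifier class for a BlockManager.
case class RegisterBlockManager(
    blockManagerId: BlockManagerId,
    maxMemSize: Long,
    sender: ActorRef)

case class GetLocations(blockId: String)

case class RemoveBlock(blockId: String)

case object StopBlockManagerMaster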
The communication layer involves passing and handling many more control and status messages; those details are best read straight from the source, so I won't list them all here. Below I'll just briefly describe how the executor-side BlockManager starts up and completes its registration with the client driver.
Register BlockManager
We saw earlier how the BlockManager object is created. Once created, it needs to register itself with the client driver; let's walk through that flow:
First, BlockManager calls initialize() to initialize itself:
private def initialize() {
  master.registerBlockManager(blockManagerId, maxMemory, slaveActor)
  ...
  if (!BlockManager.getDisableHeartBeatsForTesting) {
    heartBeatTask = actorSystem.scheduler.schedule(0.seconds, heartBeatFrequency.milliseconds) {
      heartBeat()
    }
  }
}
Inside initialize(), the BlockManager first registers itself with the client driver through BlockManagerMaster, and also sets up a heartbeat timer that sends heartbeat messages periodically. Note that while registering, it passes its own slaveActor to the client driver; the client driver stores the slaveActor together with the corresponding BlockManagerInfo in a hash map, so that it can later send commands to the executor through that slaveActor.
BlockManagerMaster wraps the registration request in a RegisterBlockManager message and sends it to the BlockManagerMasterActor on the client driver, which calls register() to register the BlockManager:
private def register(id: BlockManagerId, maxMemSize: Long, slaveActor: ActorRef) {
  if (id.executorId == "<driver>" && !isLocal) {
    // Got a register message from the master node; don't register it
  } else if (!blockManagerInfo.contains(id)) {
    blockManagerIdByExecutor.get(id.executorId) match {
      case Some(manager) =>
        // A block manager of the same executor already exists.
        // This should never happen. Let's just quit.
        logError("Got two different block manager registrations on " + id.executorId)
        System.exit(1)
      case None =>
        blockManagerIdByExecutor(id.executorId) = id
    }
    blockManagerInfo(id) = new BlockManagerMasterActor.BlockManagerInfo(
      id, System.currentTimeMillis(), maxMemSize, slaveActor)
  }
}
Note that the client driver runs through the same procedure; the only difference is that at the final registration step, nothing is done if the id is "<driver>". As the code shows, the corresponding BlockManagerInfo object is created and kept in a hash map.
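For orientation, the register() code above touches two bookkeeping maps held by BlockManagerMasterActor. A rough sketch of their shape (names taken from the code above; the exact declarations vary across Spark versions, and BlockManagerId/BlockManagerInfo are Spark's own types):

import scala.collection.mutable

// Inside BlockManagerMasterActor, roughly:
// metadata for every registered BlockManager, including the slaveActor
// used to push commands (e.g. RemoveBlock) back to the executor
val blockManagerInfo = new mutable.HashMap[BlockManagerId, BlockManagerInfo]

// which BlockManager id belongs to which executor id
val blockManagerIdByExecutor = new mutable.HashMap[String, BlockManagerId]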
Storage Module Storage Layer
At the RDD level, an RDD is made up of partitions, and the transformations and actions we perform operate on those partitions. Inside the storage module, however, an RDD is viewed as a set of blocks, and RDDs are read and written at block granularity. Essentially a partition and a block are equivalent - they are the same thing seen from different angles. The smallest unit of storage in the Spark storage module is the block, and every operation is performed in units of blocks.
Let's first look at the UML class diagram of the storage layer:
When the BlockManager object is created, it instantiates MemoryStore and DiskStore objects for storing and retrieving blocks; in addition, initialize() creates a BlockManagerWorker that listens for remote block requests and handles them accordingly.
private[storage] val memoryStore: BlockStore = new MemoryStore(this, maxMemory)
private[storage] val diskStore: DiskStore =
  new DiskStore(this, System.getProperty("spark.local.dir", System.getProperty("java.io.tmpdir")))

private def initialize() {
  ...
  BlockManagerWorker.startBlockManagerWorker(this)
  ...
}
Let's now look at how blocks are stored and retrieved in DiskStore and MemoryStore.
How DiskStore Stores and Retrieves Blocks
DiskStore can be configured with multiple root folders. Spark creates its own directory under each of them, named spark-local-yyyyMMddHHmmss-xxxx (where xxxx is a random number), and all blocks are stored inside these directories. DiskStore calls createLocalDirs() when it is constructed to create the directories:
private def createLocalDirs(): Array[File] = {
  logDebug("Creating local directories at root dirs '" + rootDirs + "'")
  val dateFormat = new SimpleDateFormat("yyyyMMddHHmmss")
  rootDirs.split(",").map { rootDir =>
    var foundLocalDir = false
    var localDir: File = null
    var localDirId: String = null
    var tries = 0
    val rand = new Random()
    while (!foundLocalDir && tries < MAX_DIR_CREATION_ATTEMPTS) {
      tries += 1
      try {
        localDirId = "%s-%04x".format(dateFormat.format(new Date), rand.nextInt(65536))
        localDir = new File(rootDir, "spark-local-" + localDirId)
        if (!localDir.exists) {
          foundLocalDir = localDir.mkdirs()
        }
      } catch {
        case e: Exception =>
          logWarning("Attempt " + tries + " to create local dir " + localDir + " failed", e)
      }
    }
    if (!foundLocalDir) {
      logError("Failed " + MAX_DIR_CREATION_ATTEMPTS +
        " attempts to create local dir in " + rootDir)
      System.exit(ExecutorExitCode.DISK_STORE_FAILED_TO_CREATE_DIR)
    }
    logInfo("Created local directory at " + localDir)
    localDir
  }
}
Inside DiskStore every block is stored as one file, and a block is mapped to its file by hashing the block id. The mapping from block id to file path looks like this:
private def getFile(blockId: String): File = {
  logDebug("Getting file for block " + blockId)
  // Figure out which local directory it hashes to, and which subdirectory in that
  val hash = Utils.nonNegativeHash(blockId)
  val dirId = hash % localDirs.length
  val subDirId = (hash / localDirs.length) % subDirsPerLocalDir
  // Create the subdirectory if it doesn't already exist
  var subDir = subDirs(dirId)(subDirId)
  if (subDir == null) {
    subDir = subDirs(dirId).synchronized {
      val old = subDirs(dirId)(subDirId)
      if (old != null) {
        old
      } else {
        val newDir = new File(localDirs(dirId), "%02x".format(subDirId))
        newDir.mkdir()
        subDirs(dirId)(subDirId) = newDir
        newDir
      }
    }
  }
  new File(subDir, blockId)
}
The block id is hashed, and modulo arithmetic on the hash yields dirId and subDirId; the corresponding subDir is looked up in subDirs, creating it if it does not exist; finally a file handle is created with subDir as the path and the block id as the file name. DiskStore then uses this file handle to write the block to the file, as shown below:
override def putBytes(blockId: String, _bytes: ByteBuffer, level: StorageLevel) {
  // So that we do not modify the input offsets !
  // duplicate does not copy buffer, so inexpensive
  val bytes = _bytes.duplicate()
  logDebug("Attempting to put block " + blockId)
  val startTime = System.currentTimeMillis
  val file = createFile(blockId)
  val channel = new RandomAccessFile(file, "rw").getChannel()
  while (bytes.remaining > 0) {
    channel.write(bytes)
  }
  channel.close()
  val finishTime = System.currentTimeMillis
  logDebug("Block %s stored as %s file on disk in %d ms".format(
    blockId, Utils.bytesToString(bytes.limit), (finishTime - startTime)))
}
Retrieving a block is simple: locate the corresponding file and read it out:
override def getBytes(blockId: String): Option[ByteBuffer] = {
  val file = getFile(blockId)
  val bytes = getFileBytes(file)
  Some(bytes)
}
So storing or fetching a block in DiskStore boils down to mapping the block id to a file path and then accessing that file.
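As a worked example of that mapping, here is a small self-contained sketch of the same arithmetic getFile() performs (nonNegativeHash is approximated by masking the sign bit of hashCode, the root dirs are made up, and 64 subdirectories per local dir is assumed - the default of spark.diskStore.subDirectories in this era):

object BlockPathDemo {
  // Rough stand-in for Utils.nonNegativeHash (an assumption for this sketch).
  def nonNegativeHash(s: String): Int = s.hashCode & Int.MaxValue

  def main(args: Array[String]): Unit = {
    val localDirs = Array("/tmp/spark-local-a", "/tmp/spark-local-b") // made-up roots
    val subDirsPerLocalDir = 64

    val blockId = "rdd_3_5"
    val hash = nonNegativeHash(blockId)
    val dirId = hash % localDirs.length
    val subDirId = (hash / localDirs.length) % subDirsPerLocalDir

    // Same layout as getFile(): <localDir>/<subDirId as two hex digits>/<blockId>
    println("%s/%02x/%s".format(localDirs(dirId), subDirId, blockId))
  }
}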
How MemoryStore Stores and Retrieves Blocks
Whereas DiskStore has to hash the block id into a file path and store the block in the corresponding file, MemoryStore manages blocks much more simply: it maintains an internal hash map of all blocks, keyed by block id.
case class Entry(value: Any, size: Long, deserialized: Boolean)

private val entries = new LinkedHashMap[String, Entry](32, 0.75f, true)
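The constructor arguments matter here: passing true as the third argument puts java.util.LinkedHashMap into access order, so iteration runs from least to most recently used - exactly what lets MemoryStore pick LRU blocks to evict when it needs space. A quick stand-alone illustration:

import java.util.LinkedHashMap
import scala.collection.JavaConverters._

object AccessOrderDemo {
  def main(args: Array[String]): Unit = {
    val entries = new LinkedHashMap[String, Int](32, 0.75f, true) // accessOrder = true
    entries.put("rdd_0_0", 1)
    entries.put("rdd_0_1", 2)
    entries.put("rdd_0_2", 3)
    entries.get("rdd_0_0") // access moves this entry to the most-recently-used end
    // Prints rdd_0_1, rdd_0_2, rdd_0_0: eviction candidates come first.
    entries.keySet().asScala.foreach(println)
  }
}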
Putting a block into MemoryStore requires that there is enough memory to hold it; if there is not, the block is written out to a file instead. The code is shown below:
override def putBytes(blockId: String, _bytes: ByteBuffer, level: StorageLevel) {
  // Work on a duplicate - since the original input might be used elsewhere.
  val bytes = _bytes.duplicate()
  bytes.rewind()
  if (level.deserialized) {
    val values = blockManager.dataDeserialize(blockId, bytes)
    val elements = new ArrayBuffer[Any]
    elements ++= values
    val sizeEstimate = SizeEstimator.estimate(elements.asInstanceOf[AnyRef])
    tryToPut(blockId, elements, sizeEstimate, true)
  } else {
    tryToPut(blockId, bytes, bytes.limit, false)
  }
}
Inside tryToPut(), ensureFreeSpace() is first called to check whether the free memory can accommodate the block. If it can, the block is put into the hash map and managed there; if it cannot, dropFromMemory() is called to write the block out to a file.
private def tryToPut(blockId: String, value: Any, size: Long, deserialized: Boolean): Boolean = {
  // TODO: Its possible to optimize the locking by locking entries only when selecting blocks
  // to be dropped. Once the to-be-dropped blocks have been selected, and lock on entries has been
  // released, it must be ensured that those to-be-dropped blocks are not double counted for
  // freeing up more space for another block that needs to be put. Only then the actually dropping
  // of blocks (and writing to disk if necessary) can proceed in parallel.
  putLock.synchronized {
    if (ensureFreeSpace(blockId, size)) {
      val entry = new Entry(value, size, deserialized)
      entries.synchronized {
        entries.put(blockId, entry)
        currentMemory += size
      }
      if (deserialized) {
        logInfo("Block %s stored as values to memory (estimated size %s, free %s)".format(
          blockId, Utils.bytesToString(size), Utils.bytesToString(freeMemory)))
      } else {
        logInfo("Block %s stored as bytes to memory (size %s, free %s)".format(
          blockId, Utils.bytesToString(size), Utils.bytesToString(freeMemory)))
      }
      true
    } else {
      // Tell the block manager that we couldn't put it in memory so that it can drop it to
      // disk if the block allows disk storage.
      val data = if (deserialized) {
        Left(value.asInstanceOf[ArrayBuffer[Any]])
      } else {
        Right(value.asInstanceOf[ByteBuffer].duplicate())
      }
      blockManager.dropFromMemory(blockId, data)
      false
    }
  }
}
Fetching a block from MemoryStore, by contrast, is trivial: simply look up the value for the block id in the hash map.
override def getValues(blockId: String): Option[Iterator[Any]] = {
  val entry = entries.synchronized {
    entries.get(blockId)
  }
  if (entry == null) {
    None
  } else if (entry.deserialized) {
    Some(entry.value.asInstanceOf[ArrayBuffer[Any]].iterator)
  } else {
    val buffer = entry.value.asInstanceOf[ByteBuffer].duplicate() // Doesn't actually copy data
    Some(blockManager.dataDeserialize(blockId, buffer))
  }
}
Put or Get block through BlockManager
Having seen how DiskStore and MemoryStore store and retrieve blocks, the question is: do we interact with them directly, or is there a more abstract interface that shields us from the lower layers?
BlockManager provides the put() and get() functions, which let users store and fetch blocks without caring about the underlying implementation.
Let's first look at the implementation of put():
def put(blockId: String, values: ArrayBuffer[Any], level: StorageLevel,
        tellMaster: Boolean = true): Long = {
  ...
  // Remember the block's storage level so that we can correctly drop it to disk if it needs
  // to be dropped right after it got put into memory. Note, however, that other threads will
  // not be able to get() this block until we call markReady on its BlockInfo.
  val myInfo = {
    val tinfo = new BlockInfo(level, tellMaster)
    // Do atomically !
    val oldBlockOpt = blockInfo.putIfAbsent(blockId, tinfo)
    if (oldBlockOpt.isDefined) {
      if (oldBlockOpt.get.waitForReady()) {
        logWarning("Block " + blockId + " already exists on this machine; not re-adding it")
        return oldBlockOpt.get.size
      }
      // TODO: So the block info exists - but previous attempt to load it (?) failed. What do we do now ? Retry on it ?
      oldBlockOpt.get
    } else {
      tinfo
    }
  }
  val startTimeMs = System.currentTimeMillis
  // If we need to replicate the data, we'll want access to the values, but because our
  // put will read the whole iterator, there will be no values left. For the case where
  // the put serializes data, we'll remember the bytes, above; but for the case where it
  // doesn't, such as deserialized storage, let's rely on the put returning an Iterator.
  var valuesAfterPut: Iterator[Any] = null
  // Ditto for the bytes after the put
  var bytesAfterPut: ByteBuffer = null
  // Size of the block in bytes (to return to caller)
  var size = 0L
  myInfo.synchronized {
    logTrace("Put for block " + blockId + " took " + Utils.getUsedTimeMs(startTimeMs)
      + " to get into synchronized block")
    var marked = false
    try {
      if (level.useMemory) {
        // Save it just to memory first, even if it also has useDisk set to true; we will later
        // drop it to disk if the memory store can't hold it.
        val res = memoryStore.putValues(blockId, values, level, true)
        size = res.size
        res.data match {
          case Right(newBytes) => bytesAfterPut = newBytes
          case Left(newIterator) => valuesAfterPut = newIterator
        }
      } else {
        // Save directly to disk.
        // Don't get back the bytes unless we replicate them.
        val askForBytes = level.replication > 1
        val res = diskStore.putValues(blockId, values, level, askForBytes)
        size = res.size
        res.data match {
          case Right(newBytes) => bytesAfterPut = newBytes
          case _ =>
        }
      }
      // Now that the block is in either the memory or disk store, let other threads read it,
      // and tell the master about it.
      marked = true
      myInfo.markReady(size)
      if (tellMaster) {
        reportBlockStatus(blockId, myInfo)
      }
    } finally {
      // If we failed at putting the block to memory/disk, notify other possible readers
      // that it has failed, and then remove it from the block info map.
      if (!marked) {
        // Note that the remove must happen before markFailure otherwise another thread
        // could've inserted a new BlockInfo before we remove it.
        blockInfo.remove(blockId)
        myInfo.markFailure()
        logWarning("Putting block " + blockId + " failed")
      }
    }
  }
  logDebug("Put block " + blockId + " locally took " + Utils.getUsedTimeMs(startTimeMs))
  // Replicate block if required
  if (level.replication > 1) {
    val remoteStartTime = System.currentTimeMillis
    // Serialize the block if not already done
    if (bytesAfterPut == null) {
      if (valuesAfterPut == null) {
        throw new SparkException(
          "Underlying put returned neither an Iterator nor bytes! This shouldn't happen.")
      }
      bytesAfterPut = dataSerialize(blockId, valuesAfterPut)
    }
    replicate(blockId, bytesAfterPut, level)
    logDebug("Put block " + blockId + " remotely took " + Utils.getUsedTimeMs(remoteStartTime))
  }
  BlockManager.dispose(bytesAfterPut)
  return size
}
A put() operation breaks down into three main steps:

- Create a BlockInfo structure to hold the block's metadata, and lock it so the block cannot yet be accessed.
- Store the block in memory or on disk according to its storage level, then unlock it and mark it ready so it becomes accessible.
- Based on the block's replication count, decide whether to replicate the block to remote nodes.
Next, let's look at the implementation of get():
def get(blockId: String): Option[Iterator[Any]] = {
  val local = getLocal(blockId)
  if (local.isDefined) {
    logInfo("Found block %s locally".format(blockId))
    return local
  }
  val remote = getRemote(blockId)
  if (remote.isDefined) {
    logInfo("Found block %s remotely".format(blockId))
    return remote
  }
  None
}
get() first searches the local BlockManager for the block and returns it if found; if not, it sends requests to look for the block in the BlockManagers on other executors. Under normal circumstances Spark assigns tasks according to the distribution of blocks, so a task tends to run on a node that already holds its blocks and getLocal() finds what it needs; but when resources are constrained, Spark may schedule a task onto a node that does not hold the block, and the block must then be fetched through getRemote().
Let's look at getLocal() first:
def getLocal(blockId: String): Option[Iterator[Any]] = {
  logDebug("Getting local block " + blockId)
  val info = blockInfo.get(blockId).orNull
  if (info != null) {
    info.synchronized {
      // If another thread is writing the block, wait for it to become ready.
      if (!info.waitForReady()) {
        // If we get here, the block write failed.
        logWarning("Block " + blockId + " was marked as failure.")
        return None
      }
      val level = info.level
      logDebug("Level for block " + blockId + " is " + level)
      // Look for the block in memory
      if (level.useMemory) {
        logDebug("Getting block " + blockId + " from memory")
        memoryStore.getValues(blockId) match {
          case Some(iterator) =>
            return Some(iterator)
          case None =>
            logDebug("Block " + blockId + " not found in memory")
        }
      }
      // Look for block on disk, potentially loading it back into memory if required
      if (level.useDisk) {
        logDebug("Getting block " + blockId + " from disk")
        if (level.useMemory && level.deserialized) {
          diskStore.getValues(blockId) match {
            case Some(iterator) =>
              // Put the block back in memory before returning it
              // TODO: Consider creating a putValues that also takes in a iterator ?
              val elements = new ArrayBuffer[Any]
              elements ++= iterator
              memoryStore.putValues(blockId, elements, level, true).data match {
                case Left(iterator2) =>
                  return Some(iterator2)
                case _ =>
                  throw new Exception("Memory store did not return back an iterator")
              }
            case None =>
              throw new Exception("Block " + blockId + " not found on disk, though it should be")
          }
        } else if (level.useMemory && !level.deserialized) {
          // Read it as a byte buffer into memory first, then return it
          diskStore.getBytes(blockId) match {
            case Some(bytes) =>
              // Put a copy of the block back in memory before returning it. Note that we can't
              // put the ByteBuffer returned by the disk store as that's a memory-mapped file.
              // The use of rewind assumes this.
              assert(0 == bytes.position())
              val copyForMemory = ByteBuffer.allocate(bytes.limit)
              copyForMemory.put(bytes)
              memoryStore.putBytes(blockId, copyForMemory, level)
              bytes.rewind()
              return Some(dataDeserialize(blockId, bytes))
            case None =>
              throw new Exception("Block " + blockId + " not found on disk, though it should be")
          }
        } else {
          diskStore.getValues(blockId) match {
            case Some(iterator) =>
              return Some(iterator)
            case None =>
              throw new Exception("Block " + blockId + " not found on disk, though it should be")
          }
        }
      }
    }
  } else {
    logDebug("Block " + blockId + " not registered locally")
  }
  return None
}
getLocal() first obtains the BlockInfo for the given block id and reads the block's storage level from it. Depending on the storage level, getLocal() takes one of the following branches:

- level.useMemory == true: fetch the block from memory and return it; if it is not there, fall through to the next branch.
- level.useDisk == true:
  - level.useMemory == true: read the block from disk and also write it into memory, so the next access can be served straight from memory, then return the block.
  - level.useMemory == false: read the block from disk and return it.
- level.useDisk == false: the block was not found locally; return None.
Next, let's look at getRemote():
def getRemote(blockId: String): Option[Iterator[Any]] = {
  if (blockId == null) {
    throw new IllegalArgumentException("Block Id is null")
  }
  logDebug("Getting remote block " + blockId)
  // Get locations of block
  val locations = master.getLocations(blockId)
  // Get block from remote locations
  for (loc <- locations) {
    logDebug("Getting remote block " + blockId + " from " + loc)
    val data = BlockManagerWorker.syncGetBlock(
      GetBlock(blockId), ConnectionManagerId(loc.host, loc.port))
    if (data != null) {
      return Some(dataDeserialize(blockId, data))
    }
    logDebug("The value of block " + blockId + " is null")
  }
  logDebug("Block " + blockId + " not found")
  return None
}
getRemote() first obtains all the locations of the block, then sends requests to those remote nodes one by one; as soon as any remote node returns the block, the function returns without sending further requests.
That concludes our brief look at BlockManager's get() and put(); with these two functions, external classes can easily store and fetch block data.
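To make the interface concrete, here is a hypothetical usage sketch (the block id and the values are made up; I assume the BlockManager instance is the one held by SparkEnv and that the package names follow Spark 0.8 - this is an internal API, so treat it as illustration only):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.SparkEnv
import org.apache.spark.storage.StorageLevel

val blockManager = SparkEnv.get.blockManager

// Store a block under a made-up id: memory first, spilling to disk if needed.
val values = ArrayBuffer[Any]("a", "b", "c")
blockManager.put("test_block_0", values, StorageLevel.MEMORY_AND_DISK, tellMaster = true)

// Fetch it back: get() tries getLocal() first, then falls back to getRemote().
blockManager.get("test_block_0") match {
  case Some(iter) => iter.foreach(println)
  case None => println("block not found")
}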
How Partitions Map to Blocks
Everything inside the storage module operates on blocks, yet all computation inside RDDs is based on partitions. So how do partitions line up with blocks?
The core function of RDD computation is iterator():
final def iterator(split: Partition, context: TaskContext): Iterator[T] = {
  if (storageLevel != StorageLevel.NONE) {
    SparkEnv.get.cacheManager.getOrCompute(this, split, context, storageLevel)
  } else {
    computeOrReadCheckpoint(split, context)
  }
}
If the current RDD's storage level is not NONE, the RDD is backed by the BlockManager, so getOrCompute() in CacheManager is called to compute the RDD. This is the function where partitions and blocks meet: first a block id is constructed from the RDD id and the partition index (rdd_xx_xx), then the corresponding block is requested from the BlockManager.
- If the block exists, this RDD partition has been computed before and stored in the BlockManager, so it is simply fetched - no recomputation needed.
- If the block does not exist, the RDD's computeOrReadCheckpoint() is called to compute a new block, which is then stored in the BlockManager (see the getOrCompute() listing below).
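Concretely, the block id is nothing more than a string assembled from the two indices (values made up here):

// Partition 5 of the RDD with id 3 is stored under this block id:
val key = "rdd_%d_%d".format(3, 5) // "rdd_3_5"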
Note that computing and storing a block is blocking: if another thread needs the same block, it has to wait until that block finishes loading.
def getOrCompute[T](rdd: RDD[T], split: Partition, context: TaskContext, storageLevel: StorageLevel)
    : Iterator[T] = {
  val key = "rdd_%d_%d".format(rdd.id, split.index)
  logDebug("Looking for partition " + key)
  blockManager.get(key) match {
    case Some(values) =>
      // Partition is already materialized, so just return its values
      return values.asInstanceOf[Iterator[T]]
    case None =>
      // Mark the split as loading (unless someone else marks it first)
      loading.synchronized {
        if (loading.contains(key)) {
          logInfo("Another thread is loading %s, waiting for it to finish...".format(key))
          while (loading.contains(key)) {
            try { loading.wait() } catch { case _: Throwable => }
          }
          logInfo("Finished waiting for %s".format(key))
          // See whether someone else has successfully loaded it. The main way this would fail
          // is for the RDD-level cache eviction policy if someone else has loaded the same RDD
          // partition but we didn't want to make space for it. However, that case is unlikely
          // because it's unlikely that two threads would work on the same RDD partition. One
          // downside of the current code is that threads wait serially if this does happen.
          blockManager.get(key) match {
            case Some(values) =>
              return values.asInstanceOf[Iterator[T]]
            case None =>
              logInfo("Whoever was loading %s failed; we'll try it ourselves".format(key))
              loading.add(key)
          }
        } else {
          loading.add(key)
        }
      }
      try {
        // If we got here, we have to load the split
        logInfo("Partition %s not found, computing it".format(key))
        val computedValues = rdd.computeOrReadCheckpoint(split, context)
        // Persist the result, so long as the task is not running locally
        if (context.runningLocally) { return computedValues }
        val elements = new ArrayBuffer[Any]
        elements ++= computedValues
        blockManager.put(key, elements, storageLevel, true)
        return elements.iterator.asInstanceOf[Iterator[T]]
      } finally {
        loading.synchronized {
          loading.remove(key)
          loading.notifyAll()
        }
      }
  }
}
This is how an RDD's transformations and actions get connected to block data: although our operations are expressed at the partition level, every partition is ultimately mapped to a block, so in practice everything we do is storing, fetching, and processing blocks.
End
This post covered the two layers of the storage module: the communication layer and the storage layer. For the communication layer we sketched the class structure, the roles the classes play on each side, and the messages exchanged between those roles, along with the startup and registration details. For the storage layer we walked through the code that stores and retrieves blocks in DiskStore and MemoryStore, analyzed the put() and get() interfaces of BlockManager, and finally outlined how partitions in a Spark RDD relate to blocks in the BlockManager and how blocks are exchanged.
This post analyzed the storage module as a whole rather than drilling into every implementation detail; with this overall picture in mind, studying the finer details of the code should take half the effort for twice the result.