An RDD itself can be persisted to local disk storage. The implementation of this disk-level persistence is described below.
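
For reference, this is how a job would request disk-only persistence (a minimal sketch; the app name, master, and data are illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object DiskPersistExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("disk-persist-demo").setMaster("local[*]"))
    // DISK_ONLY: each cached partition becomes a block that is written under spark.local.dir
    // by DiskStore / DiskBlockManager instead of being kept in memory.
    val rdd = sc.parallelize(1 to 1000000).map(_ * 2).persist(StorageLevel.DISK_ONLY)
    println(rdd.count()) // the first action materializes the RDD and writes its blocks to disk
    println(rdd.count()) // the second action reads the cached blocks back from disk
    sc.stop()
  }
}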

DiskBlockManager manages and maintains the mapping between blocks and their on-disk storage. Each block is stored in a file named after its blockId; when multiple local directories are configured, files are distributed among them by the hash of the blockId.

It also covers creating directories, deleting and reading files, and the shutdown-hook mechanism that removes files on exit.
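
The placement rule can be illustrated in isolation with the small sketch below (this is not Spark's code: the hash function is a simplified stand-in for Utils.nonNegativeHash, and the directory counts are made-up parameters):

object BlockFileLayoutSketch {
  // Mirrors the placement logic in DiskBlockManager.getFile: a non-negative hash of the
  // file name first selects one of the configured local dirs, then one of the
  // spark.diskStore.subDirectories (default 64) subdirectories inside it.
  def locate(filename: String, numLocalDirs: Int, subDirsPerLocalDir: Int = 64): (Int, Int) = {
    val hash = filename.hashCode & Int.MaxValue               // simplified non-negative hash
    val dirId = hash % numLocalDirs                           // which spark.local.dir entry
    val subDirId = (hash / numLocalDirs) % subDirsPerLocalDir // which "%02x" subdirectory
    (dirId, subDirId)
  }

  def main(args: Array[String]): Unit = {
    // With two local dirs configured, block "rdd_0_1" lands in localDirs(dirId)/"%02x".format(subDirId)
    val (dirId, subDirId) = locate("rdd_0_1", numLocalDirs = 2)
    println(s"dirId=$dirId, subDir=${"%02x".format(subDirId)}")
  }
}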

/**
 * Creates and maintains the logical mapping between logical blocks and physical on-disk
 * locations. One block is mapped to one file with a name given by its BlockId.
 * Block files are hashed among the directories listed in spark.local.dir (or in
 * SPARK_LOCAL_DIRS, if it's set).
 *
 * In other words: each block corresponds to exactly one file named after its blockId, and
 * block files are distributed by a hash of the file name across the spark.local.dir directories.
 */
private[spark] class DiskBlockManager(conf: SparkConf, deleteFilesOnStop: Boolean) extends Logging {

  private[spark] val subDirsPerLocalDir = conf.getInt("spark.diskStore.subDirectories", 64)

  /* Create one local directory for each path mentioned in spark.local.dir; then, inside this
   * directory, create multiple subdirectories that we will hash files into, in order to avoid
   * having really large inodes at the top level. */
  private[spark] val localDirs: Array[File] = createLocalDirs(conf)
  if (localDirs.isEmpty) {
    logError("Failed to create any local dir.")
    System.exit(ExecutorExitCode.DISK_STORE_FAILED_TO_CREATE_DIR)
  }

  // The content of subDirs is immutable but the content of subDirs(i) is mutable. And the content
  // of subDirs(i) is protected by the lock of subDirs(i)
  private val subDirs = Array.fill(localDirs.length)(new Array[File](subDirsPerLocalDir))

  private val shutdownHook = addShutdownHook()

  /** Looks up a file by hashing it into one of our local subdirectories. */
  // This method should be kept in sync with
  // org.apache.spark.network.shuffle.ExternalShuffleBlockResolver#getFile().
  // Look up the file in the local directories by hashing its name.
  def getFile(filename: String): File = {
    // Figure out which local directory it hashes to, and which subdirectory in that
    val hash = Utils.nonNegativeHash(filename)
    val dirId = hash % localDirs.length
    val subDirId = (hash / localDirs.length) % subDirsPerLocalDir

    // Create the subdirectory if it doesn't already exist
    val subDir = subDirs(dirId).synchronized {
      val old = subDirs(dirId)(subDirId)
      if (old != null) {
        old
      } else {
        val newDir = new File(localDirs(dirId), "%02x".format(subDirId))
        if (!newDir.exists() && !newDir.mkdir()) {
          throw new IOException(s"Failed to create local dir in $newDir.")
        }
        subDirs(dirId)(subDirId) = newDir
        newDir
      }
    }

    new File(subDir, filename)
  }

  def getFile(blockId: BlockId): File = getFile(blockId.name)

  /** Check if disk block manager has a block. */
  def containsBlock(blockId: BlockId): Boolean = {
    getFile(blockId.name).exists()
  }

  /** List all the files currently stored on disk by the disk manager. */
  def getAllFiles(): Seq[File] = {
    // Get all the files inside the array of array of directories
    subDirs.flatMap { dir =>
      dir.synchronized {
        // Copy the content of dir because it may be modified in other threads
        dir.clone()
      }
    }.filter(_ != null).flatMap { dir =>
      val files = dir.listFiles()
      if (files != null) files else Seq.empty
    }
  }

  /** List all the blocks currently stored on disk by the disk manager. */
  def getAllBlocks(): Seq[BlockId] = {
    getAllFiles().map(f => BlockId(f.getName))
  }

  /** Produces a unique block id and File suitable for storing local intermediate results. */
  def createTempLocalBlock(): (TempLocalBlockId, File) = {
    var blockId = new TempLocalBlockId(UUID.randomUUID())
    while (getFile(blockId).exists()) {
      blockId = new TempLocalBlockId(UUID.randomUUID())
    }
    (blockId, getFile(blockId))
  }

  /** Produces a unique block id and File suitable for storing shuffled intermediate results. */
  def createTempShuffleBlock(): (TempShuffleBlockId, File) = {
    var blockId = new TempShuffleBlockId(UUID.randomUUID())
    while (getFile(blockId).exists()) {
      blockId = new TempShuffleBlockId(UUID.randomUUID())
    }
    (blockId, getFile(blockId))
  }

  /**
   * Create local directories for storing block data. These directories are
   * located inside configured local directories and won't
   * be deleted on JVM exit when using the external shuffle service.
   *
   * Creates a "blockmgr" subdirectory under each configured rootDir to hold block data.
   */
  private def createLocalDirs(conf: SparkConf): Array[File] = {
    Utils.getConfiguredLocalDirs(conf).flatMap { rootDir =>
      try {
        val localDir = Utils.createDirectory(rootDir, "blockmgr")
        logInfo(s"Created local directory at $localDir")
        Some(localDir)
      } catch {
        case e: IOException =>
          logError(s"Failed to create local dir in $rootDir. Ignoring this directory.", e)
          None
      }
    }
  }

  private def addShutdownHook(): AnyRef = {
    logDebug("Adding shutdown hook") // force eager creation of logger
    ShutdownHookManager.addShutdownHook(ShutdownHookManager.TEMP_DIR_SHUTDOWN_PRIORITY + 1) { () =>
      logInfo("Shutdown hook called")
      DiskBlockManager.this.doStop()
    }
  }

  /** Cleanup local dirs and stop shuffle sender. */
  private[spark] def stop() {
    // Remove the shutdown hook. It causes memory leaks if we leave it around.
    try {
      ShutdownHookManager.removeShutdownHook(shutdownHook)
    } catch {
      case e: Exception =>
        logError(s"Exception while removing shutdown hook.", e)
    }
    doStop()
  }

  // Delete the local block directories.
  private def doStop(): Unit = {
    if (deleteFilesOnStop) {
      localDirs.foreach { localDir =>
        if (localDir.isDirectory() && localDir.exists()) {
          try {
            if (!ShutdownHookManager.hasRootAsShutdownDeleteDir(localDir)) {
              Utils.deleteRecursively(localDir)
            }
          } catch {
            case e: Exception =>
              logError(s"Exception while deleting local spark dir: $localDir", e)
          }
        }
      }
    }
  }
}
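
With the default spark.diskStore.subDirectories = 64, the resulting on-disk layout under one configured local directory therefore looks roughly like this (all names below are illustrative; the "%02x" subdirectories are created lazily on first use, and shuffle data/index files also end up here via the shuffle block resolver, which calls getFile):

<spark.local.dir>/blockmgr-<uuid>/
  0c/rdd_2_5
  3e/shuffle_0_1_0.data
  3e/shuffle_0_1_0.index
  ... (up to 64 two-hex-digit subdirectories per local dir)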

The concrete caller is DiskStore: its put method writes the specified block to local disk.

private[spark] class DiskStore(conf: SparkConf, diskManager: DiskBlockManager) extends Logging {

  private val minMemoryMapBytes = conf.getSizeAsBytes("spark.storage.memoryMapThreshold", "2m")

  def getSize(blockId: BlockId): Long = {
    diskManager.getFile(blockId.name).length
  }

  /**
   * Invokes the provided callback function to write the specific block.
   * That is, the supplied callback does the actual writing of the block to disk.
   *
   * @throws IllegalStateException if the block already exists in the disk store.
   */
  def put(blockId: BlockId)(writeFunc: FileOutputStream => Unit): Unit = {
    if (contains(blockId)) {
      throw new IllegalStateException(s"Block $blockId is already present in the disk store")
    }
    logDebug(s"Attempting to put block $blockId")
    val startTime = System.currentTimeMillis
    // Resolve the block file (named after the blockId); this may create the parent subdirectory.
    val file = diskManager.getFile(blockId)
    val fileOutputStream = new FileOutputStream(file)
    var threwException: Boolean = true
    try {
      writeFunc(fileOutputStream)
      threwException = false
    } finally {
      try {
        Closeables.close(fileOutputStream, threwException)
      } finally {
        if (threwException) {
          remove(blockId)
        }
      }
    }
    val finishTime = System.currentTimeMillis
    logDebug("Block %s stored as %s file on disk in %d ms".format(
      file.getName,
      Utils.bytesToString(file.length()),
      finishTime - startTime))
  }

  def putBytes(blockId: BlockId, bytes: ChunkedByteBuffer): Unit = {
    put(blockId) { fileOutputStream =>
      val channel = fileOutputStream.getChannel
      Utils.tryWithSafeFinally {
        bytes.writeFully(channel)
      } {
        channel.close()
      }
    }
  }

  // Read the specified block's data from disk into memory.
  def getBytes(blockId: BlockId): ChunkedByteBuffer = {
    val file = diskManager.getFile(blockId.name)
    val channel = new RandomAccessFile(file, "r").getChannel
    Utils.tryWithSafeFinally {
      // For small files, directly read rather than memory map
      if (file.length < minMemoryMapBytes) {
        val buf = ByteBuffer.allocate(file.length.toInt)
        channel.position(0)
        while (buf.remaining() != 0) {
          if (channel.read(buf) == -1) {
            throw new IOException("Reached EOF before filling buffer\n" +
              s"offset=0\nfile=${file.getAbsolutePath}\nbuf.remaining=${buf.remaining}")
          }
        }
        buf.flip()
        new ChunkedByteBuffer(buf)
      } else {
        new ChunkedByteBuffer(channel.map(MapMode.READ_ONLY, 0, file.length))
      }
    } {
      channel.close()
    }
  }

  // Delete the block's data file.
  def remove(blockId: BlockId): Boolean = {
    val file = diskManager.getFile(blockId.name)
    if (file.exists()) {
      val ret = file.delete()
      if (!ret) {
        logWarning(s"Error deleting ${file.getPath()}")
      }
      ret
    } else {
      false
    }
  }

  def contains(blockId: BlockId): Boolean = {
    val file = diskManager.getFile(blockId.name)
    file.exists()
  }
}
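
The shape of put — the caller supplies a write callback while DiskStore owns opening, closing, and cleaning up the file — can be tried out in isolation with the standalone sketch below (this is not Spark code; the file name and payload are made up):

import java.io.{File, FileOutputStream}

object PutCallbackSketch {
  // Same contract as DiskStore.put: open the block file, hand the stream to the caller's
  // write function, always close the stream, and delete the partially written file if the
  // callback throws.
  def put(file: File)(writeFunc: FileOutputStream => Unit): Unit = {
    val out = new FileOutputStream(file)
    var threwException = true
    try {
      writeFunc(out)
      threwException = false
    } finally {
      try {
        out.close()
      } finally {
        if (threwException) file.delete()
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val file = File.createTempFile("rdd_0_0", ".tmp")
    put(file)(_.write("serialized partition bytes".getBytes("UTF-8")))
    println(s"wrote ${file.length()} bytes to $file")
  }
}

Roughly speaking, in BlockManager the callback is the serializer streaming a partition's iterator straight into the file, so a disk-only block does not have to be fully materialized in memory before it is written.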
