Spark vs. Hadoop: Resilient Distributed Datasets
Hadoop makes iterative computation expensive: each iteration launches a complete MapReduce job.
Spark's primary goal is to avoid excessive network and disk I/O overhead during computation.
Resilient Distributed Datasets
http://www.cs.cmu.edu/~pavlo/courses/fall2013/static/slides/spark.pdf
Presented by Henggang Cui
15799b Talk
Why not MapReduce
• Provides fault tolerance, but:
• Hard to reuse intermediate results across multiple computations
– requires stable storage to share data across jobs
• Hard to support interactive ad-hoc queries
Why not Other In-Memory Storage
• Example: Piccolo
– applies fine-grained updates to shared state
• Efficient, but:
• Hard to provide fault tolerance
– needs replication or checkpointing
Resilient Distributed Datasets (RDDs)
• Restricted form of distributed shared memory
– read-only, partitioned collection of records
– can only be built through coarse-grained deterministic transformations
• from data in stable storage
• from transformations on other RDDs
• Express computation by defining RDDs
Fault Recovery
• Efficient fault recovery using lineage
– log one operation to apply to many elements (the lineage)
– recompute lost partitions on failure
Example
lines = spark.textFile("hdfs://...")
errors = lines.filter(_.startsWith("ERROR"))
hdfs_errors = errors.filter(_.contains("HDFS"))
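These transformations only record lineage; nothing runs until an action is invoked, and if a partition of errors is lost it can be rebuilt by re-applying the filter to the corresponding partition of lines. A minimal continuation of the example (persist and count are Spark's actual API; the variable names follow the slide):

errors.persist()            // keep the filtered RDD in memory for reuse
val n = hdfs_errors.count() // action: triggers computation of the whole chain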
Advantages of the RDD Model
• Efficient fault recovery
– fine-grained and low-overhead using lineage
• Immutable nature can mitigate stragglers
– run backup copies of slow tasks, as in MapReduce
• Graceful degradation when RAM is not enough
Spark
• Implementation of the RDD abstraction
– Scala interface
• Two components
– Driver
– Workers
Spark Runtime
• Driver
– defines RDDs and invokes actions on them
– tracks the RDDs’ lineage
• Workers
– store RDD partitions
– perform RDD transformations
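A minimal driver program sketch (a hypothetical setup; assumes the classic SparkContext API and a local master purely for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object MinimalDriver {
  def main(args: Array[String]): Unit = {
    // The driver creates a SparkContext, which connects to the cluster
    // manager and schedules tasks on the workers.
    val conf = new SparkConf().setAppName("rdd-demo").setMaster("local[2]")
    val sc = new SparkContext(conf)

    // RDDs are defined on the driver; their partitions live on the workers.
    val nums = sc.parallelize(1 to 1000000, 8)
    val total = nums.map(_ * 2L).reduce(_ + _) // action: runs distributed tasks
    println(total)

    sc.stop()
  }
}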
Supported RDD Operations
• Transformations
– map (f: T->U)
– filter (f: T->Bool)
– join()
– ... (and lots of others)
• Actions
– count()
– save()
– ... (and lots of others)
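Transformations are lazy: they only extend the lineage graph, while actions force evaluation. A small sketch (assumes a SparkContext named sc, as in the driver example above):

// Transformations: nothing executes yet.
val words    = sc.textFile("hdfs://...")          // RDD[String]
val lengths  = words.map(w => w.length)           // map(f: T -> U)
val longOnes = words.filter(w => w.length > 10)   // filter(f: T -> Bool)

// Actions: trigger actual computation.
val n = longOnes.count()                          // count()
longOnes.saveAsTextFile("hdfs://...")             // "save" here is saveAsTextFile in Spark's API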
Representing RDDs
• A graph-based representation for RDDs
• Pieces of information for each RDD
– a set of partitions
– a set of dependencies on parent RDDs
– a function for computing it from its parents
– metadata about its partitioning scheme and data placement
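These pieces can be written down as a small common interface. A Scala sketch (paraphrasing the interface described in the RDD paper; the type names here are illustrative placeholders, not Spark's exact internals):

trait Partition
trait Dependency
trait Partitioner

trait RDD[T] {
  def partitions: Seq[Partition]                         // the set of partitions
  def dependencies: Seq[Dependency]                      // dependencies on parent RDDs
  def compute(split: Partition): Iterator[T]             // compute a partition from its parents
  def partitioner: Option[Partitioner]                   // metadata: partitioning scheme
  def preferredLocations(split: Partition): Seq[String]  // metadata: data placement hints
}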
RDD Dependencies
• Narrow dependencies
– each partition of the parent RDD is used by at most one partition of the child RDD
• Wide dependencies
– multiple child partitions may depend on each parent partition
RDD Dependencies
(figure: examples of narrow and wide dependencies)
RDD Dependencies
• Narrow dependencies
– allow for pipelined execution on one cluster node
– easy fault recovery
• Wide dependencies
– require data from all parent partitions to be available and to be shuffled across the nodes
– a single failed node might cause a complete re-execution
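Which kind of dependency an operation creates can be seen from a small sketch (assumes a SparkContext sc; the input path is illustrative):

val pairs = sc.textFile("hdfs://...").map(line => (line.split(" ")(0), 1))

// Narrow: each child partition reads exactly one parent partition,
// so map-like operations can be pipelined without any shuffle.
val bumped = pairs.mapValues(_ + 1)

// Wide: every child partition may need records from all parent partitions,
// so groupByKey shuffles data across the nodes.
val grouped = pairs.groupByKey()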
Job Scheduling
• To execute an action on an RDD
– the scheduler decides the stages from the RDD’s lineage graph
– each stage contains as many pipelined transformations with narrow dependencies as possible
Job Scheduling
(figure: how the scheduler splits a lineage graph into stages)
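As a concrete sketch, the chain below yields two stages: the narrow flatMap and map pipeline together, while reduceByKey introduces a shuffle and hence a stage boundary (assumes a SparkContext sc):

val counts = sc.textFile("hdfs://...")
  .flatMap(_.split(" "))    // narrow: pipelined in the first stage
  .map(word => (word, 1))   // narrow: still the first stage
  .reduceByKey(_ + _)       // wide: shuffle, begins a second stage
counts.collect()            // action: hands the lineage graph to the scheduler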
Memory Management
• Three options for persistent RDDs
– in-memory storage as deserialized Java objects
– in-memory storage as serialized data
– on-disk storage
• LRU eviction policy at the level of RDDs
– when there’s not enough memory, evict a partition from the least recently accessed RDD
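These three options correspond to Spark's storage levels. A sketch using the public persist API (an RDD takes one level, hence the commented alternatives):

import org.apache.spark.storage.StorageLevel

val data = sc.textFile("hdfs://...")
data.persist(StorageLevel.MEMORY_ONLY)        // deserialized Java objects in memory
// data.persist(StorageLevel.MEMORY_ONLY_SER) // serialized data in memory: more compact, slower to access
// data.persist(StorageLevel.DISK_ONLY)       // on-disk storage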
Checkpointing
• Checkpoint RDDs to prevent long lineage chains during fault recovery
• Simpler to checkpoint than shared memory
– thanks to the read-only nature of RDDs
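In Spark's public API, checkpointing looks like the sketch below (assumes a SparkContext sc; the update rule is a stand-in for a real iterative job such as PageRank, not an actual algorithm):

sc.setCheckpointDir("hdfs://...")                // stable storage for checkpoints

// Hypothetical iterative job: ranks is rebuilt from itself every iteration,
// so its lineage grows without bound unless we checkpoint.
var ranks = sc.parallelize(Seq(("a", 1.0), ("b", 1.0)))
for (i <- 1 to 30) {
  ranks = ranks.mapValues(r => 0.15 + 0.85 * r)  // stand-in per-iteration transformation
  if (i % 10 == 0) ranks.checkpoint()            // truncate the lineage chain periodically
}
ranks.count()                                    // action: forces evaluation (and the checkpoints)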
Discussions
Checkpointing or Versioning?
• Frequent checkpointing, or keep all versions of ranks?