There Are Now 3 Apache Spark APIs. Here’s How to Choose the Right One
See Apache Spark 2.0 API Improvements: RDD, DataFrame, DataSet and SQL here.
Apache Spark is evolving at a rapid pace, including changes and additions to its core APIs. One of the most disruptive areas of change is the representation of data sets. Spark 1.0 used the RDD API, but in the past twelve months two new, alternative and incompatible APIs have been introduced. Spark 1.3 introduced the radically different DataFrame API, and the recently released Spark 1.6 introduces a preview of the new Dataset API.
Many existing Spark developers will be wondering whether to jump from RDDs directly to the Dataset API, or whether to first move to the DataFrame API. Newcomers to Spark will have to choose which API to start learning with.
This article provides an overview of each of these APIs and outlines the strengths and weaknesses of each one. A companion GitHub repository provides working examples that are a good starting point for experimenting with the approaches outlined in this article.
RDD API
The RDD (Resilient Distributed Dataset) API has been in Spark since the 1.0 release. This interface and its Java equivalent, JavaRDD, will be familiar to any developers who have worked through the standard Spark tutorials. From a developer’s perspective, an RDD is simply a set of Java or Scala objects representing data.
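For readers who want a concrete picture, the snippet below is a minimal sketch of building such an RDD from a hypothetical Person case class; the class, sample values and master setting are our own assumptions for illustration, not part of Spark or the companion repository.
Scala:
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical domain class used for illustration throughout this article
case class Person(first: String, last: String, age: Int)

val conf = new SparkConf().setAppName("rdd-example").setMaster("local[*]")
val sc = new SparkContext(conf)

// parallelize() turns a local collection into a distributed set of Person objects
val rdd = sc.parallelize(Seq(Person("Jane", "Doe", 25), Person("John", "Smith", 17)))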
The RDD API provides many transformation methods, such as map(), filter(), and reduce(), for performing computations on the data. Each of these methods results in a new RDD representing the transformed data. However, these methods just define the operations to be performed; the transformations are not executed until an action method is called. Examples of action methods are collect() and saveAsObjectFile().
Example of RDD transformations and actions
Scala:
rdd.filter(_.age < 21)                // transformation
   .map(_.last)                       // transformation
   .saveAsObjectFile("under21.bin")   // action
Java:
rdd.filter(p -> p.getAge() < 21)      // transformation
   .map(p -> p.getLast())             // transformation
   .saveAsObjectFile("under21.bin");  // action
The main advantage of RDDs is that they are simple and well understood because they deal with concrete classes, providing a familiar object-oriented programming style with compile-time type-safety. For example, given an RDD containing instances of Person we can filter by age by referencing the age attribute of each Person object:
Example: Filter by attribute with RDD
Scala:
rdd.filter(_.age > 21)
Java:
rdd.filter(person -> person.getAge() > 21)
The main disadvantage of RDDs is that they don’t perform particularly well. Whenever Spark needs to distribute data within the cluster or write it to disk, it does so using Java serialization by default (although it is possible to use Kryo as a faster alternative in most cases). Serializing individual Java and Scala objects is expensive and requires sending both data and structure between nodes (each serialized object contains the class structure as well as the values). There is also the garbage-collection overhead that results from creating and destroying individual objects.
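Switching to Kryo is a configuration change rather than a code rewrite; a minimal sketch (registering the hypothetical Person class from above) looks something like this:
Scala:
import org.apache.spark.SparkConf

// Kryo replaces the default Java serializer; registering classes up front
// avoids shipping full class names with every serialized object
val conf = new SparkConf()
  .setAppName("kryo-example")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Person]))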
DataFrame API
Spark 1.3 introduced a new DataFrame API as part of the Project Tungsten initiative, which seeks to improve the performance and scalability of Spark. The DataFrame API introduces the concept of a schema to describe the data, allowing Spark to manage the schema and pass only the data between nodes, which is much more efficient than using Java serialization. There are also advantages when performing computations in a single process, as Spark can serialize the data into off-heap storage in a binary format and then perform many transformations directly on this off-heap memory, avoiding the garbage-collection costs associated with constructing individual objects for each row in the data set. Because Spark understands the schema, there is no need to use Java serialization to encode the data.
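To see the schema that Spark manages, a DataFrame can be built from an RDD of case classes and inspected; this sketch reuses the hypothetical Person RDD from the earlier example:
Scala:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._   // enables rdd.toDF() for RDDs of case classes

val df = rdd.toDF()
df.printSchema()   // prints the inferred schema: first, last and age with their types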
The DataFrame API is radically different from the RDD API because it is an API for building a relational query plan that Spark’s Catalyst optimizer can then execute. The API is natural for developers who are familiar with building query plans, but not natural for the majority of developers. The query plan can be built from SQL expressions in strings or from a more functional approach using a fluent-style API.
Example: Filter by attribute with DataFrame
Note that these examples have the same syntax in both Java and Scala.
SQL style:
df.filter("age > 21");
Expression builder style:
df.filter(df.col("age").gt(21));
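A full SQL query over a registered temporary table produces the same kind of plan, and explain() shows what Catalyst will execute. This is a sketch; the table name is our own choice:
Scala:
// Register the DataFrame so it can be queried by name from a SQL string
df.registerTempTable("people")
val adults = sqlContext.sql("SELECT first, last FROM people WHERE age > 21")
adults.explain()   // prints the plan produced by the Catalyst optimizer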
Because the code refers to data attributes by name, it is not possible for the compiler to catch errors. If an attribute name is incorrect, the error will only be detected at runtime, when the query plan is created.
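For example, a misspelled attribute name compiles without complaint and only fails once Spark analyses the plan (a sketch; the exact error message will vary):
Scala:
// Compiles, but fails at runtime with org.apache.spark.sql.AnalysisException
// because Catalyst cannot resolve the column "agee"
df.filter("agee > 21")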
Another downside of the DataFrame API is that it is very Scala-centric and, while it does support Java, the support is limited. For example, when creating a DataFrame from an existing RDD of Java objects, Spark’s Catalyst optimizer cannot infer the schema and assumes that any objects in the DataFrame implement the scala.Product interface. Scala case classes work out of the box because they implement this interface.
Dataset API
The Dataset API, released as a preview in Spark 1.6, aims to provide the best of both worlds: the familiar object-oriented programming style and compile-time type-safety of the RDD API, with the performance benefits of the Catalyst query optimizer. Datasets also use the same efficient off-heap storage mechanism as the DataFrame API.
When it comes to serializing data, the Dataset API has the concept of encoders which translate between JVM representations (objects) and Spark’s internal binary format. Spark has built-in encoders which are very advanced in that they generate byte code to interact with off-heap data and provide on-demand access to individual attributes without having to de-serialize an entire object. Spark does not yet provide an API for implementing custom encoders, but that is planned for a future release.
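In Scala the built-in encoders arrive through the sqlContext.implicits._ import; the sketch below simply makes one of them explicit for the ScalaPerson class used in the creation example that follows (it assumes a sqlContext is already in scope):
Scala:
import org.apache.spark.sql.Encoder
import sqlContext.implicits._   // brings encoders for case classes and primitives into scope

// The implicit encoder for a case class maps its fields to and from
// Spark's internal binary format without de-serializing whole objects
val personEncoder: Encoder[ScalaPerson] = implicitly[Encoder[ScalaPerson]]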
Additionally, the Dataset API is designed to work equally well with both Java and Scala. When working with Java objects, it is important that they are fully bean-compliant. In writing the examples to accompany this article, we ran into errors when trying to create a Dataset in Java from a list of Java objects that were not fully bean-compliant.
Example: Creating a Dataset from a list of objects
Scala:
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val sampleData: Seq[ScalaPerson] = ScalaData.sampleData()
val dataset = sqlContext.createDataset(sampleData)
Java:
JavaSparkContext sc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(sc);
List<JavaPerson> data = JavaData.sampleData();
Dataset<JavaPerson> dataset = sqlContext.createDataset(data, Encoders.bean(JavaPerson.class));
Transformations with the Dataset API look very much like the RDD API and deal with the Person class rather than an abstraction of a row.
Example: Filter by attribute with Dataset
Scala:
dataset.filter(_.age < 21)
Java:
dataset.filter(person -> person.getAge() < 21);
Despite the similarity with RDD code, this code is building a query plan rather than dealing with individual objects, and if age is the only attribute accessed, then the rest of the object’s data will not be read from off-heap storage.
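A Dataset also converts cheaply to and from a DataFrame, since both are views over the same query-planning machinery. This sketch assumes the implicits import and the ScalaPerson class from the earlier example:
Scala:
val df = dataset.toDF()              // drop to the untyped DataFrame view
val typedAgain = df.as[ScalaPerson]  // back to a typed Dataset via the implicit encoder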
Conclusions
If you are developing primarily in Java then it is worth considering a move to Scala before adopting the DataFrame or Dataset APIs. Although there is an effort to support Java, Spark is written in Scala and the code often makes assumptions that make it hard (but not impossible) to deal with Java objects.
If you are developing in Scala and need your code to go into production with Spark 1.6.0 then the DataFrame API is clearly the most stable option available and currently offers the best performance.
However, the Dataset API preview looks very promising and provides a more natural way to code. Given the rapid evolution of Spark, it is likely that this API will mature very quickly through 2016 and become the de facto API for developing new applications.