What’s New, What’s Changed and How to Get Started.

Are you ready for Apache Spark 2.0?

If you are just getting started with Apache Spark, the 2.0 release is the one to start with, as the APIs have just gone through a major overhaul to improve ease of use.

If you are using an older version and want to learn what has changed, then this article will give you the lowdown on why you should upgrade and what the impact on your code will be.

What’s new with Apache Spark 2.0?

Let’s start with the good news, and there’s plenty.

  • There are really only two programmatic APIs now: RDD and Dataset. For backwards compatibility, DataFrame still exists, but it is just a synonym for a Dataset.
  • Spark SQL has been improved to support a wider range of queries, including correlated subqueries (see the sketch after this list). This was largely driven by an effort to run TPC-DS benchmarks in Spark.
  • Performance is once again significantly improved thanks to advanced “whole-stage code generation” when compiling query plans.
  • CSV support is now built-in and based on the Databricks spark-csv project, making it a breeze to create Datasets from CSV data with little coding.
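
For example, a correlated subquery like the following can now be expressed directly in Spark SQL. This is a minimal sketch; the people and orders temp views and their columns are assumed purely for illustration.

// find people with at least one order, using a correlated EXISTS subquery
// (assumes "people" and "orders" have been registered as temp views)
val withOrders = spark.sql("""
  SELECT p.name
  FROM people p
  WHERE EXISTS (SELECT 1 FROM orders o WHERE o.person_id = p.id)
""")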

Spark 2.0 is a major release, and there are some breaking changes that mean you may need to rewrite some of your code. Here are some things we ran into when updating our apache-spark-examples.

  • For Scala users, SparkSession replaces SparkContext and SQLContext as the top-level context, but still provides access to a SparkContext and a SQLContext for backwards compatibility.
  • DataFrame is now a synonym for Dataset[Row] and you can use the two types interchangeably, although we recommend using the latter.
  • Performing a map() operation on a Dataset now returns a Dataset rather than an RDD, reducing the need to keep switching between the two APIs and improving performance (see the sketch after this list).
  • Some Java functional interfaces, such as FlatMapFunction, have been updated to return Iterator<T> rather than Iterable<T>.
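
To illustrate the map() change, here is a minimal sketch, assuming a SparkSession named spark with spark.implicits._ imported: in 1.6 this map() would have dropped you into the RDD API, whereas in 2.0 the result is still a Dataset.

// map() now stays within the Dataset API
val df: DataFrame = spark.read.text("people.txt")
val lineLengths: Dataset[Int] = df.map(row => row.getString(0).length) // was an RDD in 1.6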

Get help upgrading to Apache Spark 2.0 or making the transition from Java to Scala. Contact Us!

RDD vs. Dataset 2.0

Both the RDD API and the Dataset API represent data sets of a specific class. For instance, you can create an RDD[Person] as well as a Dataset[Person], so both can provide compile-time type-safety. Both can also be used with the generic Row structure provided in Spark for cases where no class exists to represent the data being manipulated, such as when reading CSV files.

RDDs can be used with any Java or Scala class and operate by manipulating those objects directly with all of the associated costs of object creation, serialization and garbage collection.

Datasets are limited to classes that implement the Scala Product trait, such as case classes. There is a very good reason for this limitation. Datasets store data in an optimized binary format, often in off-heap memory, to avoid the costs of deserialization and garbage collection. Even though it feels like you are coding against regular objects, Spark is really generating its own optimized byte-code for accessing the data directly.
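
The examples below all use a simple Person type. The exact fields don't matter; a case class along the following lines (an assumption made just for these examples) is enough for Spark to derive an encoder:

// hypothetical Person type used throughout the examples
case class Person(id: Int, firstName: String, lastName: String)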

RDD

 
// raw object manipulation
val rdd: RDD[Person] = …
val rdd2: RDD[String] = rdd.map(person => person.lastName)

Dataset

 
// optimized direct access to off-heap memory without deserializing objects
val ds: Dataset[Person] = …
val ds2: Dataset[String] = ds.map(person => person.lastName)

Getting Started with Scala

Here are some code samples to help you get started fast with Apache Spark 2.0 and Scala.

Creating SparkSession

SparkSession is now the starting point for a Spark driver program, instead of creating a SparkContext and a SQLContext.

 
val spark = SparkSession.builder
      .master("local[*]")
      .appName("Example")
      .getOrCreate()
 
// accessing legacy SparkContext and SQLContext
spark.sparkContext
spark.sqlContext

Creating a Dataset from a collection

SparkSession provides a createDataset method that accepts a collection.

 
import spark.implicits._ // provides the Encoder[String] that createDataset needs
val ds: Dataset[String] = spark.createDataset(List("one", "two", "three"))

Converting an RDD to a Dataset

SparkSession provides a createDataset method for converting an RDD to a Dataset. This only works if you import spark.implicits._ (where spark is the name of the SparkSession variable).

 
// always import implicits so that Spark can infer types when creating Datasets
import spark.implicits._
 
val rdd: RDD[Person] = ??? // assume this exists
val dataset: Dataset[Person] = spark.createDataset[Person](rdd)
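
With the implicits in scope, the same conversion can also be written with the toDS() method that Spark adds to RDDs:

// equivalent shorthand provided by spark.implicits._
val dataset2: Dataset[Person] = rdd.toDS()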

Converting a DataFrame to a Dataset

A DataFrame (which is really a Dataset[Row]) can be converted to a Dataset of a specific class by performing a map() operation.

 
// read a text file into a DataFrame a.k.a. Dataset[Row]
val df: Dataset[Row] = spark.read.text("people.txt")

// use map() to convert to a Dataset of a specific class
val ds: Dataset[Person] = df.map(row => parsePerson(row))

def parsePerson(row: Row): Person = ??? // fill in parsing logic here
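
As one possible implementation (purely for illustration), if each line of people.txt were a comma-separated id, first name, and last name, parsePerson could look like this:

// hypothetical parser, assuming lines of the form "id,firstName,lastName"
def parsePerson(row: Row): Person = {
  val fields = row.getString(0).split(",") // text files are read as a single string column
  Person(fields(0).trim.toInt, fields(1).trim, fields(2).trim)
}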

Reading a CSV directly as a Dataset

The built-in CSV support makes it easy to read a CSV and return a Dataset of a specific case class. This only works if the CSV contains a header row and the field names match the case class.

 
val ds: Dataset[Person] = spark.read
    .option("header","true")
    .csv("people.csv")
    .as[Person]
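
Note that the CSV reader treats every column as a string by default, so if the case class has non-string fields (such as an Int id) the cast in as[Person] can fail at runtime. Asking Spark to infer the column types usually resolves this; a sketch, again assuming spark.implicits._ is in scope:

val typedDs: Dataset[Person] = spark.read
    .option("header", "true")
    .option("inferSchema", "true") // infer Int, Double, etc. instead of treating everything as String
    .csv("people.csv")
    .as[Person]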

Getting Started with Java

Here are some code samples to help you get started fast with Spark 2.0 and Java.

Creating SparkSession

 
SparkSession spark = SparkSession.builder()
  .master("local[*]")
  .appName("Example")
  .getOrCreate();
 
// Java still requires the use of the JavaSparkContext
JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());

Creating a Dataset from a collection

SparkSession provides a createDataset method that accepts a collection.

 
Dataset<Person> ds = spark.createDataset(
    Collections.singletonList(new Person(1, "Joe", "Bloggs")),
    Encoders.bean(Person.class)
);

Converting an RDD to a Dataset

SparkSession provides a createDataset method for converting an RDD to a Dataset.

 
Dataset<Person> ds = spark.createDataset(
  javaRDD.rdd(), // convert a JavaRDD to an RDD
  Encoders.bean(Person.class)
);

Converting a DataFrame to a Dataset

A DataFrame (which is really a Dataset<Row>) can be converted to a Dataset of a specific class by performing a map() operation.

 
Dataset<Person> ds = df.map(new MapFunction<Row, Person>() {
  @Override
  public Person call(Row value) throws Exception {
    return new Person(Integer.parseInt(value.getString(0)),
                      value.getString(1),
                      value.getString(2));
  }
}, Encoders.bean(Person.class));

Reading a CSV directly as a Dataset

The built-in CSV support makes it easy to read a CSV and return a Dataset of a specific class. This only works if the CSV contains a header row and the column names match the bean's property names.

 
Dataset<Person> ds = spark.read()
  .option("header", "true")
  .csv("testdata/people.csv")
  .as(Encoders.bean(Person.class));

Spark+Scala beats Spark+Java

Using Apache Spark with Java is harder than using it with Scala. We spent significantly longer upgrading our Java examples than our Scala examples, and ran into some confusing runtime errors that were hard to track down (for example, we hit a runtime error in Spark's code generation because one of our Java classes was not declared public).

Also, we weren't always able to use concise lambda functions even though we were using Java 8, and had to fall back to anonymous inner classes with verbose (and confusing) syntax.

Conclusion

Spark 2.0 represents a significant milestone in the evolution of this open source project and provides cleaner APIs and improved performance compared to the 1.6 release.

The Scala API is a joy to code with, but the Java API can often be frustrating. It’s worth biting the bullet and switching to Scala.

Full source code for a number of examples is available from our GitHub repo.

Get help upgrading to Spark 2.0 or making the transition from Java to Scala. Contact Us!
