Converting between RDD, DataFrame, and Dataset
Conversion: RDD, DataFrame, and Dataset have a lot in common, yet each fits different scenarios, so you frequently need to convert between the three.

DataFrame/Dataset to RDD: this conversion is trivial.

val rdd1 = testDF.rdd
val rdd2 = testDS.rdd

RDD to DataFrame:

import spark.implicits._
val testDF = rdd.map { line =>
  (line._1, line._2)
}.toDF("col1", "col2")

The usual pattern is to pack one row's values into a tuple and then name the columns in toDF.

RDD to Dataset:

import spark.implicits._
case class Coltest(col1: String, col2: Int) extends Serializable // column names and types
val testDS = rdd.map { line =>
  Coltest(line._1, line._2)
}.toDS

Notice that the case class defining each row already carries the column names and types; all that is left is to fill it with values.

Dataset to DataFrame: also trivial, since it merely wraps the case class into Row.

import spark.implicits._
val testDF = testDS.toDF

DataFrame to Dataset:

import spark.implicits._
case class Coltest(col1: String, col2: Int) extends Serializable // column names and types
val testDS = testDF.as[Coltest]

Here you supply the type of each column and call as to get a Dataset, which is extremely convenient when the data arrives as a DataFrame but you need to work with the individual fields.

Note: for these operations you must have import spark.implicits._ in scope, otherwise toDF and toDS are not available.
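To tie these snippets together, here is a minimal, self-contained sketch. The SparkSession setup, the sample data, and the object name ConversionSketch are illustrative additions rather than part of the original snippets; the conversion calls themselves are the ones shown above.

import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

object ConversionSketch {
  // same row type as in the snippets above
  case class Coltest(col1: String, col2: Int) extends Serializable

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ConversionSketch")
      .master("local[2]")
      .getOrCreate()
    import spark.implicits._ // required for toDF / toDS / as[...]

    // a small sample RDD of (String, Int) tuples
    val rdd = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2)))

    // RDD -> DataFrame: name the columns in toDF
    val testDF: DataFrame = rdd.toDF("col1", "col2")

    // RDD -> Dataset: map each element to the case class
    val testDS: Dataset[Coltest] = rdd.map(line => Coltest(line._1, line._2)).toDS()

    // DataFrame / Dataset -> RDD
    val rdd1 = testDF.rdd // RDD[Row]
    val rdd2 = testDS.rdd // RDD[Coltest]

    // Dataset -> DataFrame and DataFrame -> Dataset
    val backToDF: DataFrame = testDS.toDF()
    val backToDS: Dataset[Coltest] = testDF.as[Coltest]

    backToDS.show()
    spark.stop()
  }
}

The longer example below, DatasetConversion, works through the DataFrame/Dataset side of this in more detail on a small customer dataset.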
package dataframe
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}
//
// Explore interoperability between DataFrame and Dataset. Note that Dataset
// is covered in much greater detail in the 'dataset' directory.
//
object DatasetConversion {
  case class Cust(id: Integer, name: String, sales: Double, discount: Double, state: String)
  case class StateSales(state: String, sales: Double)

  def main(args: Array[String]) {
    val spark =
      SparkSession.builder()
        .appName("DataFrame-DatasetConversion")
        .master("local[4]")
        .getOrCreate()

    import spark.implicits._

    // create a sequence of case class objects
    // (we defined the case class above)
    val custs = Seq(
      Cust(1, "Widget Co", 120000.00, 0.00, "AZ"),
      Cust(2, "Acme Widgets", 410500.00, 500.00, "CA"),
      Cust(3, "Widgetry", 410500.00, 200.00, "CA"),
      Cust(4, "Widgets R Us", 410500.00, 0.0, "CA"),
      Cust(5, "Ye Olde Widgete", 500.00, 0.0, "MA")
    )

    // Create the DataFrame without passing through an RDD
    val customerDF : DataFrame = spark.createDataFrame(custs)
    //
    // println("*** DataFrame schema")
    //
    // customerDF.printSchema()
    //
    // println("*** DataFrame contents")
    //
    // customerDF.show()
    // +---+---------------+--------+--------+-----+
    // | id|           name|   sales|discount|state|
    // +---+---------------+--------+--------+-----+
    // |  1|      Widget Co|120000.0|     0.0|   AZ|
    // |  2|   Acme Widgets|410500.0|   500.0|   CA|
    // |  3|       Widgetry|410500.0|   200.0|   CA|
    // |  4|   Widgets R Us|410500.0|     0.0|   CA|
    // |  5|Ye Olde Widgete|   500.0|     0.0|   MA|
    // +---+---------------+--------+--------+-----+
    //
    // println("*** Select and filter the DataFrame")
    //
    val smallerDF =
      customerDF.select("sales", "state").filter($"state".equalTo("CA"))

    //
    // smallerDF.show()
    //
    // +--------+-----+
    // |   sales|state|
    // +--------+-----+
    // |410500.0|   CA|
    // |410500.0|   CA|
    // |410500.0|   CA|
    // +--------+-----+

    ///////////////////////////////////////////////////////////////////////////////////

    // Convert it to a Dataset by specifying the type of the rows -- use a case
    // class because we have one and it's most convenient to work with. Notice
    // you have to choose a case class that matches the remaining columns.
    // BUT also notice that the columns keep their order from the DataFrame --
    // later you'll see a Dataset[StateSales] of the same type where the
    // columns have the opposite order, because of the way it was created.
    val customerDS : Dataset[StateSales] = smallerDF.as[StateSales]
    //
    // println("*** Dataset schema")
    //
    // customerDS.printSchema()
    //
    // println("*** Dataset contents")
    //
    // customerDS.show()

    // Select and other operations can be performed directly on a Dataset too,
    // but be careful to read the documentation for Dataset -- there are
    // "typed transformations", which produce a Dataset, and
    // "untyped transformations", which produce a DataFrame. In particular,
    // you need to project using a TypedColumn to get a Dataset.
    // val verySmallDS : Dataset[Double] = customerDS.select($"sales".as[Double])
    //
    // println("*** Dataset after projecting one column")
    //
    // verySmallDS.show()
    //
    // +--------+
    // |   sales|
    // +--------+
    // |410500.0|
    // |410500.0|
    // |410500.0|
    // +--------+

    // If you select multiple columns on a Dataset you end up with a Dataset
    // of tuple type, but the columns keep their names.
    val tupleDS : Dataset[(String, Double)] =
      customerDS.select($"state".as[String], $"sales".as[Double])
    //
    // println("*** Dataset after projecting two columns -- tuple version")
    //
    // tupleDS.show()
    //
    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+

    // You can also cast back to a Dataset of a case class. Notice this time
    // the columns have the opposite order from the last Dataset[StateSales]
    // val betterDS: Dataset[StateSales] = tupleDS.as[StateSales]
    //
    // println("*** Dataset after projecting two columns -- case class version")
    //
    // betterDS.show()
    //
    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+

    // Converting back to a DataFrame without making other changes is really easy
    // val backToDataFrame : DataFrame = tupleDS.toDF()
    //
    // println("*** This time as a DataFrame")
    //
    // backToDataFrame.show()
    //
    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+
    //

    // While converting back to a DataFrame you can rename the columns
    val renamedDataFrame : DataFrame = tupleDS.toDF("MyState", "MySales")

    println("*** Again as a DataFrame but with renamed columns")

    renamedDataFrame.show()
    // +-------+--------+
    // |MyState| MySales|
    // +-------+--------+
    // |     CA|410500.0|
    // |     CA|410500.0|
    // |     CA|410500.0|
    // +-------+--------+
  }
}
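For reference, one way to run DatasetConversion locally is with sbt; a rough build definition is sketched below. The Scala and Spark versions here are assumptions, so adjust them to whatever your environment uses.

// build.sbt (sketch -- versions are assumptions)
name := "dataset-conversion-demo"
scalaVersion := "2.12.18"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.5.1"

With that in place, sbt "runMain dataframe.DatasetConversion" compiles the file and runs the main method, printing the renamed DataFrame at the end.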