A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. Each Dataset also has an untyped view called a DataFrame, which is a Dataset of Row, i.e. Dataset[Row].
Datasets are "lazy": computation is triggered only when an action is invoked. Internally, a Dataset represents a logical plan that describes the computation required to produce the data. When an action is invoked, Spark's query optimizer optimizes the logical plan and generates an efficient physical plan for parallel, distributed execution.
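To make the laziness concrete, here is a minimal sketch (it assumes a SparkSession named spark is already in scope, as created in step 2 below): the map and filter calls only build up a plan, and nothing runs until the collect action.

import spark.implicits._
val ds = Seq(1, 2, 3, 4, 5).toDS()
// Transformations only extend the logical plan; no job runs yet.
val doubled = ds.map(_ * 2).filter(_ > 4)
// The action triggers plan optimization and actual execution.
doubled.collect()   // Array(6, 8, 10)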

Field descriptions for the sample data

// affairs: frequency of extramarital affairs over the past year
// gender: gender
// age: age
// yearsmarried: number of years married
// children: whether the couple has children
// religiousness: degree of religiousness (5-point scale, 1 = anti-religion, 5 = very religious)
// education: education level
// occupation: occupation (7-category Gordon classification, reverse-numbered)
// rating: self-rating of the marriage (5-point scale, 1 = very unhappy, 5 = very happy)
 

1. Import commonly used packages

import scala.math._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.Row
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.Column
import org.apache.spark.sql.DataFrameReader
import org.apache.spark.sql.functions._
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.DataFrameStatFunctions

2. Create a SparkSession and load the sample data

// In spark-shell a SparkSession named `spark` already exists, so getOrCreate() simply returns it.
val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs to DataFrames
import spark.implicits._

val dataList: List[(Double, String, Double, Double, String, Double, Double, Double, Double)] = List(
(0, "male", 37, 10, "no", 3, 18, 7, 4),
(0, "female", 27, 4, "no", 4, 14, 6, 4),
(0, "female", 32, 15, "yes", 1, 12, 1, 4),
(0, "male", 57, 15, "yes", 5, 18, 6, 5),
(0, "male", 22, 0.75, "no", 2, 17, 6, 3),
(0, "female", 32, 1.5, "no", 2, 17, 5, 5),
(0, "female", 22, 0.75, "no", 2, 12, 1, 3),
(0, "male", 57, 15, "yes", 2, 14, 4, 4),
(0, "female", 32, 15, "yes", 4, 16, 1, 2),
(0, "male", 22, 1.5, "no", 4, 14, 4, 5)) val data = dataList.toDF("affairs", "gender", "age", "yearsmarried", "children", "religiousness", "education", "occupation", "rating") data.printSchema()
root
|-- affairs: double (nullable = false)
|-- gender: string (nullable = true)
|-- age: double (nullable = false)
|-- yearsmarried: double (nullable = false)
|-- children: string (nullable = true)
|-- religiousness: double (nullable = false)
|-- education: double (nullable = false)
|-- occupation: double (nullable = false)
|-- rating: double (nullable = false)
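Because a DataFrame is just Dataset[Row], the same data can also be viewed as a fully typed Dataset by defining a case class that matches the schema and calling as[T]. A minimal sketch; the case class name Affair is chosen here purely for illustration:

// Hypothetical case class matching the schema above.
case class Affair(affairs: Double, gender: String, age: Double, yearsmarried: Double, children: String, religiousness: Double, education: Double, occupation: Double, rating: Double)

// Convert the untyped DataFrame into a typed Dataset[Affair].
val typedData: Dataset[Affair] = data.as[Affair]

// Typed operations are checked at compile time, e.g. a lambda-based filter:
typedData.filter(a => a.age > 30 && a.children == "yes").show(3)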

3. Operate on specified columns and rows

// Show the first n records in spark-shell
data.show(7)
+-------+------+----+------------+--------+-------------+---------+----------+------+
|affairs|gender| age|yearsmarried|children|religiousness|education|occupation|rating|
+-------+------+----+------------+--------+-------------+---------+----------+------+
| 0.0| male|37.0| 10.0| no| 3.0| 18.0| 7.0| 4.0|
| 0.0|female|27.0| 4.0| no| 4.0| 14.0| 6.0| 4.0|
| 0.0|female|32.0| 15.0| yes| 1.0| 12.0| 1.0| 4.0|
| 0.0| male|57.0| 15.0| yes| 5.0| 18.0| 6.0| 5.0|
| 0.0| male|22.0| 0.75| no| 2.0| 17.0| 6.0| 3.0|
| 0.0|female|32.0| 1.5| no| 2.0| 17.0| 5.0| 5.0|
| 0.0|female|22.0| 0.75| no| 2.0| 12.0| 1.0| 3.0|
+-------+------+----+------------+--------+-------------+---------+----------+------+
only showing top 7 rows

// Take the first n records
val data3 = data.limit(5)
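Unlike show(n), which only prints, limit(n) is a transformation that returns a new Dataset, so it stays lazy and composes with further operations. A small sketch:

// limit returns a Dataset and can be chained with other transformations.
val firstFiveMales = data.filter("gender == 'male'").limit(5)
firstFiveMales.show()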
data.filter("age>50 and gender=='male' ").show
+-------+------+----+------------+--------+-------------+---------+----------+------+
|affairs|gender| age|yearsmarried|children|religiousness|education|occupation|rating|
+-------+------+----+------------+--------+-------------+---------+----------+------+
| 0.0| male|57.0| 15.0| yes| 5.0| 18.0| 6.0| 5.0|
| 0.0| male|57.0| 15.0| yes| 2.0| 14.0| 4.0| 4.0|
+-------+------+----+------------+--------+-------------+---------+----------+------+
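The same filter can be written with Column expressions instead of a SQL string; column references are then resolved at analysis time rather than parsed from text. A sketch (the $ syntax comes from spark.implicits._):

// Equivalent filter using Column expressions.
data.filter($"age" > 50 && $"gender" === "male").show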
// All columns of the DataFrame
val columnArray = data.columns
columnArray: Array[String] = Array(affairs, gender, age, yearsmarried, children, religiousness, education, occupation, rating)

// Select certain columns
data.select("gender", "age", "yearsmarried", "children").show(3)
+------+----+------------+--------+
|gender| age|yearsmarried|children|
+------+----+------------+--------+
| male|37.0| 10.0| no|
|female|27.0| 4.0| no|
|female|32.0| 15.0| yes|
+------+----+------------+--------+
only showing top 3 rows

val colArray = Array("gender", "age", "yearsmarried", "children")
colArray: Array[String] = Array(gender, age, yearsmarried, children)

// `colArray: _*` expands the array into the varargs expected by selectExpr
data.selectExpr(colArray: _*).show(3)
+------+----+------------+--------+
|gender| age|yearsmarried|children|
+------+----+------------+--------+
| male|37.0| 10.0| no|
|female|27.0| 4.0| no|
|female|32.0| 15.0| yes|
+------+----+------------+--------+
only showing top 3 rows

// Operate on specified columns and sort the result
// data.selectExpr("gender", "age+1","cast(age as bigint)").orderBy($"gender".desc, $"age".asc).show
data.selectExpr("gender", "age+1 as age1","cast(age as bigint) as age2").sort($"gender".desc, $"age".asc).show
+------+----+----+
|gender|age1|age2|
+------+----+----+
| male|23.0| 22|
| male|23.0| 22|
| male|38.0| 37|
| male|58.0| 57|
| male|58.0| 57|
|female|23.0| 22|
|female|28.0| 27|
|female|33.0| 32|
|female|33.0| 32|
|female|33.0| 32|
+------+----+----+
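The same ordering can also be written with the asc/desc helpers from org.apache.spark.sql.functions (imported in step 1), a stylistic alternative to the $-column syntax; this sketch sorts on the derived age1 column, which gives the same order here since age1 is just age + 1:

// Equivalent sort using the desc/asc helper functions.
data.selectExpr("gender", "age+1 as age1", "cast(age as bigint) as age2")
  .sort(desc("gender"), asc("age1"))
  .show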

4. View the Spark SQL logical and physical execution plans

val data4=data.selectExpr("gender", "age+1 as age1","cast(age as bigint) as age2").sort($"gender".desc, $"age".asc)
data4: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [gender: string, age1: double ... 1 more field]

// View the physical plan
data4.explain()
== Physical Plan ==
*Project [gender#20, age1#135, age2#136L]
+- *Sort [gender#20 DESC, age#21 ASC], true, 0
   +- Exchange rangepartitioning(gender#20 DESC, age#21 ASC, 200)
      +- LocalTableScan [gender#20, age1#135, age2#136L, age#21]

// View both the logical and physical plans
data4.explain(extended=true)
== Parsed Logical Plan ==
'Sort ['gender DESC, 'age ASC], true
+- Project [gender#20, (age#21 + cast(1 as double)) AS age1#135, cast(age#21 as bigint) AS age2#136L]
   +- Project [_1#9 AS affairs#19, _2#10 AS gender#20, _3#11 AS age#21, _4#12 AS yearsmarried#22, _5#13 AS children#23, _6#14 AS religiousness#24, _7#15 AS education#25, _8#16 AS occupation#26, _9#17 AS rating#27]
      +- LocalRelation [_1#9, _2#10, _3#11, _4#12, _5#13, _6#14, _7#15, _8#16, _9#17]

== Analyzed Logical Plan ==
gender: string, age1: double, age2: bigint
Project [gender#20, age1#135, age2#136L]
+- Sort [gender#20 DESC, age#21 ASC], true
   +- Project [gender#20, (age#21 + cast(1 as double)) AS age1#135, cast(age#21 as bigint) AS age2#136L, age#21]
      +- Project [_1#9 AS affairs#19, _2#10 AS gender#20, _3#11 AS age#21, _4#12 AS yearsmarried#22, _5#13 AS children#23, _6#14 AS religiousness#24, _7#15 AS education#25, _8#16 AS occupation#26, _9#17 AS rating#27]
         +- LocalRelation [_1#9, _2#10, _3#11, _4#12, _5#13, _6#14, _7#15, _8#16, _9#17]

== Optimized Logical Plan ==
Project [gender#20, age1#135, age2#136L]
+- Sort [gender#20 DESC, age#21 ASC], true
   +- LocalRelation [gender#20, age1#135, age2#136L, age#21]

== Physical Plan ==
*Project [gender#20, age1#135, age2#136L]
+- *Sort [gender#20 DESC, age#21 ASC], true, 0
   +- Exchange rangepartitioning(gender#20 DESC, age#21 ASC, 200)
      +- LocalTableScan [gender#20, age1#135, age2#136L, age#21]
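The 200 in rangepartitioning(..., 200) is the default value of spark.sql.shuffle.partitions. For a tiny local dataset like this one it is common to lower it; the value 8 below is just an arbitrary example:

// Reduce the number of shuffle partitions (default is 200), then re-plan.
spark.conf.set("spark.sql.shuffle.partitions", "8")
data.selectExpr("gender", "age+1 as age1", "cast(age as bigint) as age2")
  .sort($"gender".desc, $"age".asc)
  .explain()
// The Exchange now shows rangepartitioning(..., 8).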
