Before Spark 2.0, using Spark required creating a SparkConf and a SparkContext first.

catalog: directory; SparkSession's catalog exposes metadata about databases, tables, and functions.

Spark 2.0 introduced SparkSession. SparkConf, SparkContext, and SQLContext are now encapsulated inside SparkSession, which is created through its builder; a SparkSession can then be used to create and operate on Datasets and DataFrames.

SparkSession: the entry point to programming Spark with the Dataset and DataFrame API.
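
For reference, a minimal sketch of building a SparkSession in a standalone application rather than in spark-shell (the appName and master values below are placeholders, not taken from this article):

import org.apache.spark.sql.SparkSession

object SparkSessionDemo {
  def main(args: Array[String]): Unit = {
    // builder() returns SparkSession.Builder; getOrCreate() reuses an existing session if one exists
    val spark = SparkSession.builder()
      .appName("SparkSessionDemo")     // placeholder application name
      .master("local[*]")              // run locally; use a cluster master URL in production
      .getOrCreate()

    // the old entry points are still reachable from the session
    val sc = spark.sparkContext
    println(spark.version)

    spark.stop()
  }
}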

scala> import org.apache.spark.sql.SparkSession
SparkSession SparkSessionExtensions

scala> val spsession=SparkSession.builder().getOrCreate()
spsession: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@577d07b

scala> spsession.
baseRelationToDataFrame conf emptyDataFrame implicits range sessionState sql streams udf
catalog createDataFrame emptyDataset listenerManager read sharedState sqlContext table version
close createDataset experimental newSession readStream sparkContext stop time

scala> spsession.read.
csv format jdbc json load option options orc parquet schema table text textFile
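
The reader listed above accepts options before loading; a small sketch of reading a CSV file with a header row (the path /tmp/person.csv and the option values are illustrative only, not from this article):

// Illustrative only: read a CSV with a header row and let Spark infer column types
val csvDF = spsession.read
  .option("header", "true")        // first line holds the column names
  .option("inferSchema", "true")   // guess column types instead of reading everything as string
  .csv("/tmp/person.csv")
csvDF.printSchema()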

--------------------------------------------------------------------------------------------------------------------------------------

scala> import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SparkSession

scala> val lines=spsession.read.textFile("/tmp/person.txt")
lines: org.apache.spark.sql.Dataset[String] = [value: string]

// import the session's implicit conversions (needed for toDF, $-column syntax, etc.)

scala> import spsession.implicits._
import spsession.implicits._

scala> lines.show
+-----------------+
| value|
+-----------------+
|2,zhangsan,50,866|
| 4,laoliu,522,30|
|5,zhangsan,20,565|
| 6,limi,522,65|
| 1,xiliu,50,6998|
| 7,llihmj,23,565|
+-----------------+

scala> val rowrdd=lines.map(x=>{val arr=x.split("[,]");(arr(0).toLong,arr(1),arr(2).toInt,arr(3).toInt)})
rowrdd: org.apache.spark.sql.Dataset[(Long, String, Int, Int)] = [_1: bigint, _2: string ... 2 more fields]

scala> val personDF=rowrdd.toDF("id","name","age","fv")
personDF: org.apache.spark.sql.DataFrame = [id: bigint, name: string ... 2 more fields]

scala> personDF.printSchema
root
|-- id: long (nullable = false)
|-- name: string (nullable = true)
|-- age: integer (nullable = false)
|-- fv: integer (nullable = false)

scala> personDF.show
+---+--------+---+----+
| id| name|age| fv|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+
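
An alternative to calling toDF with column names is mapping each line into a case class, which yields a typed Dataset with named columns directly. A sketch, assuming the same four comma-separated fields and relying on the implicits imported above; the Person case class is introduced here only for illustration:

// Hypothetical case class; its fields mirror the columns used above
case class Person(id: Long, name: String, age: Int, fv: Int)

val personDS = lines.map { x =>
  val arr = x.split(",")
  Person(arr(0).toLong, arr(1), arr(2).toInt, arr(3).toInt)
}
personDS.printSchema()   // columns come out as id, name, age, fv without calling toDF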

-------------------------------------------------------------------------

scala> import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SparkSession

scala> val spsession=SparkSession.builder().getOrCreate()
spsession: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@4c89c98a

scala> val lines=spsession.read.textFile("/tmp/person.txt")
lines: org.apache.spark.sql.Dataset[String] = [value: string]

scala> val rowDF=lines.map(x=>{val arr=x.split("[,]");(arr(0).toLong,arr(1),arr(2).toInt,arr(3).toInt)})
rowDF: org.apache.spark.sql.Dataset[(Long, String, Int, Int)] = [_1: bigint, _2: string ... 2 more fields]

scala> rowDF.printSchema
root
|-- _1: long (nullable = false)
|-- _2: string (nullable = true)
|-- _3: integer (nullable = false)
|-- _4: integer (nullable = false)

scala> rowDF.show
+---+--------+---+----+
| _1| _2| _3| _4|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+

scala> rowDF.createTempView("Aaa")

scala> spsession.sql("select * from Aaa").show
+---+--------+---+----+
| _1| _2| _3| _4|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+
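
The view above exposes the tuple column names _1 to _4. A sketch of registering the renamed DataFrame instead, so the SQL can use readable column names (the view name personView is illustrative):

// Register a view with readable column names; createOrReplaceTempView avoids
// the "already exists" error that createTempView throws on re-registration
rowDF.toDF("id", "name", "age", "fv").createOrReplaceTempView("personView")
spsession.sql("select name, age, fv from personView where fv > 500").show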

scala> import spsession.implicits._
import spsession.implicits._

scala> lines.show
+-----------------+
| value|
+-----------------+
|2,zhangsan,50,866|
| 4,laoliu,522,30|
|5,zhangsan,20,565|
| 6,limi,522,65|
| 1,xiliu,50,6998|
| 7,llihmj,23,565|
+-----------------+

scala> val wordDF=lines.flatMap(_.split(","))
wordDF: org.apache.spark.sql.Dataset[String] = [value: string]

scala> wordDF.groupBy($"value" as "word").count
res24: org.apache.spark.sql.DataFrame = [word: string, count: bigint]

scala> wordDF.groupBy($"value" as "word").agg(count("*") as "count")
res30: org.apache.spark.sql.DataFrame = [word: string, count: bigint]
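
Neither aggregation above has been materialized yet; a minimal sketch of finishing the word count and displaying it, sorted by frequency:

// count() on the grouped data produces a "count" column; sort it descending and show
wordDF.groupBy($"value" as "word")
  .count()
  .orderBy($"count".desc)
  .show()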

scala> rowDF.groupBy($"_3" as "age").agg(count("*") as "count",avg($"_4") as "avg").show
+---+-----+------+
|age|count| avg|
+---+-----+------+
| 20| 1| 565.0|
| 23| 1| 565.0|
| 50| 2|3932.0|
|522| 2| 47.5|
+---+-----+------+

scala> rowDF.groupBy($"_3" as "age").agg(count("*"),avg($"_4")).show
+---+--------+-------+
|age|count(1)|avg(_4)|
+---+--------+-------+
| 20| 1| 565.0|
| 23| 1| 565.0|
| 50| 2| 3932.0|
|522| 2| 47.5|
+---+--------+-------+
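
The same aggregation can also be expressed in SQL over the Aaa view registered earlier (where the columns are still named _3 and _4); a sketch:

// _3 is the age column, _4 is the fv column in the Aaa view
spsession.sql("select _3 as age, count(*) as count, avg(_4) as avg from Aaa group by _3").show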

A DataFrame is a Dataset organized into named columns.

scala> val jsonDF=spsession.read.json("/tmp/pdf1json/part*")
jsonDF: org.apache.spark.sql.DataFrame = [age: bigint, fv: bigint ... 1 more field]

scala> spsession.read.json("/tmp/pdf1json/part*").show
+---+----+--------+
|age| fv| name|
+---+----+--------+
| 50|6998| xiliu|
| 50| 866|zhangsan|
| 20| 565|zhangsan|
| 23| 565| llihmj|
+---+----+--------+

scala> spsession.read.format("json").load("/tmp/pdf1json/part*").show
+---+----+--------+
|age| fv| name|
+---+----+--------+
| 50|6998| xiliu|
| 50| 866|zhangsan|
| 20| 565|zhangsan|
| 23| 565| llihmj|
+---+----+--------+

scala> val jsonDF=spsession.read.json("/tmp/pdf1json/part*")
jsonDF: org.apache.spark.sql.DataFrame = [age: bigint, fv: bigint ... 1 more field]

scala> jsonDF.cube("age").mean("fv").show
+----+-------+
| age|avg(fv)|
+----+-------+
| 20| 565.0|
|null| 2248.5|
| 50| 3932.0|
| 23| 565.0|
+----+-------+

scala> jsonDF.cube("age").agg(max("fv"),count("name"),sum("fv")).show
+----+-------+-----------+-------+
| age|max(fv)|count(name)|sum(fv)|
+----+-------+-----------+-------+
| 20| 565| 1| 565|
|null| 6998| 4| 8994|
| 50| 6998| 2| 7864|
| 23| 565| 1| 565|
+----+-------+-----------+-------+
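
The row with a null age in the cube output is the grand total across all ages. With more than one grouping column, cube produces every combination of subtotals; a hedged sketch cubing over both age and name (output omitted):

// per (age, name) rows, per-age subtotals, per-name subtotals, and one grand-total row
jsonDF.cube("age", "name").agg(count("*") as "cnt", sum("fv") as "total_fv").show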

---------------------------------------------------------------

scala> val lines=spsession.read.textFile("/tmp/person.txt")
lines: org.apache.spark.sql.Dataset[String] = [value: string]

scala> lines.show
+-----------------+
| value|
+-----------------+
|2,zhangsan,50,866|
| 4,laoliu,522,30|
|5,zhangsan,20,565|
| 6,limi,522,65|
| 1,xiliu,50,6998|
| 7,llihmj,23,565|
+-----------------+

scala> val lineds=lines.map(x=>{val arr=x.split(",");(arr(0),arr(1),arr(2),arr(3))})
lineds: org.apache.spark.sql.Dataset[(String, String, String, String)] = [_1: string, _2: string ... 2 more fields]

scala> lineds.show
+---+--------+---+----+
| _1| _2| _3| _4|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+

scala> val personDF= lineds.withColumnRenamed("_1","id").withColumnRenamed("_2","name")
personDF: org.apache.spark.sql.DataFrame = [id: string, name: string ... 2 more fields]

scala> personDF.show
+---+--------+---+----+
| id| name| _3| _4|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+

scala> personDF.sort($"id" desc).show
warning: there was one feature warning; re-run with -feature for details
+---+--------+---+----+
| id| name| _3| _4|
+---+--------+---+----+
| 7| llihmj| 23| 565|
| 6| limi|522| 65|
| 5|zhangsan| 20| 565|
| 4| laoliu|522| 30|
| 2|zhangsan| 50| 866|
| 1| xiliu| 50|6998|
+---+--------+---+----+
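
Because the lines were split without converting the fields, every column in lineds is a string, so the sort on id above is lexicographic rather than numeric. A sketch of renaming the remaining columns and casting the numeric ones (assuming the same four fields):

import org.apache.spark.sql.types.{IntegerType, LongType}

// rename the remaining tuple columns, then cast the string columns to numeric types
val typedDF = personDF
  .withColumnRenamed("_3", "age")
  .withColumnRenamed("_4", "fv")
  .withColumn("id",  $"id".cast(LongType))
  .withColumn("age", $"age".cast(IntegerType))
  .withColumn("fv",  $"fv".cast(IntegerType))

typedDF.printSchema()
typedDF.sort($"id".desc).show   // now sorts numerically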

scala> val lines=spsession.read.textFile("/tmp/person.txt")
lines: org.apache.spark.sql.Dataset[String] = [value: string]

scala> lines.map(x=>{val arr= x.split(",");(arr(0),arr(1),arr(2),arr(3))}).toDF("id","name","age","fv").show
+---+--------+---+----+
| id| name|age| fv|
+---+--------+---+----+
| 2|zhangsan| 50| 866|
| 4| laoliu|522| 30|
| 5|zhangsan| 20| 565|
| 6| limi|522| 65|
| 1| xiliu| 50|6998|
| 7| llihmj| 23| 565|
+---+--------+---+----+
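
Finally, a DataFrame built this way can be written back out through the DataFrameWriter; a hedged sketch saving it as JSON (the output path /tmp/personjson is illustrative and distinct from the /tmp/pdf1json data read earlier):

// bind the DataFrame instead of calling show, then write it out as JSON
val personDF2 = lines.map { x =>
  val arr = x.split(",")
  (arr(0), arr(1), arr(2), arr(3))
}.toDF("id", "name", "age", "fv")

personDF2.write.mode("overwrite").json("/tmp/personjson")   // "overwrite" replaces existing output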
