1. Creating DataFrames

val df = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")
df.show()

Write it out to an HDFS path:
df.select("age", "name").write.save("examples/src/main/resources/peopleOUT.json")

Read it back:
val peopleDF = spark.read.format("parquet").load("/user/root/examples/src/main/resources/peopleOUT.json")
The output format is Parquet, the default columnar storage format Spark uses when saving this way to HDFS; it is a general-purpose storage format.
Note: if you try to read the output back as JSON you will only get garbled data, because the files are Parquet despite the .json extension in the path.
You can call show() on the result directly, or run SQL straight against the file path:
val sqlDF = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/peopleOUT.json`")
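
If you actually want JSON files on disk rather than the default Parquet, the output format can be set explicitly. A minimal sketch (the output path below is just an example, not from the original walkthrough):

// Hypothetical output directory; format("json") overrides the default Parquet format
df.select("age", "name").write.format("json").save("examples/src/main/resources/peopleOUT-json")
// Read it back with the matching format
spark.read.format("json").load("examples/src/main/resources/peopleOUT-json").show()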

2. If the data has not been written to the HDFS file system, try this statement

val df = spark.read.json("examples/src/main/resources/people.json")

It reports the following error:
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://localhost:9000/user/root/examples/src/main/resources/people.json;
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:382)
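
The relative path is resolved against the default file system, which here is HDFS (hdfs://localhost:9000/user/root/...), so Spark looks for a file that was never uploaded. To read the copy that ships with Spark on the local disk, use an explicit file:// URI as in section 1:

val df = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")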

3. Untyped Dataset queries

import spark.implicits._

// Print the schema
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Show only the "name" column
df.select("name").show()
// +-------+
// | name|
// +-------+
// |Michael|
// | Andy|
// | Justin|
// +-------+

// Select everybody, but increment the age by 1
df.select($"name", $"age" + 1).show()
// +-------+---------+
// | name|(age + 1)|
// +-------+---------+
// |Michael| null|
// | Andy| 31|
// | Justin| 20|
// +-------+---------+

// Select people older than 21
df.filter($"age" > 21).show()
// +---+----+
// |age|name|
// +---+----+
// | 30|Andy|
// +---+----+

// Count people by age
df.groupBy("age").count().show()
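With the sample people.json data (Michael/null, Andy/30, Justin/19), the grouped counts come out as:
// +----+-----+
// | age|count|
// +----+-----+
// |  19|    1|
// |null|    1|
// |  30|    1|
// +----+-----+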

4. Running SQL queries

df.createOrReplaceTempView("people")

val sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()

Global temporary views:
df.createGlobalTempView("people")

// Global temporary view is tied to a system preserved database `global_temp`
spark.sql("SELECT * FROM global_temp.people").show()
// +----+-------+
// | age| name|
// +----+-------+
// |null|Michael|
// | 30| Andy|
// | 19| Justin|
// +----+-------+

// Global temporary view is cross-session
spark.newSession().sql("SELECT * FROM global_temp.people").show()
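
By contrast, the ordinary temporary view created with createOrReplaceTempView is session-scoped; the same query from a new session fails, roughly like this (shown commented out so it does not abort the shell):

// spark.newSession().sql("SELECT * FROM people").show()
// => org.apache.spark.sql.AnalysisException: Table or view not found: people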

5. Defining a custom type

case class Person(name: String, age: Long)
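
With spark.implicits._ in scope, this case class can be used directly to build a strongly-typed Dataset; a minimal sketch:

val caseClassDS = Seq(Person("Andy", 32)).toDS()
caseClassDS.show()
// +----+---+
// |name|age|
// +----+---+
// |Andy| 32|
// +----+---+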

6. Inferring the schema using reflection

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.Encoder

// For implicit conversions from RDDs to DataFrames
import spark.implicits._

// Create an RDD of Person objects from a text file, convert it to a DataFrame
val peopleDF = spark.sparkContext
.textFile("examples/src/main/resources/people.txt")
.map(_.split(","))
.map(attributes => Person(attributes(0), attributes(1).trim.toInt))
.toDF()
// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people")

// SQL statements can be run by using the sql methods provided by Spark
val teenagersDF = spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19")

// The columns of a row in the result can be accessed by field index
teenagersDF.map(teenager => "Name: " + teenager(0)).show()
// +------------+
// | value|
// +------------+
// |Name: Justin|
// +------------+

// or by field name
teenagersDF.map(teenager => "Name: " + teenager.getAs[String]("name")).show()
// +------------+
// | value|
// +------------+
// |Name: Justin|
// +------------+

// No pre-defined encoders for Dataset[Map[K,V]], define explicitly
implicit val mapEncoder = org.apache.spark.sql.Encoders.kryo[Map[String, Any]]
// Primitive types and case classes can be also defined as
// implicit val stringIntMapEncoder: Encoder[Map[String, Any]] = ExpressionEncoder()

// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagersDF.map(teenager => teenager.getValuesMap[Any](List("name", "age"))).collect()
// Array(Map("name" -> "Justin", "age" -> 19))

7. Programmatically specifying the schema

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Create an RDD
val peopleRDD = spark.sparkContext.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Generate the schema based on the string of schema
val fields = schemaString.split(" ")
.map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

// Convert records of the RDD (people) to Rows
val rowRDD = peopleRDD
.map(_.split(","))
.map(attributes => Row(attributes(0), attributes(1).trim))

// Apply the schema to the RDD
val peopleDF = spark.createDataFrame(rowRDD, schema)

// Creates a temporary view using the DataFrame
peopleDF.createOrReplaceTempView("people")

// SQL can be run over a temporary view created using DataFrames
val results = spark.sql("SELECT name FROM people")

// The results of SQL queries are DataFrames and support all the normal RDD operations
// The columns of a row in the result can be accessed by field index or by field name
results.map(attributes => "Name: " + attributes(0)).show()
// +-------------+
// | value|
// +-------------+
// |Name: Michael|
// | Name: Andy|
// | Name: Justin|
// +-------------+

8. User-defined aggregate functions

http://spark.apache.org/docs/latest/sql-programming-guide.html#untyped-user-defined-aggregate-functions
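
The linked guide's untyped example implements an average as a UserDefinedAggregateFunction; a condensed sketch along those lines (method signatures follow the Spark 2.x API), registered against the "people" view created above:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

object MyAverage extends UserDefinedAggregateFunction {
  // Data type of the input argument
  def inputSchema: StructType = StructType(StructField("inputColumn", LongType) :: Nil)
  // Data types of the values in the aggregation buffer (running sum and count)
  def bufferSchema: StructType =
    StructType(StructField("sum", LongType) :: StructField("count", LongType) :: Nil)
  // Data type of the returned value
  def dataType: DataType = DoubleType
  // This function always returns the same output for the same input
  def deterministic: Boolean = true
  // Initialize the buffer
  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0L
    buffer(1) = 0L
  }
  // Fold one input row into the buffer, skipping nulls
  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) {
      buffer(0) = buffer.getLong(0) + input.getLong(0)
      buffer(1) = buffer.getLong(1) + 1
    }
  }
  // Merge two partial buffers
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)
    buffer1(1) = buffer1.getLong(1) + buffer2.getLong(1)
  }
  // Compute the final result
  def evaluate(buffer: Row): Double = buffer.getLong(0).toDouble / buffer.getLong(1)
}

// Register it for use in SQL
spark.udf.register("myAverage", MyAverage)
spark.sql("SELECT myAverage(age) AS average_age FROM people").show()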

9. Reading and writing Parquet files

import spark.implicits._

val peopleDF = spark.read.json("examples/src/main/resources/people.json")

// DataFrames can be saved as Parquet files, maintaining the schema information
peopleDF.write.parquet("people.parquet")

// Read in the parquet file created above
// Parquet files are self-describing so the schema is preserved
// The result of loading a Parquet file is also a DataFrame
val parquetFileDF = spark.read.parquet("people.parquet")

// Parquet files can also be used to create a temporary view and then used in SQL statements
parquetFileDF.createOrReplaceTempView("parquetFile")
val namesDF = spark.sql("SELECT name FROM parquetFile WHERE age BETWEEN 13 AND 19")
namesDF.map(attributes => "Name: " + attributes(0)).show()
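
If you rerun the write, it fails because the output path people.parquet already exists; setting a save mode avoids that. A minimal sketch (overwrite chosen here just as an example):

peopleDF.write.mode("overwrite").parquet("people.parquet")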

10. Hand-written JSON data source

val otherPeopleRDD = spark.sparkContext.makeRDD(
"""{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val otherPeople = spark.read.json(otherPeopleRDD)
otherPeople.show()
// +---------------+----+
// | address|name|
// +---------------+----+
// |[Columbus,Ohio]| Yin|
// +---------------+----+
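
If you are on Spark 2.2 or later, the RDD[String] overload of read.json is deprecated in favor of a Dataset[String]; a sketch of the equivalent call:

import spark.implicits._  // provides the String encoder for createDataset

val otherPeopleDataset = spark.createDataset(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
spark.read.json(otherPeopleDataset).show()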

11. JSON data source

val path = "examples/src/main/resources/people.json"
val peopleDF = spark.read.json(path)

// The inferred schema can be visualized using the printSchema() method
peopleDF.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Creates a temporary view using the DataFrame
peopleDF.createOrReplaceTempView("people")

// SQL statements can be run by using the sql methods provided by spark
val teenagerNamesDF = spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19")
teenagerNamesDF.show()
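
Note that the JSON source expects each line of the file to be a separate, self-contained JSON object. For a pretty-printed multi-line JSON file, Spark 2.2+ accepts a multiLine option; a sketch with a hypothetical file name:

// "people-pretty.json" is a hypothetical file holding one multi-line JSON document or array
val prettyDF = spark.read.option("multiLine", true).json("examples/src/main/resources/people-pretty.json")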

12. Hive data source

import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession

case class Record(key: Int, value: String)

// warehouseLocation points to the default location for managed databases and tables
val warehouseLocation = "spark-warehouse"

val spark = SparkSession
.builder()
.appName("Spark Hive Example")
.config("spark.sql.warehouse.dir", warehouseLocation)
.enableHiveSupport()
.getOrCreate()

import spark.implicits._
import spark.sql

sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries are expressed in HiveQL
sql("SELECT * FROM src").show()
// +---+-------+
// |key| value|
// +---+-------+
// |238|val_238|
// | 86| val_86|
// |311|val_311|
// ...

// Aggregation queries are also supported.
sql("SELECT COUNT(*) FROM src").show()
// +--------+
// |count(1)|
// +--------+
// |     500|
// +--------+

// The results of SQL queries are themselves DataFrames and support all normal functions.
val sqlDF = sql("SELECT key, value FROM src WHERE key < 10 ORDER BY key")

// The items in DataFrames are of type Row, which allows you to access each column by ordinal.
val stringsDS = sqlDF.map {
case Row(key: Int, value: String) => s"Key: $key, Value: $value"
}
stringsDS.show()
// +--------------------+
// | value|
// +--------------------+
// |Key: 0, Value: val_0|
// |Key: 0, Value: val_0|
// |Key: 0, Value: val_0|
// ...

// You can also use DataFrames to create temporary views within a SparkSession.
val recordsDF = spark.createDataFrame((1 to 100).map(i => Record(i, s"val_$i")))
recordsDF.createOrReplaceTempView("records")

// Queries can then join DataFrame data with data stored in Hive.
sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show()
// +---+------+---+------+
// |key| value|key| value|
// +---+------+---+------+
// | 2| val_2| 2| val_2|
// | 4| val_4| 4| val_4|
// | 5| val_5| 5| val_5|
// ...

13. JDBC data source: reading and writing whole tables

For example, to use MySQL, first put the JDBC driver jar (e.g. mysql-connector-java-5.1.21.jar) into the jars directory, then start spark-shell with the jar options:

spark-shell --driver-class-path /usr/local/spark/jars/mysql-connector-java-5.1.21.jar --jars /usr/local/spark/jars/mysql-connector-java-5.1.21.jar

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/openfire")
  .option("dbtable", "ofUser")
  .option("user", "root")
  .option("password", "mysql")
  .load()

// Saving data to a JDBC source
jdbcDF.write
.format("jdbc")
.option("url", "jdbc:postgresql:dbserver")
.option("dbtable", "schema.tablename")
.option("user", "username")
.option("password", "password")
.save()

// Alternatively, using the jdbc() shortcut:
jdbcDF2.write
  .jdbc("jdbc:postgresql:dbserver", "schema.tablename", connectionProperties)
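
The connectionProperties object referenced above is a plain java.util.Properties holding the credentials; a minimal sketch (jdbcDF2 stands for whatever DataFrame you intend to save):

import java.util.Properties

val connectionProperties = new Properties()
connectionProperties.put("user", "username")
connectionProperties.put("password", "password")

// The same properties also work for reading a whole table:
val jdbcDF2 = spark.read.jdbc("jdbc:postgresql:dbserver", "schema.tablename", connectionProperties)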
