1 Overview
Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine.
 
2 DataFrames
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python.
 
2.1 SQLContext
The entry point into all functionality in Spark SQL is the SQLContext class, or one of its descendants. To create a basic SQLContext, all you need is a SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
 
2.2 Creating DataFrames
val df = sqlContext.read.json("/home/slh/data/people.json")
 
2.3 Operations
// Display the contents of the DataFrame
df.show()
// Print the schema in a tree format
df.printSchema()
// Select only the "name" column
df.select("name").show()
// Select everybody's name and increment the age by 1
df.select(df("name"), df("age") + 1).show()
// Select people older than 13
df.filter(df("age") > 13).show()
// Count people by age
df.groupBy("age").count().show()
 
2.4 Running SQL Queries
The sql function on a SQLContext enables applications to run SQL queries programmatically and returns the result as a DataFrame.
val df = sqlContext.sql("select * from table")
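The query can only reference tables visible to the SQLContext, so a DataFrame is usually registered as a temporary table first. A minimal sketch, assuming the df loaded above has name and age columns (the table name people is chosen for illustration):

// Register the DataFrame as a temporary table so SQL can reference it
df.registerTempTable("people")

// Run a SQL query; the result comes back as a DataFrame
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
teenagers.show()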
 
2.5 Interoperating with RDDs
Spark SQL supports two different methods for converting existing RDDs into DataFrames. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
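A minimal sketch of the reflection-based approach, assuming a people.txt file of comma-separated name,age records (the case class, path, and table name are illustrative):

// Define a case class; its fields become the DataFrame's columns
case class Person(name: String, age: Int)

// Implicit conversions from RDDs to DataFrames
import sqlContext.implicits._

val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
  .toDF()
people.registerTempTable("people")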
The second method for creating DataFrames is through a programmatic interface that allows you to construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct DataFrames when the columns and their types are not known until runtime.
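A sketch of the programmatic approach on the same illustrative people.txt data, where the schema is built from a string only known at runtime:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// The schema is encoded in a string
val schemaString = "name age"
val schema = StructType(
  schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

// Convert the records of the RDD to Rows, then apply the schema
val rowRDD = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Row(p(0), p(1).trim))
val peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema)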
 
3 Data Sources
Spark SQL supports operating on a variety of data sources through the DataFrame interface. A DataFrame can be operated on as a normal RDD and can also be registered as a temporary table. Registering a DataFrame as a table allows you to run SQL queries over its data. This section describes the general methods for loading and saving data using the Spark data sources API and then goes into the specific options that are available for the built-in data sources.
 
3.1 Load/Save Functions
Generic:
val df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
Specifying the format explicitly:
val df = sqlContext.read.format("json").load("examples/src/main/resources/people.json")
df.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
Save Modes:
SaveMode.ErrorIfExists (default): throw an error if data already exists at the target location
SaveMode.Append: append the DataFrame's contents to the existing data
SaveMode.Overwrite: replace the existing data with the DataFrame's contents
SaveMode.Ignore: leave the existing data unchanged and write nothing, similar to CREATE TABLE IF NOT EXISTS in SQL
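A save mode is supplied through the mode() method on the writer. A short sketch reusing the df loaded above (the output path is illustrative):

import org.apache.spark.sql.SaveMode

// Overwrite the target if it already exists
df.select("name", "age").write.mode(SaveMode.Overwrite).format("parquet").save("namesAndAges.parquet")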
 
3.2 Parquet Files
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data.
 
// The people RDD is implicitly converted to a DataFrame (via import sqlContext.implicits._), allowing it to be stored using Parquet.
people.write.parquet("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a Parquet file is also a DataFrame.
val parquetFile = sqlContext.read.parquet("people.parquet")
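The loaded DataFrame can then be registered and queried like any other table; a short sketch, assuming the people data has name and age columns:

// Register the Parquet-backed DataFrame as a temporary table and query it
parquetFile.registerTempTable("parquetFile")
val teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenagers.show()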

 
3.3 JSON Datasets
// The path can point to a single JSON file or a directory of JSON files;
// the schema is inferred automatically.
val path = "examples/src/main/resources/people.json"
val people = sqlContext.read.json(path)

// Alternatively, a DataFrame can be created from an RDD[String]
// storing one JSON object per string.
val anotherPeopleRDD = sc.parallelize(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val anotherPeople = sqlContext.read.json(anotherPeopleRDD)

 
3.4 Hive Tables
Spark SQL also supports reading and writing data stored in Apache Hive. However, since Hive has a large number of dependencies, it is not included in the default Spark assembly. Hive support is enabled by adding the -Phive and -Phive-thriftserver flags to Spark’s build. This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries (SerDes) in order to access data stored in Hive.
 
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
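With a HiveContext, queries are expressed in HiveQL; a short sketch using the kv1.txt sample data that ships with the Spark examples (the table name src is illustrative):

// Create a Hive table, load data into it, then query it with HiveQL
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)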
