Spark Notes: A Simple Example of Reading Hive Data from a Local Spark Program
Note: copy the MySQL JDBC driver jar into spark/lib, copy hive-site.xml into the project's resources directory, and use an IP address rather than a hostname when debugging against the remote cluster.
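For reference, a minimal hive-site.xml sketch that points the client at a remote metastore. The thrift URI below reuses the address from the commented-out setting in the code and is an assumption about your cluster:

<?xml version="1.0"?>
<configuration>
  <!-- Assumed metastore address; matches the commented-out hive.metastore.uris setting in the code below -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.66.66:9083</value>
  </property>
</configuration>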
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

import java.io.FileNotFoundException
import java.io.IOException

object HiveSelect {
  def main(args: Array[String]): Unit = {
    System.setProperty("hadoop.home.dir", "D:\\hadoop") // local Hadoop home (needed for winutils on Windows)
    val conf = new SparkConf()
      .setAppName("HiveApp")
      .setMaster("spark://192.168.66.66:7077")
      .set("spark.executor.memory", "1g")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .setJars(Seq("D:\\workspace\\scala\\out\\scala.jar")) // ship the application jar to the remote Spark cluster
      //.set("hive.metastore.uris", "thrift://192.168.66.66:9083") // remote Hive metastore URI
      //.set("spark.driver.extraClassPath", "D:\\json\\mysql-connector-java-5.1.39.jar")
    val sparkContext = new SparkContext(conf)
    try {
      val hiveContext = new HiveContext(sparkContext)
      hiveContext.sql("USE siat")                 // switch to the target database
      hiveContext.sql("DROP TABLE IF EXISTS src") // drop the table if it already exists
      hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) " +
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'") // create the table
      hiveContext.sql("LOAD DATA LOCAL INPATH 'D:\\workspace\\scala\\src.txt' INTO TABLE src") // load the data
      hiveContext.sql("SELECT * FROM src").collect().foreach(println) // query and print every row
    } catch {
      // Order matters in Scala pattern matching: list specific exceptions
      // before general ones, or the later cases become unreachable.
      case e: FileNotFoundException    => println("Missing file exception")
      case e: IOException              => println("IO exception")
      case e: NumberFormatException    => println(e)
      case e: ArithmeticException      => println(e)
      case e: IllegalArgumentException => println("Illegal argument exception")
      case e: IllegalStateException    => println("Illegal state exception")
      case e: Exception                => println(e)
      case e: Throwable                => println("Found an unknown exception: " + e)
    } finally {
      sparkContext.stop()
    }
  }
}
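The table is declared with tab-delimited fields, so src.txt should contain one tab-separated key/value pair per line. A made-up sample (the values are hypothetical):

1	apple
2	banana
3	cherry

With a file like this loaded, the final SELECT would print each Row in its default bracketed form, e.g. [1,apple].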
Appendix 1: Spark Scala API docs - http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package