Hive 2.3.4 on Spark 2.4.0

Hive on Spark provides Hive with the ability to utilize Apache Spark as its execution engine.

set hive.execution.engine=spark;

1 version

Hive on Spark is only tested with a specific version of Spark, so a given version of Hive is only guaranteed to work with a specific version of Spark. Other versions of Spark may work with a given version of Hive, but that is not guaranteed. The Hive wiki maintains a table of Hive versions and their corresponding compatible Spark versions; for Hive 2.3.x the tested Spark version is 2.0.0.

The version pairings in that table are the ones that have actually been tested; other combinations may also work, but they need to be verified.

2 yarn

Instead of the capacity scheduler, the fair scheduler is required.  This fairly distributes an equal share of resources for jobs in the YARN cluster.

yarn-site.xml

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
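Optionally the fair scheduler can also be pointed at an allocation file that defines queues. A minimal sketch (the file path and queue name below are illustrative, not part of the original setup):

<property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>/etc/hadoop/conf/fair-scheduler.xml</value>
</property>

fair-scheduler.xml:

<allocations>
    <!-- a single default queue; jobs share resources with equal weight -->
    <queue name="default">
        <weight>1.0</weight>
    </queue>
</allocations>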

3 spark

$ export SPARK_HOME=...

Note that you must use a version of Spark that does not include the Hive jars, i.e. one that was not built with the Hive profile. If you will use Parquet tables, it is recommended to also enable the "parquet-provided" profile; otherwise there could be conflicts in the Parquet dependency.

In other words, you cannot simply point Hive at an existing standard Spark installation: the bundled Hive jars and the bundled Parquet jars are both likely to cause conflicts.
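A quick sanity check before pointing Hive at a Spark directory (my own check, not from the wiki): the jars directory should contain neither Hive nor Parquet jars.

$ ls $SPARK_HOME/jars | grep -i hive      # should print nothing for a without-hive build
$ ls $SPARK_HOME/jars | grep -i parquet   # should also be empty if built with parquet-provided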

4 library

$ ln -s $SPARK_HOME/jars/scala-library-2.11.8.jar $HIVE_HOME/lib/scala-library-2.11.8.jar
$ ln -s $SPARK_HOME/jars/spark-core_2.11-2.0.2.jar $HIVE_HOME/lib/spark-core_2.11-2.0.2.jar
$ ln -s $SPARK_HOME/jars/spark-network-common_2.11-2.0.2.jar $HIVE_HOME/lib/spark-network-common_2.11-2.0.2.jar

Prior to Hive 2.2.0, link the spark-assembly jar to HIVE_HOME/lib

Spark versions before 2.x ship a single spark-assembly.jar; simply link that jar into HIVE_HOME/lib.
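For those older versions the link would look roughly like this (assuming a Spark 1.x layout with lib/spark-assembly-*.jar):

$ ln -s $SPARK_HOME/lib/spark-assembly-*.jar $HIVE_HOME/lib/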

5 hive

$ hive
hive> set hive.execution.engine=spark;

By default spark.master=yarn. More configuration options:

set spark.master=<Spark Master URL>;
set spark.eventLog.enabled=true;
set spark.eventLog.dir=<Spark event log folder (must exist)>;
set spark.executor.memory=512m;
set spark.executor.instances=10;
set spark.executor.cores=1;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;

These settings can be issued interactively just like any other Hive configuration (as above), placed in hive-site.xml, or put into a spark-defaults.conf file under HIVE_CONF_DIR.

This can be done either by adding a file "spark-defaults.conf" with these properties to the Hive classpath, or by setting them on Hive configuration (hive-site.xml).
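For example, a spark-defaults.conf dropped into HIVE_CONF_DIR with the same properties could look like this (the event log path is a placeholder and the sizes are the illustrative values above, not tuned):

spark.master                    yarn
spark.eventLog.enabled          true
spark.eventLog.dir              hdfs://namenode:8020/spark-event-log
spark.executor.memory           512m
spark.executor.instances        10
spark.executor.cores            1
spark.serializer                org.apache.spark.serializer.KryoSerializer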

6 errors

Running a SQL statement in Hive fails with:

FAILED: SemanticException Failed to get a spark session: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client

The Hive execution log is written to /tmp/$user/hive.log.
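To watch it while reproducing the failure (assuming the default log location):

$ tail -f /tmp/$(whoami)/hive.log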

Detailed error log:

2019-03-05 11:06:43 ERROR ApplicationMaster:91 - User class threw exception: java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:47)
at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:134)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)

This is because the Spark distribution was built with the Hive dependency included; try a build without Hive:

https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.4-without-hive.tgz
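Download and unpack it, then point SPARK_HOME at the result (paths illustrative):

$ wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.4-without-hive.tgz
$ tar xzf spark-2.0.0-bin-hadoop2.4-without-hive.tgz
$ export SPARK_HOME=$PWD/spark-2.0.0-bin-hadoop2.4-without-hive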

Running again, a Parquet version conflict is reported:

Caused by: java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$BaseGroupBuilder;

The only option left is to build Spark from source.

1) Spark 2.0-2.2

./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.7,parquet-provided"

This produces spark-2.0.2-bin-hadoop2-without-hive.tgz.

2) Spark 2.3 and above

./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.7,parquet-provided,orc-provided"

This produces spark-2.4.0-bin-hadoop2-without-hive.tgz.
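The built tarball is used the same way as a downloaded one (directory names illustrative):

$ tar xzf spark-2.4.0-bin-hadoop2-without-hive.tgz
$ export SPARK_HOME=$PWD/spark-2.4.0-bin-hadoop2-without-hive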

Running again with spark-2.0.2-bin-hadoop2-without-hive.tgz, there is still an error:

2019-03-05T17:10:55,537 ERROR [901dc3cf-a990-4e8b-95ec-fcf6a9c9002c main] ql.Driver: FAILED: SemanticException Failed to get a spark session: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
org.apache.hadoop.hive.ql.parse.SemanticException: Failed to get a spark session: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.

Detailed error log:

2019-03-05T17:08:37,364 INFO [stderr-redir-1] client.SparkClientImpl: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream

The jar containing this class is missing (the hadoop-provided build does not bundle the Hadoop jars); copy the missing jars in from a standard Spark distribution (here spark-2.4.0-bin-hadoop2.6):

$ cd spark-2.0.2-bin-hadoop2-without-hive
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/hadoop-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/slf4j-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/log4j-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/guava-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/commons-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/protobuf-* jars/
$ cp ../spark-2.4.0-bin-hadoop2.6/jars/htrace-* jars/
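A quick check that the Hadoop classes are now on Spark's classpath (my own sanity check; org.apache.hadoop.fs.FSDataInputStream lives in hadoop-common):

$ ls jars/ | grep hadoop-common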

This time it works; running the SQL statement produces output like:

Query ID = hadoop_20190305180847_e8b638c8-394c-496d-a43e-26a0a17f9e18
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Spark Job = d5fea72c-c67c-49ec-9f4c-650a795c74c3
Running with YARN Application = application_1551754784891_0008
Kill Command = $HADOOP_HOME/bin/yarn application -kill application_1551754784891_0008

Query Hive on Spark job[1] stages: [2, 3]

Status: Running (Hive on Spark job[1])
--------------------------------------------------------------------------------------
STAGES ATTEMPT STATUS TOTAL COMPLETED RUNNING PENDING FAILED
--------------------------------------------------------------------------------------
Stage-2 ........ 0 FINISHED 275 275 0 0 0
Stage-3 ........ 0 FINISHED 1009 1009 0 0 0
--------------------------------------------------------------------------------------
STAGES: 02/02 [==========================>>] 100% ELAPSED TIME: 149.58 s
--------------------------------------------------------------------------------------
Status: Finished successfully in 149.58 seconds
OK

Using spark-2.4.0-bin-hadoop2-without-hive.tgz also works without problems.

References:

https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark

https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
