3. Hive's ThriftServer Service
1. Introduction to ThriftServer
A plain Hive installation only lets you run queries, updates, and similar operations through HiveQL in the CLI, which is a clumsy, one-dimensional way to work. Fortunately, Hive provides a thin-client implementation: through HiveServer or HiveServer2, a client can operate on Hive data without starting the CLI. Both allow remote clients to submit requests to Hive in a variety of programming languages such as Java and Python and retrieve the results, connecting to Hive's ThriftServer over the JDBC protocol.
Enables remote access
Allows connecting to multiple Hive servers from the command line
2. Starting ThriftServer
Start Hive's ThriftServer
#cd /soft/hive/bin/
#./hiveserver2
#By default this runs in the foreground, so open another terminal for the client
#The service listens on port 10000 by default
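If you do not want to tie up a terminal, the server can also be started in the background; a minimal sketch, assuming the same /soft/hive install path (the log file location and the alternate port 10001 are only illustrative values):
#cd /soft/hive/bin/
#nohup ./hiveserver2 > /tmp/hiveserver2.log 2>&1 &
#To listen on a different port, override the property on the command line:
#./hiveserver2 --hiveconf hive.server2.thrift.port=10001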
Use beeline, the client-side command-line program, to connect to the HiveServer2 server
#beeline
Beeline version 2.1.1 by Apache Hive
#Enter the service to connect to
beeline> !connect jdbc:hive2://localhost:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000:
Enter password for jdbc:hive2://localhost:10000:
Connected to: Apache Hive (version 2.1.1)
Driver: Hive JDBC (version 2.1.1)
17/07/13 10:31:00 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000>
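Beeline can also take the connection details directly on the command line instead of the interactive !connect step; a minimal sketch (the user name root is just a placeholder for whatever account your setup expects):
#beeline -u jdbc:hive2://localhost:10000 -n root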
3. Basic Beeline Usage
View databases
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| liuyao |
+----------------+--+
2 rows selected (1.485 seconds)
View tables
0: jdbc:hive2://localhost:10000> use liuyao;
No rows affected (0.123 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name |
+-----------+--+
| test |
+-----------+--+
1 row selected (0.283 seconds)
Or use
0: jdbc:hive2://localhost:10000> !tables
+------------+--------------+-------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+--+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_CAT | TYPE_SCHEM | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION |
+------------+--------------+-------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+--+
| | liuyao | test | TABLE | NULL | NULL | NULL | NULL | NULL | NULL |
+------------+--------------+-------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+--+
0: jdbc:hive2://localhost:10000>
Create a table
CREATE TABLE emp0
(
name string,
arr ARRAY<string>,
stru1 STRUCT<sex:string,age:int>,
map1 MAP<string,int>,
map2 MAP<string,ARRAY<string>>
)
;
View the table structure
0: jdbc:hive2://localhost:10000> desc emp0;
+-----------+-----------------------------+----------+--+
| col_name | data_type | comment |
+-----------+-----------------------------+----------+--+
| name | string | |
| arr | array<string> | |
| stru1 | struct<sex:string,age:int> | |
| map1 | map<string,int> | |
| map2 | map<string,array<string>> | |
+-----------+-----------------------------+----------+--+
5 rows selected (0.296 seconds)
Drop a table
0: jdbc:hive2://localhost:10000> use default;
No rows affected (0.092 seconds)
0: jdbc:hive2://localhost:10000> drop table emp0;
No rows affected (1.823 seconds)
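If a script should not fail when the table is already gone, the IF EXISTS form can be used instead; a minimal sketch:
drop table if exists emp0;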
4. Data Import and Query Demo
Generate the data
#vim /root/hive.data
Put the following data into the file:
Michael|Montreal,Toronto|Male,30|DB:80|Product:Developer^DLead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
Create the table
CREATE TABLE emp0
(
name string,
arr ARRAY<string>,
stru1 STRUCT<sex:string,age:int>,
map1 MAP<string,int>,
map2 MAP<string,ARRAY<string>>
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|' -- field separator
COLLECTION ITEMS TERMINATED BY ',' -- collection element separator
MAP KEYS TERMINATED BY ':' -- map key/value separator
LINES TERMINATED BY '\n'; -- line separator
Import the data
If the data is on the local filesystem, use:
0: jdbc:hive2://localhost:10000> load data local inpath '/root/hive.data' into table emp0;
No rows affected (0.956 seconds)
0: jdbc:hive2://localhost:10000>
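If the file has already been uploaded to HDFS, the LOCAL keyword is dropped; a minimal sketch, where /user/root/hive.data is only a hypothetical HDFS path (note that this moves the file into the table's warehouse directory rather than copying it):
load data inpath '/user/root/hive.data' into table emp0;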
Query
0: jdbc:hive2://localhost:10000> select * from emp0;
+------------+-------------------------+----------------------------+-----------------------+----------------------------------------+--+
| emp0.name | emp0.arr | emp0.stru1 | emp0.map1 | emp0.map2 |
+------------+-------------------------+----------------------------+-----------------------+----------------------------------------+--+
| Michael | ["Montreal","Toronto"] | {"sex":"Male","age":30} | {"DB":80} | {"Product":["Developer^DLead"]} |
| Will | ["Montreal"] | {"sex":"Male","age":35} | {"Perl":85} | {"Product":["Lead"],"Test":["Lead"]} |
| Shelley | ["New York"] | {"sex":"Female","age":27} | {"Python":80} | {"Test":["Lead"],"COE":["Architect"]} |
| Lucy | ["Vancouver"] | {"sex":"Female","age":57} | {"Sales":89,"HR":94} | {"Sales":["Lead"]} |
| | NULL | NULL | NULL | NULL |
+------------+-------------------------+----------------------------+-----------------------+----------------------------------------+--+
5 rows selected (1.049 seconds)
0: jdbc:hive2://localhost:10000> select arr[0] from emp0;
+------------+--+
| c0 |
+------------+--+
| Montreal |
| Montreal |
| New York |
| Vancouver |
| NULL |
+------------+--+
5 rows selected (0.656 seconds)
0: jdbc:hive2://localhost:10000> select stru1 from emp0;
+----------------------------+--+
| stru1 |
+----------------------------+--+
| {"sex":"Male","age":30} |
| {"sex":"Male","age":35} |
| {"sex":"Female","age":27} |
| {"sex":"Female","age":57} |
| NULL |
+----------------------------+--+
5 rows selected (0.193 seconds)
0: jdbc:hive2://localhost:10000> select map1 from emp0;
+-----------------------+--+
| map1 |
+-----------------------+--+
| {"DB":80} |
| {"Perl":85} |
| {"Python":80} |
| {"Sales":89,"HR":94} |
| NULL |
+-----------------------+--+
5 rows selected (0.216 seconds)
0: jdbc:hive2://localhost:10000> select map1["DB"] from emp0;
+-------+--+
| c0 |
+-------+--+
| 80 |
| NULL |
| NULL |
| NULL |
| NULL |
+-------+--+
5 rows selected (0.249 seconds)
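Individual struct fields and nested map values can be addressed directly in the select list as well; a minimal sketch against the same emp0 table (results omitted):
select stru1.sex, stru1.age from emp0;
select map2["Product"][0] from emp0;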
5. Connecting to the ThriftServer Programmatically via the JDBC API
import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TestCURD {
    @Test
    public void select() throws Exception {
        // Hive JDBC driver and the HiveServer2 URL (database liuyao)
        String driverClass = "org.apache.hive.jdbc.HiveDriver";
        String url = "jdbc:hive2://192.168.10.145:10000/liuyao";
        Class.forName(driverClass);
        Connection connection = DriverManager.getConnection(url);
        System.out.println(connection);
        Statement statement = connection.createStatement();
        ResultSet rs = statement.executeQuery("SELECT * FROM emp0");
        while (rs.next()) {
            // The first column of emp0 is the string column name;
            // complex columns (array/struct/map) come back as JSON-style strings
            String name = rs.getString(1);
            String arr = rs.getString(2);
            System.out.println(name + "," + arr);
        }
        rs.close();
        statement.close();
        connection.close();
    }
}
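To compile and run this client, the Hive JDBC driver must be on the classpath; with Maven this is typically the org.apache.hive:hive-jdbc artifact matching the server version (2.1.1 here) along with its Hadoop dependencies, and the @Test annotation assumes JUnit is available.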