1) The test4.txt file in the local directory /home/hadoop/test has the following contents (fields on each line are separated by a tab):

[hadoop@master test]$ sudo vim test4.txt

1	dajiangtai
2	hadoop
3	hive
4	hbase
5	spark
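
The table created later uses fields terminated by '\t', so the separator in this file must be a real tab character, not spaces. A quick way to verify this is cat -A (a standard coreutils option that renders tabs as ^I); shown here as a sanity check:

[hadoop@master test]$ cat -A test4.txt
1^Idajiangtai
2^Ihadoop
3^Ihive
4^Ihbase
5^Ispark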

2) Start hiveserver2

[hadoop@master test]$ cd ${HIVE_HOME}/bin
[hadoop@master bin]$ ll
total
-rwxr-xr-x hadoop hadoop Jan beeline
drwxr-xr-x hadoop hadoop May ext
-rwxr-xr-x hadoop hadoop Jan hive
-rwxr-xr-x hadoop hadoop Jan hive-config.sh
-rwxr-xr-x hadoop hadoop Jan hiveserver2
-rwxr-xr-x hadoop hadoop Jan metatool
-rwxr-xr-x hadoop hadoop Jan schematool
[hadoop@master bin]$ ./hiveserver2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.6./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hive1.0.0/lib/hive-jdbc-1.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
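
Before wiring up the Java program, it is worth confirming that HiveServer2 is actually accepting connections. The beeline client listed in the same bin directory can do this; a quick check might look like the following (-u and -n are standard beeline options; the host, port, and user are the ones used throughout this post):

[hadoop@master bin]$ ./beeline -u jdbc:hive2://master:10000/default -n hadoop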

3) Program code
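
The program below needs the Hive JDBC driver and the Hadoop client libraries on the classpath. In the setup above these come from the jars under ${HIVE_HOME}/lib (for example the hive-jdbc standalone jar). If you build with Maven instead, a dependency sketch along these lines should work (the versions are assumptions; match them to your cluster):

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0</version>
</dependency>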

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Hive {
    // Hive JDBC driver class name
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    // Connection URL for HiveServer2; Hive 0.11.0+ ships this new service
    private static String url = "jdbc:hive2://master:10000/default";
    // A user with the required permissions on HDFS
    private static String user = "hadoop";
    // In non-secure mode the query runs as the given user and the password is ignored
    private static String password = "";
    private static String sql = "";
    private static ResultSet res;

    public static void main(String[] args) {
        try {
            // Load the HiveServer2 JDBC driver
            Class.forName(driverName);
            // Connect to the database identified by the URL
            Connection conn = DriverManager.getConnection(url, user, password);
            Statement stmt = conn.createStatement();
            // Name of the table to create
            String tableName = "testHiveDriverTable";

            // Step 1: drop the table if it already exists
            sql = "drop table " + tableName;
            stmt.execute(sql);

            // Step 2: create the table
            sql = "create table " + tableName
                + " (key int, value string) row format delimited fields terminated by '\t' STORED AS TEXTFILE";
            stmt.execute(sql);

            // Run "show tables"
            sql = "show tables '" + tableName + "'";
            res = stmt.executeQuery(sql);
            if (res.next()) {
                System.out.println(res.getString(1));
            }

            // Run "describe table"
            sql = "describe " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }

            // Run "load data into table"
            // Local file path on the node where the Hive service runs
            String filepath = "/home/hadoop/test/test4.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            stmt.execute(sql);

            // Run "select * query"
            sql = "select * from " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getInt(1) + "\t" + res.getString(2));
            }

            // Run a regular Hive query; this one is compiled into a MapReduce job
            sql = "select count(*) from " + tableName;
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1));
            }

            conn.close();
            conn = null;
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        } catch (SQLException e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}
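
One thing to note about the code above: the Connection, Statement, and ResultSet are never closed on the error paths. On Java 7+ the same flow can be written with try-with-resources so everything is released automatically; a minimal sketch of the pattern, reusing the URL, user, and table from the example (HiveQueryExample is a hypothetical name):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveQueryExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:hive2://master:10000/default";
        // Connection, Statement, and ResultSet all implement AutoCloseable,
        // so try-with-resources closes them in reverse order on every exit path.
        try (Connection conn = DriverManager.getConnection(url, "hadoop", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from testHiveDriverTable")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
            }
        }
    }
}

Recent hive-jdbc builds also register the driver through the JDBC 4.0 service loader, so the explicit Class.forName call can often be dropped; with older jars, keep it.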

4) Run results (right-click --> Run As --> Run on Hadoop)

Running the program as-is fails with an error; see the next post for the solution: HiveSQLException: Error while compiling statement: No privilege 'Create' found for outputs { database:default }
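
The follow-up post covers the actual fix; in short, the error comes from Hive's authorization layer. With the legacy authorization mode enabled, one common approach (shown here only as a hint; verify it against your own configuration) is to grant the missing privilege to the connecting user from the Hive CLI:

hive> grant create on database default to user hadoop;

For test environments, authorization can instead be disabled by setting hive.security.authorization.enabled to false in hive-site.xml.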

The run log (timestamps and byte counts trimmed) looks like this:

INFO  [org.apache.hive.jdbc.Utils] - Supplied authorities: master:10000
INFO  [org.apache.hive.jdbc.Utils] - Resolved authority: master:10000
INFO  [org.apache.hive.jdbc.HiveConnection] - Will try to open client transport with JDBC Uri: jdbc:hive2://master:10000/default
DEBUG [org.apache.thrift.transport.TSaslTransport] - opening transport org.apache.thrift.transport.TSaslClientTransport@3834d63f
DEBUG [org.apache.thrift.transport.TSaslClientTransport] - Sending mechanism name PLAIN and initial response
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Writing message with status START
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Writing message with status COMPLETE
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Start message handled
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Main negotiation loop complete
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: SASL Client receiving last message
DEBUG [org.apache.thrift.transport.TSaslTransport] - CLIENT: Received message with status COMPLETE
... (repeated TSaslTransport "writing data length" / "CLIENT: reading data length" DEBUG lines omitted) ...
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
testhivedrivertable
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
key int
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
value string
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
1 dajiangtai
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
2 hadoop
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
3 hive
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
4 hbase
DEBUG [org.apache.hive.jdbc.HiveQueryResultSet] - Fetched row string:
5 spark
... (remaining TSaslTransport DEBUG lines, including the count(*) result fetch, omitted) ...

Result of the "show tables" step:

testhivedrivertable

Result of the "describe table" step:

key    int
value string

Result of the "select * query" step:

1	dajiangtai
2	hadoop
3	hive
4	hbase
5	spark

Alternatively, view the results directly on the cluster:

hive> show tables;
OK
copy_student1
copy_student2
copy_student3
copy_student4
employee
group_gender_agg
group_gender_sum
group_test
index_test
index_tmp
partition_test
student1
student2
test
test_view
testhivedrivertable
user
Time taken: 0.153 seconds, Fetched: 17 row(s)
hive> desc testhivedrivertable;
OK
key int
value string
Time taken: 0.184 seconds, Fetched: 2 row(s)
hive> select * from testhivedrivertable;
OK
1 dajiangtai
2 hadoop
3 hive
4 hbase
5 spark
Time taken: 0.346 seconds, Fetched: 5 row(s)

That wraps up the main content of this installment. These posts document my own learning process; I hope they give you some useful pointers. If this helped, please leave a like; if not, please bear with me, and do point out any mistakes. Follow me to get updates as soon as they are posted. Thanks!

Copyright notice: this is an original post by the author and may not be reproduced without permission.
