HiveServer2

Look at the files in the /home/hadoop/bigdatasoftware/apache-hive-0.13.1-bin/bin directory; among them is hiveserver2.

Start hiveserver2.

Then open another terminal and check the running processes.

If a RunJar process is present, HiveServer2 is running. Both steps are sketched below.
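
A minimal sketch, assuming the install directory above; running the server in the background via nohup is one common choice:

cd /home/hadoop/bigdatasoftware/apache-hive-0.13.1-bin
nohup bin/hiveserver2 &

# In the second terminal: list the running JVM processes
jps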

beeline

Start beeline.
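
For example, from the same install directory:

bin/beeline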

Connect via JDBC from the beeline prompt:


!connect jdbc:hive2://hadoop-001:10000 hadoop hadoop org.apache.hive.jdbc.HiveDriver

The arguments are the JDBC URL, the user name, the password, and the driver class.


After connecting, you can run the usual range of operations.
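
For example, a few illustrative HiveQL statements at the beeline prompt (dept is the sample table queried by the JDBC code later in this post):

show databases;
use default;
show tables;
select * from dept;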

Using a JDBC client in IDEA to access Hive

Using JDBC

You can use JDBC to access data stored in a relational database or other tabular format.

  1. Load the HiveServer2 JDBC driver. As of Hive 1.2.0, applications no longer need to explicitly load JDBC drivers using Class.forName().

    For example:


    Class.forName("org.apache.hive.jdbc.HiveDriver");

    
    
  2. Connect to the database by creating a Connection object with the JDBC driver.

    For example:


    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "<password>");

    
    

    The default <port> is 10000. In non-secure configurations, specify a <user> for the query to run as. The <password> field value is ignored in non-secure mode.


    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "");

    
    

    In Kerberos secure mode, the user information is based on the Kerberos credentials.
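
    For example, in Kerberos mode the server's principal is appended to the connection URL (a sketch; the placeholder principal must match your HiveServer2 configuration):


    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>/<db>;principal=<Server_Principal_of_HiveServer2>");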

  3. Submit SQL to the database by creating a Statement object and using its executeQuery() method.

    For example:


    Statement stmt = cnct.createStatement();
    ResultSet rset = stmt.executeQuery("SELECT foo FROM bar");

    
    
  4. Process the result set, if necessary.
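
    For example, a minimal way to print each row's first column (getString is a safe generic accessor here, since the type of foo is not specified):


    while (rset.next()) {
        System.out.println(rset.getString(1));
    }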

These steps are illustrated in the sample code below.

The code is as follows:

package com.gec.demo;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcClient {
    private static final String DRIVERNAME = "org.apache.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(DRIVERNAME);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Replace "hadoop" here with the name of the user the queries should run as
        Connection con = DriverManager.getConnection("jdbc:hive2://hadoop-001:10000/default", "hadoop", "hadoop");
        Statement stmt = con.createStatement();
        String tableName = "dept";
        // stmt.execute("drop table if exists " + tableName);
        // stmt.execute("create table " + tableName + " (key int, value string)");

        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        // String filepath = "/";
        // sql = "load data local inpath '" + filepath + "' into table " + tableName;
        // System.out.println("Running: " + sql);
        // stmt.execute(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}

The pom.xml configuration file is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>BigdataStudy</artifactId>
        <groupId>com.gec.demo</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>HiveJdbcClient</artifactId>
    <name>HiveJdbcClient</name>
    <!-- FIXME change it to the project's website -->
    <url>http://www.example.com</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <hadoop.version>2.7.2</hadoop.version>
        <!--<hive.version>0.13.1</hive.version>-->
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.8.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>0.13.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>0.13.1</version>
        </dependency>
    </dependencies>
</project>
