1. Several Ways to Import Data into Hive

First, here are the data files and Hive tables used to illustrate the import methods below.

Import:

  1. Load a local file into a Hive table;
  2. Insert from one Hive table into another;
  3. Load an HDFS file into a Hive table;
  4. Import from another table while creating a table;
  5. Import a MySQL database into a Hive table via Sqoop; for examples see "Importing and Exporting Between MySQL and Hive via Sqoop" and "Periodically Syncing HIVE Data from the Big-Data Platform to Oracle".

Export:

  1. Export a Hive table to the local file system;
  2. Export a Hive table to HDFS;
  3. Export a Hive table to a MySQL database via Sqoop;

The Hive tables:

Create testA:

CREATE TABLE testA (
id INT,
name STRING,
area STRING
) PARTITIONED BY (create_time STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Create testB:

CREATE TABLE testB (
id INT,
name STRING,
area STRING,
code STRING
) PARTITIONED BY (create_time STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Data file (sourceA.txt):

1,fish1,SZ
2,fish2,SH
3,fish3,HZ
4,fish4,QD
5,fish5,SR

Data file (sourceB.txt):

1,zy1,SZ,1001
2,zy2,SH,1002
3,zy3,HZ,1003
4,zy4,QD,1004
5,zy5,SR,1005

(1) Loading a local file into a Hive table
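With the data files on the local disk of the machine running the Hive CLI, the load can be sketched as follows (the /home/hadoop paths and the 2015-07-08 partition value are assumptions for illustration):

```sql
-- LOCAL means the path is on the client's local file system;
-- the file is copied into the table's warehouse directory.
LOAD DATA LOCAL INPATH '/home/hadoop/sourceA.txt'
INTO TABLE testA PARTITION(create_time='2015-07-08');

LOAD DATA LOCAL INPATH '/home/hadoop/sourceB.txt'
INTO TABLE testB PARTITION(create_time='2015-07-08');
```

Unlike the HDFS load in section (3), LOAD DATA LOCAL copies the file and leaves the original in place.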

(2) Inserting from one Hive table into another

Import data from testB into testA:

hive> INSERT INTO TABLE testA PARTITION(create_time='2015-07-11') select id, name, area from testB where id = 1;
... (output omitted)
OK
Time taken: 14.744 seconds
hive> INSERT INTO TABLE testA PARTITION(create_time) select id, name, area, code from testB where id = 2;
... (output omitted)
OK
Time taken: 19.852 seconds
hive> select * from testA;
OK
2 zy2 SH 1002
1 fish1 SZ 2015-07-08
2 fish2 SH 2015-07-08
3 fish3 HZ 2015-07-08
4 fish4 QD 2015-07-08
5 fish5 SR 2015-07-08
1 zy1 SZ 2015-07-11
Time taken: 0.032 seconds, Fetched: 7 row(s)

Notes:

1. The testB row with id=1 is inserted into testA under the static partition create_time='2015-07-11'.

2. The testB row with id=2 is inserted with a dynamic partition: its create_time partition value is taken from that row's code column (1002).
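A dynamic-partition insert like the second statement above only works when dynamic partitioning is enabled; in nonstrict mode every partition column may be resolved dynamically. A minimal sketch of the required session settings:

```sql
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- create_time carries no explicit value, so it is filled per row
-- from the last column of the SELECT (here: code).
INSERT INTO TABLE testA PARTITION(create_time)
SELECT id, name, area, code FROM testB WHERE id = 2;
```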

(3) Loading an HDFS file into a Hive table

Upload sourceA.txt and sourceB.txt to HDFS, at /home/hadoop/sourceA.txt and /home/hadoop/sourceB.txt respectively:

hive> LOAD DATA INPATH '/home/hadoop/sourceA.txt' INTO TABLE testA PARTITION(create_time='2015-07-08');
... (output omitted)
OK
Time taken: 0.237 seconds
hive> LOAD DATA INPATH '/home/hadoop/sourceB.txt' INTO TABLE testB PARTITION(create_time='2015-07-09');
... (output omitted)
OK
Time taken: 0.212 seconds
hive> select * from testA;
OK
1 fish1 SZ 2015-07-08
2 fish2 SH 2015-07-08
3 fish3 HZ 2015-07-08
4 fish4 QD 2015-07-08
5 fish5 SR 2015-07-08
Time taken: 0.029 seconds, Fetched: 5 row(s)
hive> select * from testB;
OK
1 zy1 SZ 1001 2015-07-09
2 zy2 SH 1002 2015-07-09
3 zy3 HZ 1003 2015-07-09
4 zy4 QD 1004 2015-07-09
5 zy5 SR 1005 2015-07-09
Time taken: 0.047 seconds, Fetched: 5 row(s)

/home/hadoop/sourceA.txt is loaded into testA, and /home/hadoop/sourceB.txt into testB. Note that LOAD DATA INPATH without the LOCAL keyword moves the source file from its original HDFS location into the table's warehouse directory, rather than copying it.

(4) Importing from another table while creating it (CTAS)

hive> create table testC as select name, code from testB;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1449746265797_0106, Tracking URL = http://hadoopcluster79:8088/proxy/application_1449746265797_0106/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job -kill job_1449746265797_0106
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-24 16:40:17,981 Stage-1 map = 0%, reduce = 0%
2015-12-24 16:40:23,115 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.11 sec
MapReduce Total cumulative CPU time: 1 seconds 110 msec
Ended Job = job_1449746265797_0106
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop2cluster/tmp/hive-root/hive_2015-12-24_16-40-09_983_6048680148773453194-1/-ext-10001
Moving data to: hdfs://hadoop2cluster/home/hadoop/hivedata/warehouse/testc
Table default.testc stats: [numFiles=1, numRows=0, totalSize=45, rawDataSize=0]
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.11 sec HDFS Read: 297 HDFS Write: 45 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 110 msec
OK
Time taken: 14.292 seconds
hive> desc testC;
OK
name string
code string
Time taken: 0.032 seconds, Fetched: 2 row(s)
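As the desc output shows, CTAS copies the query's schema and data, but the new table is not partitioned (Hive of this vintage does not allow a partitioned CTAS target). The row format and storage of the new table can be set in the same statement; a sketch, with testD as a hypothetical table name:

```sql
-- Hypothetical table testD: same data as testC, but with an
-- explicit comma delimiter instead of Hive's default ^A.
CREATE TABLE testD
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
AS SELECT name, code FROM testB;
```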

2. Several Ways to Export Data from Hive

(1) Exporting to the local file system

hive> INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/output' ROW FORMAT DELIMITED FIELDS TERMINATED by ',' select * from testA;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1451024007879_0001, Tracking URL = http://hadoopcluster79:8088/proxy/application_1451024007879_0001/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job -kill job_1451024007879_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-25 17:04:30,447 Stage-1 map = 0%, reduce = 0%
2015-12-25 17:04:35,616 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.16 sec
MapReduce Total cumulative CPU time: 1 seconds 160 msec
Ended Job = job_1451024007879_0001
Copying data to local directory /home/hadoop/output
Copying data to local directory /home/hadoop/output
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.16 sec HDFS Read: 305 HDFS Write: 110 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 160 msec
OK
Time taken: 16.701 seconds

View the result:

[hadoop@hadoopcluster78 output]$ cat /home/hadoop/output/000000_0
1,fish1,SZ,2015-07-08
2,fish2,SH,2015-07-08
3,fish3,HZ,2015-07-08
4,fish4,QD,2015-07-08
5,fish5,SR,2015-07-08

The INSERT OVERWRITE LOCAL DIRECTORY statement exports the data of Hive table testA to /home/hadoop/output. Like most HQL statements, it runs as a MapReduce job; /home/hadoop/output is simply that job's output directory, and the result lands in a file named 000000_0.

(2) Exporting to HDFS

Exporting to HDFS works just like exporting to the local file system: drop the LOCAL keyword from the HQL statement.

hive> INSERT OVERWRITE DIRECTORY '/home/hadoop/output' select * from testA;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1451024007879_0002, Tracking URL = http://hadoopcluster79:8088/proxy/application_1451024007879_0002/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job -kill job_1451024007879_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-25 17:08:51,034 Stage-1 map = 0%, reduce = 0%
2015-12-25 17:08:59,313 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.4 sec
MapReduce Total cumulative CPU time: 1 seconds 400 msec
Ended Job = job_1451024007879_0002
Stage-3 is selected by condition resolver.
Stage-2 is filtered out by condition resolver.
Stage-4 is filtered out by condition resolver.
Moving data to: hdfs://hadoop2cluster/home/hadoop/hivedata/hive-hadoop/hive_2015-12-25_17-08-43_733_1768532778392261937-1/-ext-10000
Moving data to: /home/hadoop/output
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.4 sec HDFS Read: 305 HDFS Write: 110 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 400 msec
OK
Time taken: 16.667 seconds

View the HDFS output file:
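The exported file can be inspected with the hadoop CLI (a sketch; the file name follows the same 000000_0 convention as the local export above). Since no ROW FORMAT was given in the INSERT OVERWRITE DIRECTORY statement, the columns are separated by Hive's default ^A (\001) delimiter rather than commas:

```shell
# List the export directory, then dump the result file.
hadoop fs -ls /home/hadoop/output
hadoop fs -cat /home/hadoop/output/000000_0
```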

Other methods

Data can also be exported with the hive CLI's -e and -f options.

With -e, the SQL statement follows the option directly; the path after >> is the output file:

[hadoop@hadoopcluster78 bin]$ ./hive -e "select * from testA" >> /home/hadoop/output/testA.txt
15/12/25 17:15:07 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead Logging initialized using configuration in file:/home/hadoop/apache/hive-0.13.1/conf/hive-log4j.properties
OK
Time taken: 1.128 seconds, Fetched: 5 row(s)
[hadoop@hadoopcluster78 bin]$ cat /home/hadoop/output/testA.txt
1 fish1 SZ 2015-07-08
2 fish2 SH 2015-07-08
3 fish3 HZ 2015-07-08
4 fish4 QD 2015-07-08
5 fish5 SR 2015-07-08

With -f, the option takes a file containing the SQL statement; the path after >> is the output file.

The SQL file:

[hadoop@hadoopcluster78 bin]$ cat /home/hadoop/output/sql.sql
select * from testA

Run it with -f:

[hadoop@hadoopcluster78 bin]$ ./hive -f /home/hadoop/output/sql.sql >> /home/hadoop/output/testB.txt
15/12/25 17:20:52 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead Logging initialized using configuration in file:/home/hadoop/apache/hive-0.13.1/conf/hive-log4j.properties
OK
Time taken: 1.1 seconds, Fetched: 5 row(s)

View the result:

[hadoop@hadoopcluster78 bin]$ cat /home/hadoop/output/testB.txt
1 fish1 SZ 2015-07-08
2 fish2 SH 2015-07-08
3 fish3 HZ 2015-07-08
4 fish4 QD 2015-07-08
5 fish5 SR 2015-07-08
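Note that the WARN and logging messages shown above go to stderr, so they do not end up in the redirected output file. The CLI's -S (silent) flag additionally suppresses informational messages; a sketch, assuming the same testA table:

```shell
# -S runs the CLI in silent mode; only query results reach stdout.
./hive -S -e "select * from testA" >> /home/hadoop/output/testA.txt
```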
