In SQL, JOIN is the usual way to combine two tables for relational queries. For example, joining a users table with an orders table tells you which products a given user bought, or which users bought a given product.

Hive supports the same operations, and because Hive runs on top of Hadoop, there is extra room for optimization: for example, joining a small table to a large one, caching the small table in memory while streaming the large table instead of caching it, and so on.
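The "cache the small table" idea is what Hive calls a map join: the small table is loaded into an in-memory hash table and the large table is streamed past it, so no shuffle phase is needed. A minimal Python sketch of that idea with toy data (illustrative only, not Hive's actual implementation):

```python
# Sketch of a map-side (broadcast) hash join, the idea behind Hive's
# small-table optimization. Illustrative only, not Hive's actual code.

def map_join(small_rows, big_rows, small_key, big_key):
    # Build phase: index the small table by its join key; it fits in memory.
    index = {}
    for row in small_rows:
        index.setdefault(row[small_key], []).append(row)
    # Probe phase: stream the big table; each row is matched locally
    # against the in-memory index, so no shuffle/reduce step is required.
    for row in big_rows:
        for match in index.get(row[big_key], []):
            yield row + match

small = [("1", "xxx"), ("2", "yyy"), ("3", "zzz")]
big = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
print(list(map_join(small, big, small_key=0, big_key=2)))
# [('1', 'a', '3', '3', 'zzz'), ('3', 'c', '1', '1', 'xxx')]
```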

Let's walk through Hive's join operations; they work much like standard SQL.

Data preparation: create the data files --> create the tables --> load the data

First create two raw data files. Each has three columns: an id, a name, and an id that references the other table. Looking at the second column makes it easy to verify the join results later:

    [xingoo@localhost tmp]$ cat aa.txt
    1 a 3
    2 b 4
    3 c 1
    [xingoo@localhost tmp]$ cat bb.txt
    1 xxx 2
    2 yyy 3
    3 zzz 5

Next create the two tables. Note that the field delimiter is a space; the second table can be created directly from the first table's schema with LIKE.

    hive> create table aa
        > (a string,b string,c string)
        > row format delimited
        > fields terminated by ' ';
    OK
    Time taken: 0.19 seconds
    hive> create table bb like aa;
    OK
    Time taken: 0.188 seconds

Check both table schemas:

    hive> describe aa;
    OK
    a string
    b string
    c string
    Time taken: 0.068 seconds, Fetched: 3 row(s)
    hive> describe bb;
    OK
    a string
    b string
    c string
    Time taken: 0.045 seconds, Fetched: 3 row(s)

With the tables in place, load the data from the local files:

    hive> load data local inpath '/usr/tmp/aa.txt' overwrite into table aa;
    Loading data to table test.aa
    OK
    Time taken: 0.519 seconds
    hive> load data local inpath '/usr/tmp/bb.txt' overwrite into table bb;
    Loading data to table test.bb
    OK
    Time taken: 0.321 seconds

Inner join

An inner join, driven by the ON clause, returns only the rows from table 1 and table 2 that satisfy the join condition.

    hive> select * from aa a join bb b on a.c=b.a;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824161233_f9ecefa2-e5d7-416d-8d90-e191937e7313
    Total jobs = 1
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    2016-08-24 16:12:44 Starting to launch local task to process map join; maximum memory = 518979584
    2016-08-24 16:12:47 Dump the side-table for tag: 0 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-12-33_145_337836390845333215-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable
    2016-08-24 16:12:47 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-12-33_145_337836390845333215-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable (332 bytes)
    2016-08-24 16:12:47 End of local task; Time Taken: 3.425 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-08-24 16:12:50,222 Stage-3 map = 100%, reduce = 0%
    Ended Job = job_local944389202_0007
    MapReduce Jobs Launched:
    Stage-Stage-3: HDFS Read: 1264 HDFS Write: 90 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    3 c 1 1 xxx 2
    1 a 3 3 zzz 5
    Time taken: 17.083 seconds, Fetched: 2 row(s)
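For comparison, the same inner join can be reproduced on this sample data with any SQL engine; a standalone sketch using Python's built-in sqlite3 module, with the schema mirroring the Hive tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE aa (a TEXT, b TEXT, c TEXT);
    CREATE TABLE bb (a TEXT, b TEXT, c TEXT);
    INSERT INTO aa VALUES ('1','a','3'), ('2','b','4'), ('3','c','1');
    INSERT INTO bb VALUES ('1','xxx','2'), ('2','yyy','3'), ('3','zzz','5');
""")

# Only the rows satisfying aa.c = bb.a survive the inner join.
rows = conn.execute(
    "SELECT * FROM aa JOIN bb ON aa.c = bb.a ORDER BY aa.a"
).fetchall()
print(rows)
# [('1', 'a', '3', '3', 'zzz', '5'), ('3', 'c', '1', '1', 'xxx', '2')]
```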

Left outer join

A left outer join returns every row from the left table; where a matching right-table row exists it is shown, otherwise the right-side columns are NULL.

    hive> select * from aa a left outer join bb b on a.c=b.a;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824161637_6d540592-13fd-4f59-a2cf-0a91c0fc9533
    Total jobs = 1
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    2016-08-24 16:16:48 Starting to launch local task to process map join; maximum memory = 518979584
    2016-08-24 16:16:51 Dump the side-table for tag: 1 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-16-37_813_4572869866822819707-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile11--.hashtable
    2016-08-24 16:16:51 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-16-37_813_4572869866822819707-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile11--.hashtable (338 bytes)
    2016-08-24 16:16:51 End of local task; Time Taken: 2.634 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-08-24 16:16:53,843 Stage-3 map = 100%, reduce = 0%
    Ended Job = job_local1670258961_0008
    MapReduce Jobs Launched:
    Stage-Stage-3: HDFS Read: 1282 HDFS Write: 90 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    1 a 3 3 zzz 5
    2 b 4 NULL NULL NULL
    3 c 1 1 xxx 2
    Time taken: 16.048 seconds, Fetched: 3 row(s)
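The NULL-padded row for id 2 is easy to check in a standalone sqlite3 sketch on the same sample data (SQLite surfaces SQL NULL as Python None):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE aa (a TEXT, b TEXT, c TEXT);
    CREATE TABLE bb (a TEXT, b TEXT, c TEXT);
    INSERT INTO aa VALUES ('1','a','3'), ('2','b','4'), ('3','c','1');
    INSERT INTO bb VALUES ('1','xxx','2'), ('2','yyy','3'), ('3','zzz','5');
""")

# Every aa row is kept; aa.c = '4' has no partner in bb, so that row's
# right-side columns come back as NULL (None in Python).
rows = conn.execute(
    "SELECT * FROM aa LEFT OUTER JOIN bb ON aa.c = bb.a ORDER BY aa.a"
).fetchall()
print(rows)
```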

Right outer join

Symmetric to the left outer join: every row of the right table is kept.

    hive> select * from aa a right outer join bb b on a.c=b.a;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824162227_5d0f0090-1a9b-4a3f-9e82-e93c4d180f4b
    Total jobs = 1
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    2016-08-24 16:22:37 Starting to launch local task to process map join; maximum memory = 518979584
    2016-08-24 16:22:40 Dump the side-table for tag: 0 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-22-27_619_7820027359528638029-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile20--.hashtable
    2016-08-24 16:22:40 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-22-27_619_7820027359528638029-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile20--.hashtable (332 bytes)
    2016-08-24 16:22:40 End of local task; Time Taken: 2.368 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-08-24 16:22:43,060 Stage-3 map = 100%, reduce = 0%
    Ended Job = job_local2001415675_0009
    MapReduce Jobs Launched:
    Stage-Stage-3: HDFS Read: 1306 HDFS Write: 90 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    3 c 1 1 xxx 2
    NULL NULL NULL 2 yyy 3
    1 a 3 3 zzz 5
    Time taken: 15.483 seconds, Fetched: 3 row(s)

Full outer join

Rows from both table 1 and table 2 are returned; where either side has no match, its columns are NULL.

    hive> select * from aa a full outer join bb b on a.c=b.a;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824162252_c71b2fae-9768-4b9a-b5ad-c06d7cdb60fb
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks not specified. Estimated from input data size: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Job running in-process (local Hadoop)
    2016-08-24 16:22:54,111 Stage-1 map = 100%, reduce = 100%
    Ended Job = job_local1766586034_0010
    MapReduce Jobs Launched:
    Stage-Stage-1: HDFS Read: 4026 HDFS Write: 270 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    3 c 1 1 xxx 2
    NULL NULL NULL 2 yyy 3
    1 a 3 3 zzz 5
    2 b 4 NULL NULL NULL
    Time taken: 1.689 seconds, Fetched: 4 row(s)
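Engines that lack FULL OUTER JOIN (older SQLite versions, for instance) can emulate it as a left join plus the unmatched right-side rows; a standalone Python sqlite3 sketch on the same sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE aa (a TEXT, b TEXT, c TEXT);
    CREATE TABLE bb (a TEXT, b TEXT, c TEXT);
    INSERT INTO aa VALUES ('1','a','3'), ('2','b','4'), ('3','c','1');
    INSERT INTO bb VALUES ('1','xxx','2'), ('2','yyy','3'), ('3','zzz','5');
""")

# Full outer join = left join UNION ALL the bb rows with no aa match.
rows = conn.execute("""
    SELECT aa.*, bb.* FROM aa LEFT JOIN bb ON aa.c = bb.a
    UNION ALL
    SELECT NULL, NULL, NULL, bb.* FROM bb
    WHERE bb.a NOT IN (SELECT c FROM aa)
""").fetchall()
print(len(rows))  # 4 rows, matching the Hive output above
```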

Left semi join

This one is special: a LEFT SEMI JOIN returns only the columns of table 1, the left table. It can also be more efficient than an ordinary join, because for each left-table row Hive probes the right table and emits the row as soon as the first match is found, rather than producing output for every match.

    hive> select * from aa a left semi join bb b on a.c=b.a;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824162327_e7fc72a7-ef91-4d39-83bc-ff8159ea8816
    Total jobs = 1
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    2016-08-24 16:23:37 Starting to launch local task to process map join; maximum memory = 518979584
    2016-08-24 16:23:41 Dump the side-table for tag: 1 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-23-27_008_3026796648107813784-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable
    2016-08-24 16:23:41 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-23-27_008_3026796648107813784-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable (317 bytes)
    2016-08-24 16:23:41 End of local task; Time Taken: 3.586 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-08-24 16:23:43,798 Stage-3 map = 100%, reduce = 0%
    Ended Job = job_local521961878_0011
    MapReduce Jobs Launched:
    Stage-Stage-3: HDFS Read: 1366 HDFS Write: 90 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    1 a 3
    3 c 1
    Time taken: 16.811 seconds, Fetched: 2 row(s)
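Many engines have no LEFT SEMI JOIN keyword; the standard equivalent is an IN or EXISTS subquery, which likewise returns only the left table's columns. A standalone sqlite3 sketch on the same sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE aa (a TEXT, b TEXT, c TEXT);
    CREATE TABLE bb (a TEXT, b TEXT, c TEXT);
    INSERT INTO aa VALUES ('1','a','3'), ('2','b','4'), ('3','c','1');
    INSERT INTO bb VALUES ('1','xxx','2'), ('2','yyy','3'), ('3','zzz','5');
""")

# Semi-join semantics: keep aa rows with at least one match in bb,
# and return only aa's columns.
rows = conn.execute(
    "SELECT * FROM aa WHERE aa.c IN (SELECT a FROM bb) ORDER BY aa.a"
).fetchall()
print(rows)
# [('1', 'a', '3'), ('3', 'c', '1')]
```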

Cartesian product

A Cartesian product pairs every row of table 1 with every row of table 2, so the result here has 3 × 3 = 9 rows.

    hive> select * from aa join bb;
    Warning: Map Join MAPJOIN[9][bigTable=?] in task 'Stage-3:MAPRED' is a cross product
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    Query ID = root_20160824162449_20e4b5ec-768f-48cf-a840-7d9ff360975f
    Total jobs = 1
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    2016-08-24 16:25:00 Starting to launch local task to process map join; maximum memory = 518979584
    2016-08-24 16:25:02 Dump the side-table for tag: 0 with group count: 1 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-24-49_294_2706432574075169306-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile40--.hashtable
    2016-08-24 16:25:02 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-24-49_294_2706432574075169306-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile40--.hashtable (305 bytes)
    2016-08-24 16:25:02 End of local task; Time Taken: 2.892 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-08-24 16:25:05,677 Stage-3 map = 100%, reduce = 0%
    Ended Job = job_local2068422373_0012
    MapReduce Jobs Launched:
    Stage-Stage-3: HDFS Read: 1390 HDFS Write: 90 SUCCESS
    Total MapReduce CPU Time Spent: 0 msec
    OK
    1 a 3 1 xxx 2
    2 b 4 1 xxx 2
    3 c 1 1 xxx 2
    1 a 3 2 yyy 3
    2 b 4 2 yyy 3
    3 c 1 2 yyy 3
    1 a 3 3 zzz 5
    2 b 4 3 zzz 5
    3 c 1 3 zzz 5
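The 9-row result is just the product of the table sizes; a standalone sqlite3 sketch confirming the count on the same sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE aa (a TEXT, b TEXT, c TEXT);
    CREATE TABLE bb (a TEXT, b TEXT, c TEXT);
    INSERT INTO aa VALUES ('1','a','3'), ('2','b','4'), ('3','c','1');
    INSERT INTO bb VALUES ('1','xxx','2'), ('2','yyy','3'), ('3','zzz','5');
""")

# A join with no ON clause is a cross product: 3 rows x 3 rows = 9 rows.
rows = conn.execute("SELECT * FROM aa CROSS JOIN bb").fetchall()
print(len(rows))  # 9
```

Hive warns about cross products (see the "is a cross product" line above) because on real table sizes the result explodes quadratically.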

That covers join queries in Hive; the syntax really is the same as standard SQL.
