Hive statement execution order

MySQL statement execution order

The order in which the code is written:

select ... from ... where ... group by ... having ... order by ...
or
from ... select ...

The order in which the code is executed:

from ... where ... group by ... having ... select ... order by ...
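One visible consequence of this order is alias visibility: an alias defined in select does not exist yet when where runs, but it does exist by the time order by runs. A minimal sketch (the table t and its columns are hypothetical):

select city, sum(cnt) as total
from t
where day = '2016-05-28'   -- ok: where sees only base columns
group by city
having sum(cnt) > 0        -- having runs after group by, so aggregates are allowed
order by total desc;       -- ok: order by runs after select, so the alias exists
-- "where total > 0" would fail: the alias has not been computed at where time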

Hive statement execution order

Approximate order:
from ... where ... select ... group by ... having ... order by ...
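Incidentally, the from-first written form mentioned earlier is genuine HiveQL; it is mainly useful for multi-insert, where one table scan feeds several outputs. A hedged sketch:

-- from-first form, equivalent to select ... from ...
from tb_pmp_raw_log_basic_analysis
select city, device
where day = '2016-05-28';
-- the same form extends to multi-insert: one scan followed by several
-- "insert overwrite table ... select ... where ..." branches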

Viewing the execution plan with explain

Both Hive and MySQL can show a statement's execution plan via explain, which makes the actual execution order visible. For example, take this query:
explain
select city,ad_type,device,sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28' and type = 0 and media = 'sohu' and (deal_id = '' or deal_id = '-' or deal_id is NULL)
group by city,ad_type,device
It prints the following execution plan:
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: tb_pmp_raw_log_basic_analysis
            Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (((deal_id = '') or (deal_id = '-')) or deal_id is null) (type: boolean)
              Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: city (type: string), ad_type (type: string), device (type: string), cnt (type: bigint)
                outputColumnNames: city, ad_type, device, cnt
                Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                Group By Operator
                  aggregations: sum(cnt)
                  keys: city (type: string), ad_type (type: string), device (type: string)
                  mode: hash
                  outputColumnNames: _col0, _col1, _col2, _col3
                  Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                  Reduce Output Operator
                    key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    sort order: +++
                    Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                    value expressions: _col3 (type: bigint)
      Reduce Operator Tree:
        Group By Operator
          aggregations: sum(VALUE._col0)
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3
          Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint)
            outputColumnNames: _col0, _col1, _col2, _col3
            Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: false
              Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
Walking through the plan:
**The map phase of Stage-1**
TableScan: the from clause loading the table; the description includes the row count, data size, and so on.
Filter Operator: the where clause filtering the data; the description lists the concrete predicate plus the row count and size.
Select Operator: column pruning; the description lists the column names and types, the output columns, and the size.
Group By Operator: the grouping; the description shows the functions to compute after grouping, keys gives the grouping columns, and outputColumnNames gives the output column names — note that columns default to fixed aliases such as _col0.
Reduce Output Operator: the map-side local reduce, which pre-aggregates locally and then routes rows by key to the corresponding reducer.
**The reduce phase of Stage-1 (Reduce Operator Tree)**
Group By Operator: the overall grouping and aggregation — the reduce-side merge of the map-side results. The description is similar; mode: mergepartial means it merges the map-side partial results (the map side grouped with a hash, mode: hash).
Select Operator: the final column pruning for the output.
File Output Operator: writes the result to a temporary file; the description covers the compression setting and the output file format.
Stage-0 runs no second MR job here; it is just the fetch, and it is where a limit 100 would take effect.
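A hedged illustration of that last point: re-running the same explain with a limit added should leave Stage-1 untouched and change only Stage-0.

explain
select city,ad_type,device,sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28' and type = 0 and media = 'sohu' and (deal_id = '' or deal_id = '-' or deal_id is NULL)
group by city,ad_type,device
limit 100;
-- Stage-0's Fetch Operator would now show "limit: 100" instead of "limit: -1"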

Summary

1. Each stage is an independent MR job. A complex HQL statement can produce multiple stages, and the plan's description shows what each concrete step does (see the sketch after this list).
2. The execution plan estimates data volumes rather than actually running the query, so the numbers may be inaccurate.
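For example — a hedged sketch, with dim_city as a hypothetical dimension table — a join combined with group by and order by typically fans out into several dependent stages:

explain
select b.city, sum(b.cnt) as cnt
from dim_city a
join tb_pmp_raw_log_basic_analysis b on a.city = b.city
group by b.city
order by cnt desc;
-- STAGE DEPENDENCIES would then read something like:
--   Stage-1 is a root stage               (the join)
--   Stage-2 depends on stages: Stage-1    (the group by)
--   Stage-3 depends on stages: Stage-2    (the order by)
--   Stage-0 depends on stages: Stage-3    (the fetch)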

The MR jobs behind group by

Hive statements are often best written with nested subqueries, so data flows in stage by stage and the volume shrinks step by step; the extra stages can cost time, though, so this needs to be designed deliberately.
group by is itself a form of data reduction: it can cut the data volume dramatically, and it is especially effective for deduplication. But the MR that group by generates is not always controllable, and it is not obvious at which stage it works best. The map-side local reduce, in particular, goes a long way toward shrinking the data. And note that Hadoop's MR suffers less from scarcity than from imbalance: data skew is the biggest bottleneck of an MR computation. Hive offers partitions, buckets, distribute by, and related settings to control how data is assigned to reducers — see the sketch below.
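A hedged sketch of the usual knobs (both set options are standard Hive settings; the query itself is only illustrative):

-- map-side partial aggregation: the "local reduce" visible in the plans above
set hive.map.aggr=true;
-- split a skewed group by into two jobs; the first spreads hot keys randomly
set hive.groupby.skewindata=true;

-- distribute by chooses the column that routes rows to reducers;
-- sort by then orders rows within each reducer
select city, device, cnt
from tb_pmp_raw_log_basic_analysis
distribute by city
sort by city;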
So, can the MR that group by generates be optimized?
Compare the two snippets of code below.

Code 1

explain
select advertiser_id,crt_id,ad_place_id,channel,ad_type,rtb_type,media,count(1) as cnt
from (
select
split(all,'\\\\|~\\\\|')[41] as advertiser_id,
split(all,'\\\\|~\\\\|')[11] as crt_id,
split(all,'\\\\|~\\\\|')[8] as ad_place_id,
split(all,'\\\\|~\\\\|')[34] as channel,
split(all,'\\\\|~\\\\|')[42] as ad_type,
split(all,'\\\\|~\\\\|')[43] as rtb_type,
split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
from tb_pmp_raw_log_bid_tmp tb
) a
group by advertiser_id,crt_id,ad_place_id,channel,ad_type,rtb_type,media;

Code 2

explain
select
split(all,'\\\\|~\\\\|')[41] as advertiser_id,
split(all,'\\\\|~\\\\|')[11] as crt_id,
split(all,'\\\\|~\\\\|')[8] as ad_place_id,
split(all,'\\\\|~\\\\|')[34] as channel,
split(all,'\\\\|~\\\\|')[42] as ad_type,
split(all,'\\\\|~\\\\|')[43] as rtb_type,
split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
from tb_pmp_raw_log_bid_tmp tb
group by split(all,'\\\\|~\\\\|')[41],split(all,'\\\\|~\\\\|')[11],split(all,'\\\\|~\\\\|')[8],split(all,'\\\\|~\\\\|')[34],split(all,'\\\\|~\\\\|')[42],split(all,'\\\\|~\\\\|')[43],split(split(all,'\\\\|~\\\\|')[5],'/')[1]
Which is better: projecting in a subquery first and then grouping, or grouping directly on the expressions?
My own tests suggest the first form is slightly better on small data, while the second may win on large data — and here, anything below the TB scale counts as small data. The two execution plans are compared below; the amount of data analyzed at each step is essentially the same.
group by is indispensable either way, but whether it runs inside or outside the subquery makes little practical difference.

Plan for Code 1

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: tb
            Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: split(all, '\\|~\\|')[41] (type: string), split(all, '\\|~\\|')[11] (type: string), split(all, '\\|~\\|')[8] (type: string), split(all, '\\|~\\|')[34] (type: string), split(all, '\\|~\\|')[42] (type: string), split(all, '\\|~\\|')[43] (type: string), split(split(all, '\\|~\\|')[5], '/')[1] (type: string)
              outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
              Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
              Group By Operator
                aggregations: count(1)
                keys: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
                mode: hash
                outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7
                Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
                  sort order: +++++++
                  Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
                  Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
                  value expressions: _col7 (type: bigint)
      Reduce Operator Tree:
        Group By Operator
          aggregations: count(VALUE._col0)
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string), KEY._col3 (type: string), KEY._col4 (type: string), KEY._col5 (type: string), KEY._col6 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7
          Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string), _col7 (type: bigint)
            outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7
            Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: false
              Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1

Plan for Code 2

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: tb
            Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: all (type: string)
              outputColumnNames: all
              Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
              Group By Operator
                keys: split(all, '\\|~\\|')[41] (type: string), split(all, '\\|~\\|')[11] (type: string), split(all, '\\|~\\|')[8] (type: string), split(all, '\\|~\\|')[34] (type: string), split(all, '\\|~\\|')[42] (type: string), split(all, '\\|~\\|')[43] (type: string), split(split(all, '\\|~\\|')[5], '/')[1] (type: string)
                mode: hash
                outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
                Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
                  sort order: +++++++
                  Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
                  Statistics: Num rows: 1126576783 Data size: 112657678336 Basic stats: COMPLETE Column stats: NONE
      Reduce Operator Tree:
        Group By Operator
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string), KEY._col3 (type: string), KEY._col4 (type: string), KEY._col5 (type: string), KEY._col6 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
          Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: string), _col5 (type: string), _col6 (type: string)
            outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
            Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: false
              Statistics: Num rows: 563288391 Data size: 56328839118 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
