Improving Hive SQL Query Efficiency: Implementing the ANALYZE Approach
0. Introduction
ANALYZE TABLE (also called computing statistics) is a built-in Hive operation that collects metadata about a table. It can greatly reduce query times on that table, because it gathers the row count, file count, and file sizes (in bytes) of the data that makes up the table and makes this information available to the query planner before execution.
1. How to analyze a table?
- Basic analyze statement
ANALYZE TABLE my_database_name.my_table_name COMPUTE STATISTICS;
This is the basic analyze statement. It works whether or not the table is partitioned; if the table is partitioned, you should run it on a regular schedule.
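After the statement finishes, you can check whether the statistics actually reached the metastore. The lines below are a minimal sketch using the same placeholder table name; the NOSCAN variant shown is an option worth verifying on your Hive version, since it collects only file count and physical size without scanning the data.
-- Verify the statistics written by ANALYZE: the "Table Parameters" section
-- should now list numRows, numFiles, totalSize and rawDataSize.
DESCRIBE FORMATTED my_database_name.my_table_name;
-- Cheaper variant: NOSCAN gathers only the number of files and their physical
-- size in bytes, without scanning the data to count rows.
ANALYZE TABLE my_database_name.my_table_name COMPUTE STATISTICS NOSCAN;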
- Analyzing a specific partition
ANALYZE TABLE my_database_name.my_table_name PARTITION (YEAR=2019, MONTH=5, DAY=12) COMPUTE STATISTICS;
This is a finer-grained analyze statement. It collects statistics for the specified partition and stores them in the Hive Metastore for query optimization. At this level the statistics describe the partition as a whole (its row count, file count, and size in bytes); the per-column statistics described below require the FOR COLUMNS form.
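To confirm what was collected for that partition, a quick check can be run against the metastore; a minimal sketch, reusing the partition spec from the example above:
-- Inspect the partition-level statistics (numRows, numFiles, totalSize should
-- appear among the partition parameters).
DESCRIBE FORMATTED my_database_name.my_table_name PARTITION (YEAR=2019, MONTH=5, DAY=12);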
- Analyzing columns
ANALYZE TABLE my_database_name.my_table_name COMPUTE STATISTICS FOR COLUMNS column1, column2, column3;
It collects metadata for the specified columns and stores the information in the Hive Metastore for query optimization. For each column, the statistics include the number of distinct values, the number of NULL values, the average column size, the average or the sum of all values (for numeric types), and percentiles.
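Once column statistics exist, they can be inspected per column. A minimal sketch, assuming the same placeholder names as above; the exact fields shown depend on the Hive version:
-- Show the stored statistics for one column (distinct count, null count,
-- min/max, average length, and so on).
DESCRIBE FORMATTED my_database_name.my_table_name column1;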
- Analyzing columns of a specific partition
ANALYZE TABLE my_database_name.my_table_name PARTITION (YEAR=2019, MONTH=5, DAY=12, HOUR=0) COMPUTE STATISTICS FOR COLUMNS column1, column2, column3;
ANALYZE TABLE my_database_name.my_table_name PARTITION (YEAR=2019, MONTH=5, DAY=12, HOUR) COMPUTE STATISTICS FOR COLUMNS column1, column2, column3;
ANALYZE TABLE my_database_name.my_table_name PARTITION (YEAR=2019, MONTH=5, DAY=12, HOUR) COMPUTE STATISTICS FOR COLUMNS;
The first statement analyzes the listed columns for a single hour partition only;
the second analyzes the listed columns for every hour partition of the given day;
the third analyzes all columns for every hour partition of the given day.
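Rather than scheduling ANALYZE by hand for every new partition, Hive can also gather statistics as data is written. The settings below are a sketch of commonly used options; property names and defaults vary between Hive versions, so treat them as assumptions to verify against your deployment.
-- Gather basic table/partition statistics automatically during INSERT (enabled by default in many versions)
SET hive.stats.autogather=true;
-- Gather column statistics automatically as well (available in newer Hive versions)
SET hive.stats.column.autogather=true;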
2. Validating the effect
Test case 1
- Data preparation
The KS3 production dataset and the TPC-DS benchmark dataset were chosen as samples. Combined with the Hive table analyze operation, query times were compared across several file formats and compression algorithms (a sketch of possible table definitions for such a comparison follows the test query below).
SELECT count(DISTINCT(uuid)) AS script_appentry_30day_uv
FROM test_hive.document_assistant
WHERE dt >= '2019-03-12'
AND dt <= '2019-04-10'
AND p3 = '14'
AND p5 = 'script_appentry'
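The table definitions used for the format/compression comparison are not shown in the original write-up. The sketch below only illustrates how such variants could be declared; the table names, columns, and partitioning are hypothetical placeholders, not the actual production schema.
-- Hypothetical ORC variant compressed with ZLIB
CREATE TABLE test_hive.document_assistant_orc (uuid STRING, p3 STRING, p5 STRING)
PARTITIONED BY (dt STRING)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='ZLIB');

-- Hypothetical Parquet variant compressed with GZIP
CREATE TABLE test_hive.document_assistant_parquet (uuid STRING, p3 STRING, p5 STRING)
PARTITIONED BY (dt STRING)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression'='GZIP');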
- Test results
Test case 2
- Data preparation (TPC-DS benchmark)
- The Transaction Processing Performance Council (TPC) is the best-known non-profit standards organization for benchmarking data management systems. It defines several standard test suites for evaluating database performance objectively and reproducibly.
- TPC-DS is the next-generation decision support benchmark published by the TPC to replace TPC-H. It uses multidimensional schemas such as star and snowflake, and contains 7 fact tables and 17 dimension tables, with an average of 18 columns per table.
- TPC-DS characteristics:
- 99 test queries in total, following the SQL'99 and SQL:2003 syntax standards; the queries are fairly complex;
- the data volume analyzed is large, and the queries answer real business questions;
- the queries cover a variety of business models (analytical reporting, iterative online analysis, data mining, etc.);
- almost all queries place heavy demands on both I/O and CPU.
Scenario: a single fact table joined with multiple dimension tables through complex joins
Row count of the store_sales table: 2,879,987,999
Fact table storage size (GB): Text: 390, Parquet (Gzip): 116, ORC (Zlib): 131
query27.sql:
-- start query 1 in stream 0 using template query27.tpl and seed 2017787633
select i_item_id,
s_state, grouping(s_state) g_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
from store_sales, customer_demographics, date_dim, store, item
where ss_sold_date_sk = d_date_sk and
ss_item_sk = i_item_sk and
ss_store_sk = s_store_sk and
ss_cdemo_sk = cd_demo_sk and
cd_gender = 'M' and
cd_marital_status = 'U' and
cd_education_status = '2 yr Degree' and
d_year = 2001 and
s_state in ('SD','FL', 'MI', 'LA', 'MO', 'SC')
group by rollup (i_item_id, s_state)
order by i_item_id
,s_state
limit 100;
-- end query 1 in stream 0 using template query27.tpl
query28.sql:
-- start query 1 in stream 0 using template query28.tpl and seed 444293455
select *
from (select avg(ss_list_price) B1_LP
,count(ss_list_price) B1_CNT
,count(distinct ss_list_price) B1_CNTD
from store_sales
where ss_quantity between 0 and 5
and (ss_list_price between 11 and 11+10
or ss_coupon_amt between 460 and 460+1000
or ss_wholesale_cost between 14 and 14+20)) B1,
(select avg(ss_list_price) B2_LP
,count(ss_list_price) B2_CNT
,count(distinct ss_list_price) B2_CNTD
from store_sales
where ss_quantity between 6 and 10
and (ss_list_price between 91 and 91+10
or ss_coupon_amt between 1430 and 1430+1000
or ss_wholesale_cost between 32 and 32+20)) B2,
(select avg(ss_list_price) B3_LP
,count(ss_list_price) B3_CNT
,count(distinct ss_list_price) B3_CNTD
from store_sales
where ss_quantity between 11 and 15
and (ss_list_price between 66 and 66+10
or ss_coupon_amt between 920 and 920+1000
or ss_wholesale_cost between 4 and 4+20)) B3,
(select avg(ss_list_price) B4_LP
,count(ss_list_price) B4_CNT
,count(distinct ss_list_price) B4_CNTD
from store_sales
where ss_quantity between 16 and 20
and (ss_list_price between 142 and 142+10
or ss_coupon_amt between 3054 and 3054+1000
or ss_wholesale_cost between 80 and 80+20)) B4,
(select avg(ss_list_price) B5_LP
,count(ss_list_price) B5_CNT
,count(distinct ss_list_price) B5_CNTD
from store_sales
where ss_quantity between 21 and 25
and (ss_list_price between 135 and 135+10
or ss_coupon_amt between 14180 and 14180+1000
or ss_wholesale_cost between 38 and 38+20)) B5,
(select avg(ss_list_price) B6_LP
,count(ss_list_price) B6_CNT
,count(distinct ss_list_price) B6_CNTD
from store_sales
where ss_quantity between 26 and 30
and (ss_list_price between 28 and 28+10
or ss_coupon_amt between 2513 and 2513+1000
or ss_wholesale_cost between 42 and 42+20)) B6
limit 100;
-- end query 1 in stream 0 using template query28.tpl
query43.sql:
-- start query 1 in stream 0 using template query43.tpl and seed 1819994127
select s_store_name, s_store_id,
sum(case when (d_day_name='Sunday') then ss_sales_price else null end) sun_sales,
sum(case when (d_day_name='Monday') then ss_sales_price else null end) mon_sales,
sum(case when (d_day_name='Tuesday') then ss_sales_price else null end) tue_sales,
sum(case when (d_day_name='Wednesday') then ss_sales_price else null end) wed_sales,
sum(case when (d_day_name='Thursday') then ss_sales_price else null end) thu_sales,
sum(case when (d_day_name='Friday') then ss_sales_price else null end) fri_sales,
sum(case when (d_day_name='Saturday') then ss_sales_price else null end) sat_sales
from date_dim, store_sales, store
where d_date_sk = ss_sold_date_sk and
s_store_sk = ss_store_sk and
s_gmt_offset = -6 and
d_year = 1998
group by s_store_name, s_store_id
order by s_store_name, s_store_id,sun_sales,mon_sales,tue_sales,wed_sales,thu_sales,fri_sales,sat_sales
limit 100;
-- end query 1 in stream 0 using template query43.tpl
query67.sql:
-- start query 1 in stream 0 using template query67.tpl and seed 1819994127
select *
from (select i_category
,i_class
,i_brand
,i_product_name
,d_year
,d_qoy
,d_moy
,s_store_id
,sumsales
,rank() over (partition by i_category order by sumsales desc) rk
from (select i_category
,i_class
,i_brand
,i_product_name
,d_year
,d_qoy
,d_moy
,s_store_id
,sum(coalesce(ss_sales_price*ss_quantity,0)) sumsales
from store_sales
,date_dim
,store
,item
where ss_sold_date_sk=d_date_sk
and ss_item_sk=i_item_sk
and ss_store_sk = s_store_sk
and d_month_seq between 1212 and 1212+11
group by rollup(i_category, i_class, i_brand, i_product_name, d_year, d_qoy, d_moy,s_store_id))dw1) dw2
where rk <= 100
order by i_category
,i_class
,i_brand
,i_product_name
,d_year
,d_qoy
,d_moy
,s_store_id
,sumsales
,rk
limit 100;
-- end query 1 in stream 0 using template query67.tpl
query46.sql:
-- start query 1 in stream 0 using template query46.tpl and seed 803547492
select c_last_name
,c_first_name
,ca_city
,bought_city
,ss_ticket_number
,amt,profit
from
(select ss_ticket_number
,ss_customer_sk
,ca_city bought_city
,sum(ss_coupon_amt) amt
,sum(ss_net_profit) profit
from store_sales,date_dim,store,household_demographics,customer_address
where store_sales.ss_sold_date_sk = date_dim.d_date_sk
and store_sales.ss_store_sk = store.s_store_sk
and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk
and store_sales.ss_addr_sk = customer_address.ca_address_sk
and (household_demographics.hd_dep_count = 2 or
household_demographics.hd_vehicle_count= 1)
and date_dim.d_dow in (6,0)
and date_dim.d_year in (1998,1998+1,1998+2)
and store.s_city in ('Cedar Grove','Wildwood','Union','Salem','Highland Park')
group by ss_ticket_number,ss_customer_sk,ss_addr_sk,ca_city) dn,customer,customer_address current_addr
where ss_customer_sk = c_customer_sk
and customer.c_current_addr_sk = current_addr.ca_address_sk
and current_addr.ca_city <> bought_city
order by c_last_name
,c_first_name
,ca_city
,bought_city
,ss_ticket_number
limit 100;
-- end query 1 in stream 0 using template query46.tpl
query7.sql:
-- start query 1 in stream 0 using template query7.tpl and seed 1930872976
select i_item_id,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
from store_sales, customer_demographics, date_dim, item, promotion
where ss_sold_date_sk = d_date_sk and
ss_item_sk = i_item_sk and
ss_cdemo_sk = cd_demo_sk and
ss_promo_sk = p_promo_sk and
cd_gender = 'F' and
cd_marital_status = 'W' and
cd_education_status = 'Primary' and
(p_channel_email = 'N' or p_channel_event = 'N') and
d_year = 1998
group by i_item_id
order by i_item_id
limit 100;
-- end query 1 in stream 0 using template query7.tpl
query73.sql:
-- start query 1 in stream 0 using template query73.tpl and seed 1971067816
select c_last_name
,c_first_name
,c_salutation
,c_preferred_cust_flag
,ss_ticket_number
,cnt from
(select ss_ticket_number
,ss_customer_sk
,count(*) cnt
from store_sales,date_dim,store,household_demographics
where store_sales.ss_sold_date_sk = date_dim.d_date_sk
and store_sales.ss_store_sk = store.s_store_sk
and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk
and date_dim.d_dom between 1 and 2
and (household_demographics.hd_buy_potential = '>10000' or
household_demographics.hd_buy_potential = 'unknown')
and household_demographics.hd_vehicle_count > 0
and case when household_demographics.hd_vehicle_count > 0 then
household_demographics.hd_dep_count/ household_demographics.hd_vehicle_count else null end > 1
and date_dim.d_year in (2000,2000+1,2000+2)
and store.s_county in ('Mobile County','Maverick County','Huron County','Kittitas County')
group by ss_ticket_number,ss_customer_sk) dj,customer
where ss_customer_sk = c_customer_sk
and cnt between 1 and 5
order by cnt desc;
-- end query 1 in stream 0 using template query73.tpl
- Test results
3. Conclusions
- After table analysis is run, Hive query speed improves dramatically
- Query time by compression codec: None > Snappy > Gzip/Zlib
- Query time by file format: Text > Parquet > ORC
- In the scenarios tested, the ORC format had the lowest query times
- Parquet and ORC query times were close
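Collected statistics only pay off if the planner is configured to read them. The session settings below are a sketch of options commonly enabled for this purpose; the exact property names and defaults differ across Hive versions, so verify them against your own deployment before relying on them.
-- Enable the cost-based optimizer (Calcite), which consumes the collected statistics
SET hive.cbo.enable=true;
-- Answer simple aggregates such as count(*) / min / max directly from stored statistics when possible
SET hive.compute.query.using.stats=true;
-- Let the planner fetch partition-level and column-level statistics from the metastore
SET hive.stats.fetch.partition.stats=true;
SET hive.stats.fetch.column.stats=true;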