5 Ways to Make Your Hive Queries Run Faster
Technique #1: Use Tez

Hive can use the Apache Tez execution engine instead of the venerable MapReduce engine. I won't go into details about the many benefits of using Tez; instead, I want to make a simple recommendation: if Tez is not turned on by default in your environment, enable it with the following setting at the beginning of your Hive query:

set hive.execution.engine=tez;

With the above setting, every Hive query you execute will take advantage of Tez.
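If you're not sure the setting took effect, Hive will echo a parameter back when you name it without a value. Here is a minimal sketch (my_table is a hypothetical table name):

set hive.execution.engine=tez;
-- naming a parameter without a value prints its current setting
set hive.execution.engine;
-- any query you now run should execute as a Tez DAG
SELECT count(*) FROM my_table;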
Technique #2: Use ORCFile

Hive supports ORCFile, a new table storage format that sports fantastic speed improvements through techniques like predicate push-down, compression and more. Using ORCFile for every Hive table should really be a no-brainer, and is extremely beneficial for getting fast response times from your Hive queries.

As an example, consider two large tables A and B (stored as text files, with only some of their columns specified here), and a simple join:

SELECT A.customerID, A.name, A.age, A.address,
       B.role, B.department, B.salary
FROM A JOIN B ON A.customerID = B.customerID;

This query may take a long time to execute since tables A and B are both stored as text. Converting these tables to ORCFile format will usually reduce query time significantly:

CREATE TABLE A_ORC (
  customerID int, name string, age int, address string
) STORED AS ORC tblproperties ("orc.compress" = "SNAPPY");

INSERT INTO TABLE A_ORC SELECT * FROM A;

CREATE TABLE B_ORC (
  customerID int, role string, salary float, department string
) STORED AS ORC tblproperties ("orc.compress" = "SNAPPY");

INSERT INTO TABLE B_ORC SELECT * FROM B;

SELECT A_ORC.customerID, A_ORC.name, A_ORC.age, A_ORC.address,
       B_ORC.role, B_ORC.department, B_ORC.salary
FROM A_ORC JOIN B_ORC ON A_ORC.customerID = B_ORC.customerID;

ORC supports compressed storage (with ZLIB or, as shown above, SNAPPY) as well as uncompressed storage.

Converting base tables to ORC is often the responsibility of your ingest team, and it may take them some time to change the complete ingestion process due to other priorities. The benefits of ORCFile are so tangible that I often recommend the do-it-yourself approach demonstrated above: convert A into A_ORC and B into B_ORC and do the join that way, so that you benefit from faster queries immediately, with no dependencies on other teams.
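As a side note, if you'd rather not repeat the column list, CREATE TABLE AS SELECT can do the conversion in a single statement. This is a minimal sketch of the same conversion of A, with the caveat that a CTAS target cannot be partitioned or external:

-- one-step alternative to the CREATE TABLE + INSERT pair for A_ORC above
CREATE TABLE A_ORC
STORED AS ORC tblproperties ("orc.compress" = "SNAPPY")
AS SELECT * FROM A;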
Technique #3: Use Vectorization

Vectorized query execution improves the performance of operations like scans, aggregations, filters and joins by processing batches of 1024 rows at a time instead of a single row at a time. Introduced in Hive 0.13, this feature significantly improves query execution time, and is easily enabled with two parameter settings:

set hive.vectorized.execution.enabled = true;
set hive.vectorized.execution.reduce.enabled = true;
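Vectorization only applies when the input format supports it (ORC, in these Hive versions), so it's worth confirming in the query plan that it actually kicked in. A minimal sketch, reusing the A_ORC table from Technique #2:

set hive.vectorized.execution.enabled = true;
-- on Tez, vectorized stages should be marked "Execution mode: vectorized" in the plan
EXPLAIN SELECT age, count(*) FROM A_ORC GROUP BY age;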
Technique #4: Use Cost-Based Optimization

Hive optimizes each query's logical and physical execution plan before submitting it for final execution, but historically these optimizations were not based on the cost of the query. A recent addition to Hive, cost-based optimization (CBO), performs further optimizations based on query cost, resulting in potentially different decisions: how to order joins, which type of join to perform, the degree of parallelism, and others.

To use CBO, set the following parameters at the beginning of your query:

set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;

Then, prepare the data for CBO by running Hive's analyze command to collect statistics on the tables for which we want to use CBO. For example, for a table tweets we can collect statistics on the table itself and on two of its columns, "sender" and "topic":

analyze table tweets compute statistics;
analyze table tweets compute statistics for columns sender, topic;

With Hive 0.14 (on HDP 2.2) the analyze command works much faster, and you don't need to specify each column, so you can simply issue:

analyze table tweets compute statistics for columns;

That's it. Queries against this table should now get a different, faster execution plan, thanks to Hive's cost calculations.
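To double-check that the statistics landed before relying on CBO, you can inspect the table metadata. A minimal sketch using the tweets table from above (the column-level form requires Hive 0.14 or later):

-- table-level stats (numRows, rawDataSize, ...) appear under Table Parameters
describe formatted tweets;
-- column-level stats for a single column, as gathered by the analyze command
describe formatted tweets sender;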
Technique #5: Write Good SQL

SQL is a powerful declarative language. Like other declarative languages, there is more than one way to write a SQL statement; functionally equivalent statements can have strikingly different performance characteristics. Let's look at an example. Consider a click-stream event table:

CREATE TABLE clicks (
  timestamp date, sessionID string, url string, source_ip string
) STORED AS ORC tblproperties ("orc.compress" = "SNAPPY");

Each record represents a click event, and we would like to find the latest URL for each sessionID. One might consider the following approach:

SELECT clicks.*
FROM clicks
INNER JOIN (SELECT sessionID, max(timestamp) AS max_ts
            FROM clicks
            GROUP BY sessionID) latest
ON clicks.sessionID = latest.sessionID AND clicks.timestamp = latest.max_ts;

In the above query, we build a sub-query to collect the timestamp of the latest event in each session, and then use an inner join to filter out the rest. While this is a reasonable solution from a functional point of view, there is a better way to write the query:

SELECT *
FROM (SELECT *,
             RANK() OVER (PARTITION BY sessionID ORDER BY timestamp DESC) AS rank
      FROM clicks) ranked_clicks
WHERE ranked_clicks.rank = 1;

Here we use Hive's OLAP windowing functionality (OVER and RANK) to achieve the same result without a join. Clearly, removing an unnecessary join will almost always result in better performance, and with big data this matters more than ever. I find many cases where queries are not optimal, so look carefully at every query and consider whether a rewrite can make it better and faster.
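One caveat on the rewrite above: RANK() assigns the same rank to ties, so if two clicks in a session share a timestamp, both rows come back. When you need exactly one row per session, ROW_NUMBER() is the usual substitute; here is the same query sketched with it:

SELECT *
FROM (SELECT *,
             ROW_NUMBER() OVER (PARTITION BY sessionID ORDER BY timestamp DESC) AS rn
      FROM clicks) ranked_clicks
WHERE ranked_clicks.rn = 1;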
Summary

Apache Hive is a powerful tool in the hands of data analysts and data scientists, and supports a variety of batch and interactive workloads. In this blog post, I've discussed some useful techniques to make Hive queries run faster: the ones I use most often and find most useful in my day-to-day work as a data scientist. Thankfully, the Hive community is not finished yet. Even between Hive 0.13 and Hive 0.14 there are dramatic improvements in ORCFile, vectorization and CBO, and in how they positively impact query performance. I'm really excited about Stinger.next bringing performance improvements into the sub-second range. I can't wait.