ORCFILE IN HDP 2: BETTER COMPRESSION, BETTER PERFORMANCE
Carter Shanklin
The upcoming Hive 0.12 is set to bring some great new advancements in the storage layer, in the form of higher compression and better query performance.
HIGHER COMPRESSION
ORCFile was introduced in Hive 0.11 and offered excellent compression, delivered through a number of techniques including run-length encoding, dictionary encoding for strings and bitmap encoding.
This focus on efficiency leads to some impressive compression ratios. This picture shows the sizes of the TPC-DS dataset at Scale 500 in various encodings. This dataset contains randomly generated data including strings, floating point and integer data.
We’ve already seen customers whose clusters are maxed out from a storage perspective moving to ORCFile as a way to free up space while being 100% compatible with existing jobs.
Data stored in ORCFile can be read or written through HCatalog, so any Pig or MapReduce process can play along seamlessly. Hive 0.12 builds on these impressive compression ratios and delivers deep integration at the Hive and execution layers to accelerate queries, both when dealing with larger datasets and when targeting lower latencies.
PREDICATE PUSHDOWN
SQL queries will generally have some number of WHERE conditions that can be used to eliminate rows from consideration. In older versions of Hive, rows are read out of the storage layer and only eliminated later by SQL processing, which adds a lot of wasteful overhead. Hive 0.12 optimizes this by allowing predicates to be pushed down and evaluated in the storage layer itself. This is controlled by the setting hive.optimize.ppd=true.
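For example, you can make sure it is enabled in your session before running queries:

SET hive.optimize.ppd=true;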
This requires a reader that is smart enough to understand the predicates. Fortunately, ORC has the corresponding improvements that allow predicates to be pushed into it, and it takes advantage of its inline indexes to deliver performance benefits.
For example if you have a SQL query like:
SELECT COUNT(*) FROM CUSTOMER WHERE CUSTOMER.state = 'CA';
The ORCFile reader will now return only rows that actually match the WHERE predicates and skip customers residing in any other state. The more columns you read from the table, the more data marshaling you avoid and the greater the speedup.
A WORD ON ORCFILE INLINE INDEXES
Before we move to the next section we need to spend a moment on how ORCFile breaks rows into row groups and applies columnar compression and indexing within these row groups. By default, an index entry is recorded every 10,000 rows (the row index stride), and each entry stores statistics, such as minimum and maximum values, for each column within that row group.
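This granularity is tunable per table. As a minimal sketch (the table and column names are placeholders, and the stride shown is simply the default):

CREATE TABLE mytable (
  id BIGINT,
  state STRING
) STORED AS orc tblproperties ("orc.row.index.stride"="10000");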
TURNING PREDICATE PUSHDOWN TO 11
ORC's predicate pushdown will consult the inline indexes to identify when entire blocks can be skipped all at once. Sometimes your dataset will naturally facilitate this. For instance, if your data comes as a time series with a monotonically increasing timestamp, a WHERE condition on that timestamp will let ORC skip a lot of row groups.
In other instances you may need to give things a kick by sorting data. If a column is sorted, relevant records will get confined to one area on disk and the other pieces will be skipped very quickly.
Skipping works for numeric types and for string types. In both cases it is done by recording a minimum and maximum value inside the inline index and determining whether the lookup value falls outside that range.
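For instance, a range predicate on a timestamp column like the one described above lets the reader compare the predicate against each row group's min/max and skip whole groups at a time. A hypothetical example (the events table and event_time column are illustrative):

SELECT COUNT(*) FROM events
WHERE event_time >= '2013-09-01 00:00:00'
  AND event_time < '2013-09-02 00:00:00';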
Sorting can lead to very nice speedups, but there is a trade-off: you need to decide which columns to sort on in advance. The decision-making process is somewhat similar to deciding which columns to index in traditional SQL systems. The payback is greatest for a column that is frequently filtered with very specific conditions across a lot of queries. Remember that you can force Hive to sort on a column by using the SORTED BY clause when creating the table and setting hive.enforce.sorting to true before inserting into the table, as in the sketch below.
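A minimal sketch of such a table (the table name, columns, and bucket count are illustrative; Hive requires SORTED BY to be paired with CLUSTERED BY ... INTO n BUCKETS):

SET hive.enforce.sorting=true;

CREATE TABLE customer_sorted (
  customer_id BIGINT,
  state STRING
)
CLUSTERED BY (customer_id) SORTED BY (state) INTO 32 BUCKETS
STORED AS orc;

INSERT INTO TABLE customer_sorted SELECT customer_id, state FROM customer;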
ORCFile is an important piece of our Stinger Initiative to improve Hive performance 100x. To show the impact we ran a modified TPC-DS Query 27 against a modified data schema. Query 27 does a star schema join on a large fact table, accessing 4 separate dimension tables. In the modified schema, the state in which the sale is made is denormalized into the fact table, and the resulting table is sorted by state. This way, when the query scans the fact table, it can skip entire blocks of rows because it filters on state, resulting in an incremental speedup.
This feature gives you the best bang for the buck when:
- You frequently filter a large fact table in a precise way on a column with moderate to large cardinality.
- You select a large number of columns, or wide columns. The more data marshaling you save, the greater your speedup will be.
USING ORCFILE
Using ORCFile or converting existing data to ORCFile is simple. To use it, just add STORED AS orc to the end of your create table statements like this:
CREATE TABLE mytable (
...
) STORED AS orc;
To convert existing data to ORCFile, create a table with the same schema as the source table plus STORED AS orc, then issue a query like:
INSERT INTO TABLE orctable SELECT * FROM oldtable;
Hive will handle all the details of conversion to ORCFile and you are free to delete the old table to free up loads of space.
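Putting it together, a minimal sketch of the conversion (oldtable's columns are hypothetical):

CREATE TABLE orctable (
  id BIGINT,
  name STRING
) STORED AS orc;

INSERT INTO TABLE orctable SELECT * FROM oldtable;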
When you create an ORC table there are a number of table properties you can use to further tune the way ORC works.
Key | Default | Notes
orc.compress | ZLIB | Compression to use in addition to columnar compression (one of NONE, ZLIB, SNAPPY)
orc.compress.size | 262,144 (= 256 KiB) | Number of bytes in each compression chunk
orc.stripe.size | 268,435,456 (= 256 MiB) | Number of bytes in each stripe
orc.row.index.stride | 10,000 | Number of rows between index entries (must be >= 1,000)
orc.create.index | true | Whether to create inline indexes
For example, let's say you wanted to use Snappy compression instead of ZLIB. Here's how:
CREATE TABLE mytable (
...
) STORED AS orc tblproperties ("orc.compress"="SNAPPY");
TRY IT OUT
All these features are available in our HDP 2 Beta, and we encourage you to download it, try them out, and give us your feedback.