[Translation] ORCFILE IN HDP 2: Better Compression, Better Performance
Original post:
https://hortonworks.com/blog/orcfile-in-hdp-2-better-compression-better-performance/
ORCFILE IN HDP 2: BETTER COMPRESSION, BETTER PERFORMANCE
Carter Shanklin
The upcoming Hive 0.12 is set to bring some great new advancements in the storage layer in the form of higher compression and better query performance.
HIGHER COMPRESSION
ORCFile was introduced in Hive 0.11 and offered excellent compression, delivered through a number of techniques including run-length encoding, dictionary encoding for strings and bitmap encoding.
This focus on efficiency leads to some impressive compression ratios. The chart in the original post shows the sizes of the TPC-DS dataset at Scale 500 in various encodings. This dataset contains randomly generated string, floating-point, and integer data.
We’ve already seen customers whose clusters were maxed out on storage move to ORCFile as a way to free up space while remaining 100% compatible with existing jobs.
Data stored in ORCFile can be read or written through HCatalog, so any Pig or Map/Reduce process can play along seamlessly. Hive 12 builds on these impressive compression ratios and delivers deep integration at the Hive and execution layers to accelerate queries, both by handling larger datasets and by lowering latencies.
PREDICATE PUSHDOWN
SQL queries will generally have some number of WHERE conditions which can be used to easily eliminate rows from consideration. In older versions of Hive, rows are read out of the storage layer before being eliminated by SQL processing. That is a lot of wasteful overhead, and Hive 12 optimizes it by allowing predicates to be pushed down and evaluated in the storage layer itself. It's controlled by the setting hive.optimize.ppd=true.
This requires a reader that is smart enough to understand the predicates. Fortunately ORC has had the corresponding improvements to allow predicates to be pushed into it, and it takes advantage of its inline indexes to deliver performance benefits.
For example, if you have a SQL query like:
SELECT COUNT(*) FROM CUSTOMER WHERE CUSTOMER.state = 'CA';
The ORCFile reader will now only return rows that actually match the WHERE predicates and skip customers residing in any other state. The more columns you read from the table, the more data marshaling you avoid and the greater the speedup.
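A minimal session sketch, assuming the CUSTOMER table above is stored as ORC:

SET hive.optimize.ppd=true;
-- With pushdown enabled, the ORC reader evaluates the filter itself
-- and skips data that cannot match, instead of returning every row.
SELECT COUNT(*) FROM CUSTOMER WHERE CUSTOMER.state = 'CA';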
A WORD ON ORCFILE INLINE INDEXES
Before we move to the next section we need to spend a moment talking about how ORCFile breaks rows into row groups and applies columnar compression and indexing within these row groups. In brief: within each large stripe of an ORC file, rows are divided into row groups (10,000 rows by default, per orc.row.index.stride below), and for each row group ORC records lightweight inline index entries, including per-column min and max values.
TURNING PREDICATE PUSHDOWN UP TO 11
ORC's predicate pushdown will consult the inline indexes to try to identify when entire blocks can be skipped all at once. Sometimes your dataset will naturally facilitate this. For instance, if your data comes as a time series with a monotonically increasing timestamp, then when you put a WHERE condition on this timestamp, ORC will be able to skip a lot of row groups.
In other instances you may need to give things a kick by sorting data. If a column is sorted, relevant records will get confined to one area on disk and the other pieces will be skipped very quickly.
Skipping works for number types and for string types. In both instances it's done by recording a min and max value inside the inline index and determining if the lookup value falls outside that range. For example, if a row group's index records min = 'AK' and max = 'AZ' for the state column, a lookup for 'CA' falls outside that range and the whole row group can be skipped without being read.
Sorting can lead to very nice speedups. There is a trade-off in that you need to decide what columns to sort on in advance. The decision-making process is somewhat similar to deciding what columns to index in traditional SQL systems. The best payback comes from a column that is filtered with very specific conditions in a lot of queries. Remember that you can force Hive to sort on a column by using the SORT BY keyword when creating the table and setting hive.enforce.sorting to true before inserting into the table, as in the sketch below.
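A minimal sketch of that recipe (the table, columns, and bucket count are illustrative assumptions; in Hive DDL the sort order is declared with CLUSTERED BY ... SORTED BY):

-- Make inserts honor the declared bucketing and sort order.
SET hive.enforce.bucketing=true;
SET hive.enforce.sorting=true;

CREATE TABLE customer_sorted (
  id BIGINT,
  name STRING,
  state STRING
)
CLUSTERED BY (state) SORTED BY (state) INTO 32 BUCKETS
STORED AS orc;

-- Rows are written in state order, so later predicates on state
-- can skip whole row groups via the inline min/max indexes.
INSERT INTO TABLE customer_sorted SELECT id, name, state FROM customer;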
ORCFile is an important piece of our Stinger Initiative to improve Hive performance 100x. To show the impact we ran a modified version of TPC-DS Query 27 against a modified data schema. Query 27 does a star schema join on a large fact table, accessing 4 separate dimension tables. In the modified schema, the state in which the sale is made is denormalized into the fact table and the resulting table is sorted by state. In this way, when the query scans the fact table, it can skip entire blocks of rows because the query filters on the state. This results in some incremental speedup, as shown in the chart in the original post.
This feature gives you the best bang for the buck when:
- You frequently filter a large fact table in a precise way on a column with moderate to large cardinality.
- You select a large number of columns, or wide columns. The more data marshaling you save, the greater your speedup will be.
USING ORCFILE
Using ORCFile or converting existing data to ORCFile is simple. To use it, just add STORED AS orc to the end of your CREATE TABLE statements, like this:
CREATE TABLE mytable (
...
) STORED AS orc;
To convert existing data to ORCFile, create a table with the same schema as the source table plus STORED AS orc; then you can issue a query like:
INSERT INTO TABLE orctable SELECT * FROM oldtable;
Hive will handle all the details of conversion to ORCFile and you are free to delete the old table to free up loads of space.
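Putting the conversion together, a minimal sketch (the two-column schema is a hypothetical stand-in for your source table's schema):

-- Target table: same schema as the source, plus STORED AS orc.
CREATE TABLE orctable (
  id BIGINT,
  name STRING
) STORED AS orc;

-- Hive rewrites the rows into ORC format during the insert.
INSERT INTO TABLE orctable SELECT * FROM oldtable;

-- After verifying the copy, drop the old table to reclaim space.
DROP TABLE oldtable;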
When you create an ORC table there are a number of table properties you can use to further tune the way ORC works.
Key | Default | Notes
--- | --- | ---
orc.compress | ZLIB | Compression to use in addition to columnar compression (one of NONE, ZLIB, SNAPPY)
orc.compress.size | 262,144 (= 256 KiB) | Number of bytes in each compression chunk
orc.stripe.size | 268,435,456 (= 256 MiB) | Number of bytes in each stripe
orc.row.index.stride | 10,000 | Number of rows between index entries (must be >= 1,000)
orc.create.index | true | Whether to create inline indexes
For example, let's say you wanted to use Snappy compression instead of ZLIB compression. Here's how:
CREATE TABLE mytable (
...
) STORED AS orc tblproperties ("orc.compress"="SNAPPY");
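Multiple properties can be combined in one statement. A sketch in which the column names and values are illustrative assumptions, not tuning recommendations:

CREATE TABLE sales_orc (
  sale_id BIGINT,
  state STRING,
  amount DOUBLE
) STORED AS orc tblproperties (
  "orc.compress"="SNAPPY",
  "orc.stripe.size"="134217728",
  "orc.row.index.stride"="20000"
);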
TRY IT OUT
All these features are available in our HDP 2 Beta, and we encourage you to download it, try them out, and give us your feedback.