Command-line tools can be 235x faster than your Hadoop cluster
Original post: http://aadrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html
Introduction
As I was browsing the web and catching up on some sites I visit periodically, I found a cool article from Tom Hayden about using Amazon Elastic Map Reduce (EMR) and mrjob in order to compute some statistics on win/loss ratios for chess games he downloaded from the millionbase archive, and generally have fun with EMR. Since the data volume was only about 1.75GB containing around 2 million chess games, I was skeptical of using Hadoop for the task, but I can understand his goal of learning and having fun with mrjob and EMR. Since the problem is basically just to look at the result lines of each file and aggregate the different results, it seems ideally suited to stream processing with shell commands. I tried this out, and for the same amount of data I was able to use my laptop to get the results in about 12 seconds (a processing speed of about 270MB/sec), while the Hadoop processing took about 26 minutes (a processing speed of about 1.14MB/sec).
After reporting that processing the data with 7 c1.medium machines in the cluster took 26 minutes, Tom remarks:
"This is probably better than it would take to run serially on my machine but probably not as good as if I did some kind of clever multi-threaded application locally."
This is absolutely correct, although even serial processing may beat 26 minutes. Although Tom was doing the project for fun, often people use Hadoop and other so-called Big Data (tm) tools for real-world processing and analysis jobs that can be done faster with simpler tools and different techniques.
One especially under-used approach for data processing is using standard shell tools and commands. The benefits of this approach can be massive, since creating a data pipeline out of shell commands means that all the processing steps can be done in parallel. This is basically like having your own Storm cluster on your local machine. Even the concepts of Spouts, Bolts, and Sinks transfer to shell pipes and the commands between them. You can pretty easily construct a stream processing pipeline with basic commands that will have extremely good performance compared to many modern Big Data (tm) tools.
An additional point is the batch versus streaming analysis approach. Tom mentions at the beginning of the piece that after loading 10,000 games and doing the analysis locally, he gets a bit short on memory. This is because all game data is loaded into RAM for the analysis. However, considering the problem for a bit, it can be easily solved with streaming analysis that requires basically no memory at all. The stream processing pipeline we will create will be over 235 times faster than the Hadoop implementation and use virtually no memory.
Learn about the data
The first step in the pipeline is to get the data out of the PGN files. Since I had no idea what kind of format this was, I checked it out on Wikipedia.
[Event "F/S Return Match"]
[Site "Belgrade, Serbia Yugoslavia|JUG"]
[Date "1992.11.04"]
[Round "29"]
[White "Fischer, Robert J."]
[Black "Spassky, Boris V."]
[Result "1/2-1/2"]
(moves from the game follow...)
We are only interested in the results of the games, of which there are only three real outcomes. The 1-0 case means that white won, the 0-1 case means that black won, and the 1/2-1/2 case means the game was a draw. There is also a - case meaning the game is ongoing or cannot be scored, but we ignore that for our purposes.
Acquire sample data
The first thing to do is get a lot of game data. This proved more difficult than I thought it would be, but after some looking around online I found a git repository on GitHub from rozim that had plenty of games. I used this to compile a set of 3.46GB of data, which is about twice what Tom used in his test. The next step is to get all that data into our pipeline.
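For reference, the data can be fetched with a plain git clone; the exact repository URL below is my assumption based on the author's username and is not confirmed by the original post:
git clone https://github.com/rozim/ChessData.git  # hypothetical URL for rozim's chess game collection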
Build a processing pipeline
If you are following along and timing your processing, don't forget to clear your OS page cache as otherwise you won't get valid processing times.
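On Linux, one way to drop the page cache between runs is the following (a general technique, not something specified in the original post):
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches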
Shell commands are great for data processing pipelines because you get parallelism for free. For proof, try a simple example in your terminal.
sleep 3 | echo "Hello world."
Intuitively it may seem that the above will sleep for 3 seconds and then print "Hello world" but in fact both steps are done at the same time. This basic fact is what can offer such great speedups for simple non-IO-bound processing systems capable of running on a single machine.
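If you want to verify this yourself, time two sleeps joined by a pipe; because both sides of a pipe start at the same time, the whole pipeline finishes in roughly 3 seconds rather than 6 (this check is mine, not from the original post):
time (sleep 3 | sleep 3)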
Before starting the analysis pipeline, it is good to get a reference for how fast it could be and for this we can simply dump the data to /dev/null.
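A minimal way to measure this baseline is to time a plain cat of the files into /dev/null, along the lines of:
time cat *.pgn > /dev/null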
In this case, it takes about 13 seconds to go through the 3.46GB, which is about 272MB/sec. This would be a kind of upper-bound on how quickly data could be processed on this system due to IO constraints.
Now we can start on the analysis pipeline, the first step of which is using cat to generate the stream of data.
cat *.pgn
Since only the result lines in the files are interesting, we can simply scan through all the data files and pick out the lines containing 'Result' with grep.
cat *.pgn | grep "Result"
This will give us only the Result lines from the files. Now if we want, we can simply use the sort and uniq commands in order to get a list of all the unique items in the file along with their counts.
cat *.pgn | grep "Result" | sort | uniq -c
This is a very straightforward analysis pipeline, and gives us the results in about 70 seconds. While we can certainly do better, assuming linear scaling this would have taken the Hadoop cluster approximately 52 minutes to process.
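(For reference, that estimate comes from scaling the reported 26 minutes linearly to roughly twice the data volume: 26 min × 3.46 GB / 1.75 GB ≈ 51 minutes.)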
To reduce the running time further, we can take out the sort | uniq steps from the pipeline and replace them with AWK, which is a wonderful tool/language for event-based data processing.
cat *.pgn | grep "Result" | awk '{ split($0, a, "-"); res = substr(a[1], length(a[1]), 1); if (res == 1) white++; if (res == 0) black++; if (res == 2) draw++;} END { print white+black+draw, white, black, draw }'
This will take each result record, split it on the hyphen, and take the character immediately to the left, which will be a 0 in the case of a win for black, a 1 in the case of a win for white, or a 2 in the case of a draw. Note that $0 is a built-in variable that represents the entire record.
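For readability, the same AWK program can be written out over multiple lines with comments; this expanded form is my own formatting and is functionally equivalent to the one-liner above:
cat *.pgn | grep "Result" | awk '
{
    # Each input line looks like: [Result "1-0"]
    split($0, a, "-")                    # a[1] is everything before the hyphen
    res = substr(a[1], length(a[1]), 1)  # last character of a[1]: 1, 0, or 2
    if (res == 1) white++                # "1-0"     -> white won
    if (res == 0) black++                # "0-1"     -> black won
    if (res == 2) draw++                 # "1/2-1/2" -> draw
}
END { print white+black+draw, white, black, draw }'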
This reduces the running time to approximately 65 seconds, and since we're processing twice as much data this is a speedup of around 47 times.
So even at this point we already have a speedup of around 47 with a naive local solution. Additionally, the memory usage is effectively zero since the only data stored is the actual counts, and incrementing 3 integers is almost free in memory space terms. However, looking at htop while this is running shows that grep is currently the bottleneck, with full usage of a single CPU core.
Parallelize the bottlenecks
This problem of unused cores can be fixed with the wonderful xargs command, which will allow us to parallelize the grep. Since xargs expects input in a certain way, it is safer and easier to use find with the -print0 argument in order to make sure that each file name being passed to xargs is null-terminated. The corresponding -0 tells xargs to expect null-terminated input. Additionally, the -n flag controls how many inputs to give each process, and the -P flag indicates the number of processes to run in parallel. Also important to be aware of is that such a parallel pipeline doesn't guarantee delivery order, but this isn't a problem if you are used to dealing with distributed processing systems. The -F flag for grep indicates that we are only matching on fixed strings and not doing any fancy regex, which can offer a small speedup, although I did not notice one in my testing.
find . -type f -name '*.pgn' -print0 | xargs -0 -n1 -P4 grep -F "Result" | gawk '{ split($0, a, "-"); res = substr(a[1], length(a[1]), 1); if (res == 1) white++; if (res == 0) black++; if (res == 2) draw++;} END { print NR, white, black, draw }'
This results in a run time of about 38 seconds, which is an additional 40% or so reduction in processing time from parallelizing the grep step in our pipeline. This gets us up to approximately 77 times faster than the Hadoop implementation.
Although we have improved the performance dramatically by parallelizing the grep step in our pipeline, we can actually remove this entirely by having awk filter the input records (lines in this case) and only operate on those containing the string "Result".
find . -type f -name '*.pgn' -print0 | xargs -0 -n1 -P4 awk '/Result/ { split($0, a, "-"); res = substr(a[1], length(a[1]), 1); if (res == 1) white++; if (res == 0) black++; if (res == 2) draw++;} END { print white+black+draw, white, black, draw }'
You may think that would be the correct solution, but this will output the results of each file individually, when we want to aggregate them all together. The resulting correct implementation is conceptually very similar to what the MapReduce implementation would be.
time find . -type f -name '*.pgn' -print0 | xargs -0 -n4 -P4 awk '/Result/ { split($0, a, "-"); res = substr(a[1], length(a[1]), 1); if (res == 1) white++; if (res == 0) black++; if (res == 2) draw++ } END { print white+black+draw, white, black, draw }' | awk '{ games += $1; white += $2; black += $3; draw += $4; } END { print games, white, black, draw }'
By adding the second awk step at the end, we obtain the aggregated game information as desired.
This further improves the speed dramatically, achieving a running time of about 18 seconds, or about 174 times faster than the Hadoop implementation.
However, we can make it a bit faster still by using mawk, which is often a drop-in replacement for gawk and can offer better performance.
find . -type f -name '*.pgn' -print0 | xargs -0 -n4 -P4 mawk '/Result/ { split($0, a, "-"); res = substr(a[1], length(a[1]), 1); if (res == 1) white++; if (res == 0) black++; if (res == 2) draw++ } END { print white+black+draw, white, black, draw }' | mawk '{ games += $1; white += $2; black += $3; draw += $4; } END { print games, white, black, draw }'
This find | xargs mawk | mawk pipeline gets us down to a runtime of about 12 seconds, or about 270MB/sec, which is around 235 times faster than the Hadoop implementation.
Conclusion
Hopefully this has illustrated some points about using and abusing tools like Hadoop for data processing tasks that can better be accomplished on a single machine with simple shell commands and tools. If you have a huge amount of data or really need distributed processing, then tools like Hadoop may be required, but more often than not these days I see Hadoop used where a traditional relational database or other solutions would be far better in terms of performance, cost of implementation, and ongoing maintenance.