MapReduce for Top N items
In this post we'll see how to count the top-n items of a dataset. We'll again use the Flatland book from a previous post: there we used the WordCount program to count the occurrences of every single word in the book; now we want to find the top-n words used in it.
Let's start with the mapper:
public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // strips punctuation and special characters, then splits the line into words
        String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
        StringTokenizer itr = new StringTokenizer(cleanLine);
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken().trim());
            context.write(word, one);
        }
    }
}
The mapper is really straightforward: the TopNMapper class defines an IntWritable set to 1 and a Text object; its map() method, like in the previous post, splits every line of the book into single words and sends every word to the reducers with the value of 1.
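As a quick standalone illustration (not part of the job itself, and using a sample line picked here for the example), this is what the cleaning regex does before tokenization:

public static void main(String[] args) {
    String line = "Upward, not Northward!";
    String clean = line.toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
    // prints "upward  not northward ", which tokenizes to: upward, not, northward
    System.out.println(clean);
}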
The reducer is more interesting:
public static class TopNReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private Map<Text, IntWritable> countMap = new HashMap<>();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // computes the number of occurrences of a single word
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        // puts the number of occurrences of this word into the map;
        // the key is copied because Hadoop reuses the same Text instance
        countMap.put(new Text(key), new IntWritable(sum));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        Map<Text, IntWritable> sortedMap = sortByValues(countMap);
        int counter = 0;
        for (Text key : sortedMap.keySet()) {
            if (counter++ == 20) {
                break;
            }
            context.write(key, sortedMap.get(key));
        }
    }
}
We override two methods: reduce() and cleanup(). Let's examine the reduce() method.
As we've seen in the mapper's code, the keys the reducer receives are the single words contained in the book. At the beginning of the method we compute the sum of all the values received from the mappers for this key, which is the number of occurrences of the word inside the book; then we put the word and its number of occurrences into a HashMap. Note that we don't put into the map the Text instance that contains the word, because Hadoop reuses that instance many times for performance reasons; instead, we put a new Text object based on the received one.
To output the top-n values, we have to compute the number of occurrences of every word, sort the words by their number of occurrences, and extract the first n. In the reduce() method we don't write any value to the output, because we can sort the words only after we have collected them all; the cleanup() method is called by Hadoop after the reducer has received all of its data, so we override it to be sure that our HashMap is filled with all the words before sorting.
Let's look at the cleanup() method: first we sort the HashMap by values (using code from this post); then we loop over the key set and output the first 20 items.
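The sorting code from that linked post isn't reproduced here; a minimal sketch of what sortByValues() has to do (sort the entries by descending value into a LinkedHashMap, whose insertion order cleanup() then relies on when iterating) could look like this. It's an assumption about the helper's shape, not the original code, and it uses java.util's Collections, Comparator, LinkedList and LinkedHashMap:

private static <K, V extends Comparable<V>> Map<K, V> sortByValues(Map<K, V> map) {
    // sort the entries by descending value
    List<Map.Entry<K, V>> entries = new LinkedList<>(map.entrySet());
    Collections.sort(entries, new Comparator<Map.Entry<K, V>>() {
        @Override
        public int compare(Map.Entry<K, V> e1, Map.Entry<K, V> e2) {
            return e2.getValue().compareTo(e1.getValue());
        }
    });
    // a LinkedHashMap preserves insertion order, so iterating it
    // later yields the entries from most to least frequent
    Map<K, V> sortedMap = new LinkedHashMap<>();
    for (Map.Entry<K, V> entry : entries) {
        sortedMap.put(entry.getKey(), entry.getValue());
    }
    return sortedMap;
}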
The complete code is available on my github.
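For reference, a driver wiring the two classes together could look like the sketch below; this is a guess at the shape of the real driver, not the repository's actual code, and the TopN class name is an assumption. Note the single reduce task: every word must reach the same reducer for its local top 20 to be the global top 20.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TopN {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "top n");
        job.setJarByClass(TopN.class);
        job.setMapperClass(TopNMapper.class);
        job.setReducerClass(TopNReducer.class);
        // one reducer only: it sees all the words, so its top 20 is global
        job.setNumReduceTasks(1);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // the book's text file
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // the top 20 words
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}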
The output of the reducer gives us the 20 most used words in Flatland:
the 2286
of 1634
and 1098
to 1088
a 936
i 735
in 713
that 499
is 429
you 419
my 334
it 330
as 322
by 317
not 317
or 299
but 279
with 273
for 267
be 252
Predictably, the most used words in the book are articles, conjunctions, adjectives, prepositions and personal pronouns.
This MapReduce program is not very efficient: the mappers transfer a lot of data to the reducers, since every single word of the book is emitted together with the number 1, causing a very high network load. The phase in which mappers send their data to the reducers is called "shuffle and sort", and it is explained in more detail in the free chapter of "Hadoop: The Definitive Guide" by Tom White.
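As a taste of the kind of fix available (a sketch of one common pattern, not necessarily the approach the follow-up post takes), a combiner can pre-sum the 1s on the map side, so each mapper emits a single partial count per word instead of one record per occurrence:

public static class SumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // pre-aggregates one mapper's counts for this word before the shuffle
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

It would be registered in the driver with job.setCombinerClass(SumCombiner.class).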
In the next posts we'll see how to improve the performance of the shuffle and sort phase.
from: http://andreaiacono.blogspot.com/2014/03/mapreduce-for-top-n-items.html