MapReduce for Top N items
In this post we'll see how to find the top-n items of a dataset. We'll again use the text of Flatland, the book we used in a previous post: there, the WordCount program counted the occurrences of every single word in the book; now we want to find the n words used most often.
Let's start with the mapper:
public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> {

    // requires org.apache.hadoop.io.IntWritable, org.apache.hadoop.io.Text,
    // org.apache.hadoop.mapreduce.Mapper and java.util.StringTokenizer
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // lowercases the line and replaces punctuation and special characters with spaces
        String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
        StringTokenizer itr = new StringTokenizer(cleanLine);
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken().trim());
            context.write(word, one);
        }
    }
}
The mapper is really straightforward: the TopNMapper class defines an IntWritable set to 1 and a Text object; its map() method, as in the previous post, splits every line of the book into single words and sends every word to the reducers with a value of 1.
The reducer is more interesting:
public static class TopNReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private Map<Text, IntWritable> countMap = new HashMap<>();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // computes the number of occurrences of a single word
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        // puts the number of occurrences of this word into the map;
        // a copy of the key is stored, since Hadoop reuses the Text instance
        countMap.put(new Text(key), new IntWritable(sum));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // sorts the words by number of occurrences and emits the top 20
        Map<Text, IntWritable> sortedMap = sortByValues(countMap);
        int counter = 0;
        for (Text key : sortedMap.keySet()) {
            if (counter++ == 20) {
                break;
            }
            context.write(key, sortedMap.get(key));
        }
    }
}
We override two methods: reduce() and cleanup(). Let's examine the reduce() method.
As we saw in the mapper's code, the keys the reducer receives are the single words contained in the book. At the beginning of the method we sum all the values received from the mappers for this key, which gives the number of occurrences of the word inside the book; then we put the word and its number of occurrences into a HashMap. Note that we don't put into the map the Text object that contains the word directly, because Hadoop reuses that instance many times for performance reasons; instead, we put a new Text object built from the received one.
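To make the pitfall concrete, here is a minimal sketch (not taken from the original post) of the wrong and the right way to store the key:

// WRONG: Hadoop reuses the same Text instance for every key, so all the
// map entries would end up referring to the last word the reducer saw
countMap.put(key, new IntWritable(sum));

// RIGHT: copy the word into a fresh Text object before storing it
countMap.put(new Text(key), new IntWritable(sum));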
To output the top-n values, we have to compute the number of occurrences of every word, sort the words by their number of occurrences, and then extract the first n. In the reduce() method we don't write anything to the output, because we can sort the words only after we have collected them all; the cleanup() method is called by Hadoop after the reducer has received all of its data, so we override it to be sure that our HashMap is filled up with all the words before sorting.
Let's look at the method: first we sort the HashMap by values (using code from this post); then we loop over the key set and output the first 20 items.
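The sorting helper itself is not reproduced in this post. A minimal sketch of what sortByValues() might look like, assuming it sorts the entries by descending value (the implementation in the linked post may differ):

// requires java.util.{Collections, Comparator, LinkedHashMap, LinkedList, List, Map};
// returns the entries sorted by descending value in a LinkedHashMap,
// which preserves insertion order
private static <K, V extends Comparable<V>> Map<K, V> sortByValues(Map<K, V> map) {
    List<Map.Entry<K, V>> entries = new LinkedList<>(map.entrySet());
    Collections.sort(entries, new Comparator<Map.Entry<K, V>>() {
        @Override
        public int compare(Map.Entry<K, V> e1, Map.Entry<K, V> e2) {
            return e2.getValue().compareTo(e1.getValue());
        }
    });
    Map<K, V> sortedMap = new LinkedHashMap<>();
    for (Map.Entry<K, V> entry : entries) {
        sortedMap.put(entry.getKey(), entry.getValue());
    }
    return sortedMap;
}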
The complete code is available on my GitHub.
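For readers who want to run the example end to end, a typical driver might look like the following sketch (class name and job setup are illustrative, not necessarily the author's actual code; it assumes the usual org.apache.hadoop.conf, fs, io and mapreduce imports):

public class TopNDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "top n words");
        job.setJarByClass(TopNDriver.class);
        job.setMapperClass(TopNMapper.class);
        job.setReducerClass(TopNReducer.class);
        // a single reducer is essential here: the top-n selection happens in
        // one reducer's cleanup(), so with more reducers each would emit
        // only its own partial top-n list
        job.setNumReduceTasks(1);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}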
The output of the reducer gives us the 20 most used words in Flatland:
the 2286
of 1634
and 1098
to 1088
a 936
i 735
in 713
that 499
is 429
you 419
my 334
it 330
as 322
by 317
not 317
or 299
but 279
with 273
for 267
be 252
Predictably, the most used words in the book are articles, conjunctions, adjectives, prepositions and personal pronouns.
This MapReduce program is not very efficient: the mappers transfer a lot of data to the reducers, since every single word of the book is emitted together with the number 1, causing a very high network load. The phase in which mappers send data to the reducers is called "shuffle and sort", and it is explained in more detail in the free chapter of "Hadoop: The Definitive Guide" by Tom White.
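One common way to cut this traffic is a combiner, which pre-aggregates the counts on the map side so that each map task sends one partial count per distinct word instead of one pair per occurrence. A minimal sketch (not part of the original program):

// a possible combiner: summing is associative and commutative, so it can
// safely run on the map side before the data crosses the network
public static class TopNCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

// registered in the driver with:
// job.setCombinerClass(TopNCombiner.class);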
In the next posts we'll see how to improve the performance of the shuffle and sort phase.
from: http://andreaiacono.blogspot.com/2014/03/mapreduce-for-top-n-items.html