While combining two or more datasets is very easy in the SQL world - we just need to use the JOIN keyword - with MapReduce things become a little harder. Let's get into it.
Suppose we have two distinct datasets, one for users of a forum and the other for the posts in the forum (data is in TSV - Tab Separated Values - format).
Users dataset:

id    name   reputation
0102  alice  32
0511  bob    27
...

Posts dataset:

id       type      subject  body                                    userid
0028391  question  test     "Hi, what is.."                         0102
0073626  comment   bug      "Guys, I've found.."                    0511
0089234  comment   bug      "Nope, it's not that way.."             0734
0190347  answer    info     "In my opinion it's worth the time.."   1932
...

What we'd like to do is combine the reputation of each user with the number of questions he/she posted, to see if we can relate one to the other.

The main idea behind combining the two datasets is to leverage the shuffle and sort phase: this process groups together values with the same key, so if we define the user id as the key, we can send to the reducer both the user reputation and the number of his/her posts, because they're attached to the same key (the user id). 
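To make this concrete, here is a rough sketch (not actual job output) of the records involved for user 0102 of the sample data: the mapper emits pairs keyed by the user id from both files, and the shuffle and sort phase hands them to the reducer as a single group.

from the users file:   (0102, alice's reputation)
from the posts file:   (0102, 1)   one pair per question posted by 0102
after shuffle & sort:  0102 -> [alice's reputation, 1, 1, ...]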
Let's see how. 
We start with the mapper:

public static class JoinMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

        // gets the filename of the input file for this record
        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String filename = fileSplit.getPath().getName();

        // creates an array with all the fields of the row we're reading now
        String[] fields = value.toString().split("\t");

        // if we're reading the posts file
        if (filename.equals("forum_nodes_no_lf.tsv")) {
            // retrieves the type of the post and, for questions, the author ID
            String type = fields[1];
            if (type.equals("question")) {
                String authorId = fields[4];
                context.write(new Text(authorId), one);
            }
        }
        // if we're reading the users file
        else {
            String authorId = fields[0];
            String reputation = fields[2];

            // we add two to the reputation, because we want the minimum value to be greater than 1,
            // not to be confused with the "one" passed by the other branch of the if
            int reputationValue = Integer.parseInt(reputation) + 2;
            context.write(new Text(authorId), new IntWritable(reputationValue));
        }
    }
}

First of all, this code assumes that in the directory Hadoop is looking in for data there are two files: the users file and the posts file; we use the FileSplit class to obtain the name of the file Hadoop is currently reading, so we can tell whether we're dealing with the users file or the posts file. If it is the posts file, things get a little trickier. For every user, we pass to the reducer a "1" for every question he/she posted on the forum; since we also want to pass the reputation of the user (which could itself be a "0" or a "1"), we have to be careful not to mix up the values. To do this, we add 2 to the reputation, so that even if it is "0" the value passed to the reducer will be greater than or equal to two. In this way, we know that when the reducer receives a "1" it is counting a question posted on the forum, while when it receives a value greater than "1", it is the reputation of the user.
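As a sketch of what this encoding produces (assuming, for illustration, that alice - user 0102, reputation 32 - posted two questions), the reducer would receive for that key something like:

key: 0102    values: [34, 1, 1]

where 34 is her reputation plus two and each 1 counts one question; the values arrive in no particular order, which is exactly why the encoding is needed.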
Let's now look at the reducer:

public static class JoinReducer extends Reducer<Text, IntWritable, Text, Text> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        int postsNumber = 0;
        int reputation = 0;
        String authorId = key.toString();

        for (IntWritable value : values) {
            int intValue = value.get();
            if (intValue == 1) {
                postsNumber++;
            }
            else {
                // we subtract two to get the exact reputation value (see the mapper)
                reputation = intValue - 2;
            }
        }

        context.write(new Text(authorId), new Text(reputation + "\t" + postsNumber));
    }
}

As stated before, the reducer will receive two kinds of data: a "1" when the value counts a question posted by the user, and a value greater than one for the reputation. The code in the reducer checks exactly this: if it receives a "1" it increases the number of posts for this user, otherwise it sets his/her reputation. At the end of the method, we tell the reducer to output the authorId, his/her reputation, and how many questions he/she has posted on the forum:

userid  reputation  posts#
0102    55          23
0511    05          11
0734    00          89
1932    19          32
...

and we're ready to analyze these data.
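For completeness, here is a minimal driver sketch that could tie the two classes together; it is not part of the original post, and the class name ReputationJoin and the command-line arguments are assumptions (JoinMapper and JoinReducer are assumed to be nested in, or in the same package as, this class):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReputationJoin {

    public static void main(String[] args) throws Exception {
        // args[0]: directory containing both TSV files, args[1]: output directory
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reputation join");
        job.setJarByClass(ReputationJoin.class);

        job.setMapperClass(JoinMapper.class);
        job.setReducerClass(JoinReducer.class);

        // the mapper emits (Text, IntWritable) while the final output is (Text, Text),
        // so the map output types must be declared explicitly
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Running it with the directory holding the two TSV files as the first argument and a non-existing output directory as the second should produce output like the table above.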

from: http://andreaiacono.blogspot.com/2014/09/combining-datasets-with-mapreduce.html
