While in the SQL world combining two or more datasets is very easy - we just need the JOIN keyword - with MapReduce things become a little harder. Let's get into it.
Suppose we have two distinct datasets, one for users of a forum and the other for the posts in the forum (data is in TSV - Tab Separated Values - format).
Users dataset:

id    name   reputation
0102  alice  32
0511  bob    27
...

Posts dataset:

id       type      subject  body                                    userid
0028391  question  test     "Hi, what is.."                         0102
0073626  comment   bug      "Guys, I've found.."                    0511
0089234  comment   bug      "Nope, it's not that way.."             0734
0190347  answer    info     "In my opinion it's worth the time.."   1932
...

What we'd like to do is to combine the reputation of each user with the number of questions he/she posted, to see if we can relate one to the other.

The main idea behind combining the two datasets is to leverage the shuffle and sort phase: this process groups together values with the same key, so if we define the user id as the key, we can send both the user's reputation and the number of his/her posts to the reducer, because they're attached to the same key (the user id).
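For example, with the sample data above, the reducer call for key 0102 (alice) will receive one value derived from her reputation and one "1" for the question she posted - but in no guaranteed order, so the mapper has to encode the two kinds of values in a way that lets the reducer tell them apart.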
Let's see how. 
We start with the mapper:

public static class JoinMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
                // gets the filename of the input file for this record
                FileSplit fileSplit = (FileSplit) context.getInputSplit();
                String filename = fileSplit.getPath().getName();
                // creates an array with all the fields of the row we're reading now
                String[] fields = value.toString().split("\t");
                // if we're reading the posts file
                if (filename.equals("forum_nodes_no_lf.tsv")) {
                        // retrieves the type of the post and, for questions, the author ID
                        String type = fields[1];
                        if (type.equals("question")) {
                                String authorId = fields[4];
                                context.write(new Text(authorId), one);
                        }
                }
                // if we're reading the users file
                else {
                        String authorId = fields[0];
                        String reputation = fields[2];
                        // we add two to the reputation, because we want the minimum value to be
                        // greater than 1, not to be confused with the "one" emitted by the other
                        // branch of the if
                        int reputationValue = Integer.parseInt(reputation) + 2;
                        context.write(new Text(authorId), new IntWritable(reputationValue));
                }
        }
}

First of all, this code assumes that in the directory Hadoop is looking in for data there are two files: the users file and the posts file; we use the FileSplit class to obtain the filename Hadoop is currently reading, so that we know whether we're dealing with the users file or the posts file. Then, if it is the posts file, things get a little trickier. For every user, we pass to the reducer a "1" for every question he/she posted on the forum; since we also want to pass the reputation of the user (which can itself be a "0" or a "1"), we have to be careful not to mix up the values. To do this, we add 2 to the reputation, so that, even if it is "0", the value passed to the reducer will be greater than or equal to two. In this way, we know that when the reducer receives a "1" it is counting a question posted on the forum, while when it receives a value greater than "1", it is the reputation of the user.
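To make the encoding concrete: a reputation of 0 is emitted as 0 + 2 = 2, a reputation of 1 as 3, and alice's reputation of 32 from the sample data as 34. Since every question is emitted as exactly 1 and every encoded reputation is at least 2, the two kinds of values can never collide, and the reducer recovers the true reputation simply by subtracting 2.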
Let's now look at the reducer:

public static class JoinReducer extends Reducer<Text, IntWritable, Text, Text> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
                int postsNumber = 0;
                int reputation = 0;
                String authorId = key.toString();
                for (IntWritable value : values) {
                        int intValue = value.get();
                        // a "1" counts a question posted by this user
                        if (intValue == 1) {
                                postsNumber++;
                        }
                        else {
                                // we subtract two to get back the exact reputation value (see the mapper)
                                reputation = intValue - 2;
                        }
                }
                context.write(new Text(authorId), new Text(reputation + "\t" + postsNumber));
        }
}

As stated before, the reducer will now receive two kinds of data: a "1" related to the number of posts of the user, and a value greater than one for the reputation. The code in the reducer checks exactly this: if it receives a "1" it increases the number of posts of this user, otherwise it sets his/her reputation. At the end of the method, we tell the reducer to output the authorId, his/her reputation and how many questions he/she has posted on the forum:

userid  reputation  posts#
0102    55          23
0511    05          11
0734    00          89
1932    19          32
...

and we're ready to analyze these data.
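To run the two classes as a job we also need a driver that wires them together. The original post doesn't show one, so here is a minimal sketch under a few assumptions: the class name ForumJoin is hypothetical, the input path is a single directory containing both TSV files (matching the filename check in the mapper), and no combiner is set, because summing the mixed "1"s and encoded reputations on the map side would corrupt both values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ForumJoin {

        public static void main(String[] args) throws Exception {
                Job job = Job.getInstance(new Configuration(), "forum join");
                job.setJarByClass(ForumJoin.class);
                job.setMapperClass(JoinMapper.class);
                job.setReducerClass(JoinReducer.class);
                // the mapper emits (Text, IntWritable) while the reducer emits (Text, Text),
                // so the map output types must be set explicitly
                job.setMapOutputKeyClass(Text.class);
                job.setMapOutputValueClass(IntWritable.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(Text.class);
                // a single input directory holding both TSV files;
                // the mapper distinguishes them by filename
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
}

Once compiled and packaged, the job can be launched with something like hadoop jar forumjoin.jar ForumJoin input/ output/, and the part files in the output directory will look like the table above.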

from: http://andreaiacono.blogspot.com/2014/09/combining-datasets-with-mapreduce.html
