While in the SQL world combining two or more datasets is very easy - we just need the JOIN keyword - with MapReduce things become a little harder. Let's get into it.
Suppose we have two distinct datasets, one for the users of a forum and the other for the posts in the forum (data is in TSV - Tab Separated Values - format).
Users dataset:

id    name   reputation
0102  alice  32
0511  bob    27
...

Posts dataset:

id       type      subject  body                                   userid
0028391  question  test     "Hi, what is.."                        0102
0073626  comment   bug      "Guys, I've found.."                   0511
0089234  comment   bug      "Nope, it's not that way.."            0734
0190347  answer    info     "In my opinion it's worth the time.."  1932
...

What we'd like to do is combine the reputation of each user with the number of questions he/she posted, to see if we can relate one to the other.

The main idea behind combining the two datasets is to leverage the shuffle and sort phase: this phase groups together all the values that share the same key, so if we define the user id as the key, we can send to the reducer both the user's reputation and the number of his/her posts, because they're attached to the same key (the user id).
Let's see how. 
We start with the mapper:

public static class JoinMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // gets the filename of the input file for this record
        FileSplit fileSplit = (FileSplit) context.getInputSplit();
        String filename = fileSplit.getPath().getName();
        // creates an array with all the fields of the row we're reading now
        String[] fields = value.toString().split("\t");
        // if we're reading the posts file
        if (filename.equals("forum_nodes_no_lf.tsv")) {
            // retrieves the type of the post and, for questions, the author ID
            String type = fields[1];
            if (type.equals("question")) {
                String authorId = fields[4];
                context.write(new Text(authorId), one);
            }
        }
        // if we're reading the users file
        else {
            String authorId = fields[0];
            String reputation = fields[2];
            // we add two to the reputation, because we want the minimum value to be greater than 1,
            // not to be confused with the "one" passed by the other branch of the if
            int reputationValue = Integer.parseInt(reputation) + 2;
            context.write(new Text(authorId), new IntWritable(reputationValue));
        }
    }
}

First of all, this code assumes that in the directory Hadoop is looking in for data there are two files: the users file and the posts file; we use the FileSplit class to obtain the filename Hadoop is currently reading, so we know whether we're dealing with the users file or the posts file. Then, if it is the posts file, things get a little trickier. For every user, we pass to the reducer a "1" for every question he/she posted on the forum; since we also want to pass the reputation of the user (which can itself be a "0" or a "1"), we have to be careful not to mix up the values. To do this, we add 2 to the reputation so that, even if it is "0", the value passed to the reducer will be greater than or equal to two. In this way, we know that when the reducer receives a "1" it is counting a question posted on the forum, while when it receives a value greater than "1", it is the reputation of the user.
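
With this encoding, the list of values the reducer receives for a single user id might look like this (a sketch based on alice's sample row above; the three questions are hypothetical, and the order of the values is not guaranteed):

key:    "0102"
values: [34, 1, 1, 1]    (34 = reputation 32 + 2; each "1" is one question posted)

Any value greater than one must be the shifted reputation, while every "1" counts one question.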
Let's now look at the reducer:

public static class JoinReducer extends Reducer<Text, IntWritable, Text, Text> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int postsNumber = 0;
        int reputation = 0;
        String authorId = key.toString();
        for (IntWritable value : values) {
            int intValue = value.get();
            if (intValue == 1) {
                postsNumber++;
            }
            else {
                // we subtract two to get the exact reputation value (see the mapper)
                reputation = intValue - 2;
            }
        }
        context.write(new Text(authorId), new Text(reputation + "\t" + postsNumber));
    }
}

As stated before, the reducer will receive two kinds of data: a "1" related to the number of questions of the user, and a value greater than one for the reputation. The code in the reducer checks exactly this: if it receives a "1" it increases the number of posts of this user, otherwise it sets his/her reputation. At the end of the method, we tell the reducer to output the authorId, his/her reputation and how many questions he/she has posted on the forum:

userid  reputation  posts#
0102    55          23
0511    05          11
0734    00          89
1932    19          32
...

and we're ready to analyze this data.
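
For completeness, here's a minimal driver sketch showing how the two classes could be wired into a job; the class name ForumJoinDriver and the command-line paths are assumptions (not from the original post), and JoinMapper and JoinReducer are assumed to be declared as static nested classes of this driver. The input directory must contain both TSV files:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ForumJoinDriver {

    // JoinMapper and JoinReducer (shown above) go here as static nested classes

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "forum join");
        job.setJarByClass(ForumJoinDriver.class);
        job.setMapperClass(JoinMapper.class);
        job.setReducerClass(JoinReducer.class);
        // the mapper emits IntWritable values but the reducer emits Text values,
        // so the map output types must be set explicitly
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // args[0] is the directory containing both the users and the posts TSV files
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

It would be run with the input directory and an output directory as arguments, for example (hypothetical paths): hadoop jar forumjoin.jar ForumJoinDriver /data/forum /data/forum_out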

from: http://andreaiacono.blogspot.com/2014/09/combining-datasets-with-mapreduce.html
