The key to a reduce-side join is the MultipleInputs.addInputPath API: it lets you assign a different Mapper to each input table. Each Mapper tags its records with an identifier for its table, and the Reducer then uses that tag to tell the tables apart!

Reduce Side Join Example

User and comment join

In this example, we'll be using the users and comments tables from the StackOverflow dataset. Storing data in this manner makes sense, as storing repetitive user data with each comment is unnecessary. This would also make updating user information difficult. However, having disjoint data sets poses problems when it comes to associating a comment with the user who wrote it. Through the use of a reduce-side join, these two data sets can be merged together using the user ID as the foreign key. In this example, we'll perform an inner, outer, and antijoin. The choice of which join to execute is set during job configuration.

Hadoop supports the ability to use multiple input data types at once, allowing you to create a mapper class and input format for each input split from different data sources. This is extremely helpful, because you don't have to code logic for two different data inputs in the same map implementation. In the following example, two mapper classes are created: one for the user data and one for the comments. Each mapper class outputs the user ID as the foreign key, and the entire record as the value along with a single character to flag which record came from what set. The reducer then copies all values for each group in memory, keeping track of which record came from what data set. The records are then joined together and output.

The following descriptions of each code section explain the solution to the problem.

Problem: Given a set of user information and a list of users' comments, enrich each comment with the information about the user who created the comment.

Driver code. The job configuration is slightly different from the standard configuration due to the use of the multiple input utility. We also set the join type in the job configuration to args[2] so it can be used in the reducer. The relevant piece of the driver code that uses MultipleInputs follows:

...

// Use MultipleInputs to set which input uses what mapper
// This will keep parsing of each data set separate from a logical standpoint
// The first two elements of the args array are the two inputs
MultipleInputs.addInputPath(job, new Path(args[0]),
        TextInputFormat.class, UserJoinMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]),
        TextInputFormat.class, CommentJoinMapper.class);

job.getConfiguration().set("join.type", args[2]);

...
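For orientation, here is a minimal sketch of what the full driver might look like. This is a reconstruction, not the book's verbatim code: the class name ReduceSideJoinDriver, the job name, and the output path in args[3] are assumptions; the MultipleInputs calls and the join.type setting come from the fragment above, and the mapper and reducer classes are the ones shown in this section (in the book they are nested classes of the driver).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReduceSideJoinDriver {

    public static void main(String[] args) throws Exception {
        // Expected arguments: <user data> <comment data> <join type> <output dir>
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Reduce Side Join");
        job.setJarByClass(ReduceSideJoinDriver.class);

        // One mapper per input path, so each data set is parsed separately
        MultipleInputs.addInputPath(job, new Path(args[0]),
                TextInputFormat.class, UserJoinMapper.class);
        MultipleInputs.addInputPath(job, new Path(args[1]),
                TextInputFormat.class, CommentJoinMapper.class);

        // Make the requested join type visible to the reducer's setup()
        job.getConfiguration().set("join.type", args[2]);

        job.setReducerClass(UserJoinReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path(args[3]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}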

User mapper code. This mapper parses each input line of user data XML. It grabs the user ID associated with each record and outputs it along with the entire input value. It prepends the letter A in front of the entire value. This allows the reducer to know which values came from what data set.

public static class UserJoinMapper extends Mapper<Object, Text, Text, Text> {

    private Text outkey = new Text();
    private Text outvalue = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {

        // Parse the input string into a nice map
        Map<String, String> parsed = MRDPUtils.transformXmlToMap(value.toString());

        String userId = parsed.get("Id");

        // The foreign join key is the user ID
        outkey.set(userId);

        // Flag this record for the reducer and then output
        outvalue.set("A" + value.toString());
        context.write(outkey, outvalue);
    }
}

When you output the value from the map side, the entire record doesn't have to be sent. This is an opportunity to optimize the join by keeping only the fields of data you want to join together. It requires more processing on the map side, but is worth it in the long run. Also, since the foreign key is in the map output key, you don't need to keep that in the value, either.
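As a sketch of that optimization (this is not the book's code): a variant of the user mapper could emit only the columns the join actually needs rather than the whole XML record. The attribute names DisplayName and Reputation are assumptions about the users data set; MRDPUtils.transformXmlToMap is the same helper used above.

// Hypothetical trimmed-down variant: emit only the fields needed downstream
public static class TrimmedUserJoinMapper extends Mapper<Object, Text, Text, Text> {

    private Text outkey = new Text();
    private Text outvalue = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {

        Map<String, String> parsed = MRDPUtils.transformXmlToMap(value.toString());

        // The user ID goes only into the key, not the value
        outkey.set(parsed.get("Id"));

        // Keep just the columns of interest, still tagged with "A"
        outvalue.set("A" + parsed.get("DisplayName") + "\t" + parsed.get("Reputation"));
        context.write(outkey, outvalue);
    }
}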

Comment mapper code. This mapper parses each input line of comment XML. Very similar to the UserJoinMapper, it too grabs the user ID associated with each record and outputs it along with the entire input value. The only difference here is that the XML attribute UserId represents the user that posted the comment, whereas Id in the user data set is the user ID. Here, this mapper prepends the letter B in front of the entire value.

public static class CommentJoinMapper extends Mapper<Object, Text, Text, Text> {

    private Text outkey = new Text();
    private Text outvalue = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {

        Map<String, String> parsed = MRDPUtils.transformXmlToMap(value.toString());

        // The foreign join key is the user ID
        outkey.set(parsed.get("UserId"));

        // Flag this record for the reducer and then output
        outvalue.set("B" + value.toString());
        context.write(outkey, outvalue);
    }
}

Reducer code. The reducer code iterates through all the values of each group, looks at what each record is tagged with, and then puts the record in one of two lists. After all values are binned in either list, the actual join logic is executed using the two lists. The join logic differs slightly based on the type of join, but always involves iterating through both lists and writing to the Context object. The type of join is pulled from the job configuration in the setup method. Let's look at the main reduce method before looking at the join logic.

public static class UserJoinReducer extends Reducer<Text, Text, Text, Text> {

    private static final Text EMPTY_TEXT = new Text("");
    private Text tmp = new Text();
    private ArrayList<Text> listA = new ArrayList<Text>();
    private ArrayList<Text> listB = new ArrayList<Text>();
    private String joinType = null;

    public void setup(Context context) {
        // Get the type of join from our configuration
        joinType = context.getConfiguration().get("join.type");
    }

    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {

        // Clear our lists
        listA.clear();
        listB.clear();

        // Iterate through all our values, binning each record based on what
        // it was tagged with. Make sure to remove the tag!
        for (Text value : values) {
            tmp = value;
            if (tmp.charAt(0) == 'A') {
                listA.add(new Text(tmp.toString().substring(1)));
            } else if (tmp.charAt(0) == 'B') {
                listB.add(new Text(tmp.toString().substring(1)));
            }
        }

        // Execute our join logic now that the lists are filled
        executeJoinLogic(context);
    }

    private void executeJoinLogic(Context context)
            throws IOException, InterruptedException {
        ...
    }
}

The input data types to the reducer are two Text objects. The input key is the foreign join key, which in this example is the user's ID. The input values associated with the foreign key contain one record from the users data set tagged with 'A', as well as all the comments the user posted tagged with 'B'. Any type of data formatting you would want to perform should be done here prior to outputting. For simplicity, the raw XML value from the left data set (users) is output as the key and the raw XML value from the right data set (comments) is output as the value.
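To make this concrete, a single reduce call for a hypothetical user ID 42 might receive values along these lines (abbreviated, made-up records):

key:    42
values: A<row Id="42" DisplayName="alice" ... />
        B<row Id="1001" UserId="42" Text="Nice answer!" ... />
        B<row Id="1002" UserId="42" Text="Thanks." ... />

After binning, listA holds the single user record and listB holds the two comments, with the leading tag character stripped.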

Next, let's look at each of the join types. First up is an inner join. If both the lists are not empty, simply perform two nested for loops and join each of the values together.

if (joinType.equalsIgnoreCase("inner")) {
    // If both lists are not empty, join A with B
    if (!listA.isEmpty() && !listB.isEmpty()) {
        for (Text A : listA) {
            for (Text B : listB) {
                context.write(A, B);
            }
        }
    }
} ...
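With the hypothetical group above (one user record U and two comments C1 and C2), the inner join emits the pairs (U, C1) and (U, C2); a user ID that appears in only one of the two data sets produces no output at all.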

For a left outer join, if the right list is not empty, join A with B. If the right list is empty, output each record of A with an empty string.

... else if (joinType.equalsIgnoreCase("leftouter")) {
    // For each entry in A,
    for (Text A : listA) {
        // If list B is not empty, join A and B
        if (!listB.isEmpty()) {
            for (Text B : listB) {
                context.write(A, B);
            }
        } else {
            // Else, output A by itself
            context.write(A, EMPTY_TEXT);
        }
    }
} ...
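Using the same hypothetical records, a left outer join also emits (U, C1) and (U, C2); a user with no comments is emitted as (U, "") instead, and a comment whose UserId matches no user record produces nothing, since the loop over the empty listA never runs.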

A right outer join is very similar, except the check for an empty list switches from B to A. If the left list is empty, write records from B with an empty output key.

... else if (joinType.equalsIgnoreCase("rightouter")) {
    // For each entry in B,
    for (Text B : listB) {
        // If list A is not empty, join A and B
        if (!listA.isEmpty()) {
            for (Text A : listA) {
                context.write(A, B);
            }
        } else {
            // Else, output B by itself
            context.write(EMPTY_TEXT, B);
        }
    }
} ...
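The mirror image holds for the right outer join: the hypothetical group still yields (U, C1) and (U, C2), a comment with no matching user record is emitted as ("", C), and a user with no comments produces nothing.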

A full outer join is more complex, in that we want to keep all records, ensuring that we join records where appropriate. If list A is not empty, then for every element in A, join with B when the B list is not empty, or output A by itself. If A is empty, then just output B.

... else if (joinType.equalsIgnoreCase("fullouter")) {
    // If list A is not empty
    if (!listA.isEmpty()) {
        // For each entry in A
        for (Text A : listA) {
            // If list B is not empty, join A with B
            if (!listB.isEmpty()) {
                for (Text B : listB) {
                    context.write(A, B);
                }
            } else {
                // Else, output A by itself
                context.write(A, EMPTY_TEXT);
            }
        }
    } else {
        // If list A is empty, just output B
        for (Text B : listB) {
            context.write(EMPTY_TEXT, B);
        }
    }
} ...
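The full outer join keeps every record: (U, C1) and (U, C2) for the hypothetical group, (U, "") for a user with no comments, and ("", C) for a comment with no matching user record.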

For an antijoin, if at least one of the lists is empty, output the records from the non-empty list with an empty Text object.

... else if (joinType.equalsIgnoreCase("anti")) {
    // If list A is empty and B is not empty, or vice versa
    if (listA.isEmpty() ^ listB.isEmpty()) {
        // Iterate both A and B with empty values
        // The previous XOR check ensures exactly one of these lists is
        // empty, so the loop over the empty list is simply skipped
        for (Text A : listA) {
            context.write(A, EMPTY_TEXT);
        }
        for (Text B : listB) {
            context.write(EMPTY_TEXT, B);
        }
    }
}
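The antijoin is effectively the complement of the inner join: the hypothetical group above (a user who has comments) produces no output at all, while a user with no comments comes out as (U, "") and a comment with no matching user record as ("", C).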
