Map Reduce Application (Join)
We are going to explain how joins work in MapReduce, focusing on the reduce side join and the map side join.
Reduce Side Join
Assume we have two datasets: one contains user information (id, name, ...) and the other contains comments made by users (user id, content, date, ...). We want to join the two datasets to select each username together with the comments that user posted, so this is a typical join example. You can implement all join types this way, including inner join, outer join, and full outer join. As the name indicates, the join is done in the reducer.
- We use one mapper class per dataset (think of a table in an RDBMS): two mapper classes for two datasets, n for n datasets. They are registered with the code below (a complete driver sketch is shown after the reducer code).
MultipleInputs.addInputPath(job, filePath, TextInputFormat.class, UserMapper.class);
MultipleInputs.addInputPath(job, filePath, TextInputFormat.class, CommentsMapper.class);
....
MultipleInputs.addInputPath(job, filePath, TextInputFormat.class, OtherMapper.class);
....
- In each mapper, we just need to output key/value pairs, because most of the work is done in the reducer. When the reduce function iterates over the values for a given key, it needs to know which dataset each value comes from in order to perform the join. The reducer by itself cannot tell whether a value came from UserMapper or CommentsMapper, so in the map function we mark each value, for example by prefixing it with the mapper name:
outkey.set(userId);
// mark this value so the reduce function knows which dataset it came from
outvalue.set("UserMapper" + value.toString());
context.write(outkey, outvalue);
- In the reducer, we get the join type from the configuration and perform the join. There can be multiple reducer tasks running in parallel.
public void setup(Context context) {
    joinType = context.getConfiguration().get("joinType");
}

public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    listUser.clear();
    listComments.clear();
    // isFromUserMapper/isFromCommentsMapper check the tag the map function added to each value,
    // and realContent strips that tag off again
    for (Text t : values) {
        if (isFromUserMapper(t)) {
            listUser.add(realContent(t));
        } else if (isFromCommentsMapper(t)) {
            listComments.add(realContent(t));
        }
    }
    doJoin(context);
}

private void doJoin(Context context) throws IOException, InterruptedException {
    if (joinType.equals("inner")) {
        // inner join: emit output only when both sides have rows for this key
        if (!listUser.isEmpty() && !listComments.isEmpty()) {
            for (Text user : listUser) {
                for (Text comm : listComments) {
                    context.write(user, comm);
                }
            }
        }
    } else if (joinType.equals("leftouter")) {
        // left outer join, full outer join, ... follow the same pattern
    }
    // ... other join types ...
}
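To show how these pieces fit together, here is a minimal driver sketch. The class names (UserMapper, CommentsMapper, JoinReducer), the argument layout, and the "joinType" configuration key mirror the snippets above but are otherwise illustrative assumptions, not the original author's code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReduceSideJoinDriver {
    public static void main(String[] args) throws Exception {
        // expected args: <user input> <comments input> <output> <join type, e.g. inner>
        Configuration conf = new Configuration();
        conf.set("joinType", args[3]);   // read back in the reducer's setup()

        Job job = Job.getInstance(conf, "reduce side join");
        job.setJarByClass(ReduceSideJoinDriver.class);

        // one mapper class per dataset, both tagging their values and feeding the same reducer
        MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, UserMapper.class);
        MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, CommentsMapper.class);

        job.setReducerClass(JoinReducer.class);   // the reducer shown above
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path(args[2]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Here JoinReducer is assumed to be a Reducer<Text, Text, Text, Text> subclass containing the setup/reduce/doJoin methods above, with listUser, listComments, and the tag-handling helpers as its fields and methods.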
In a reduce side join, all of the data is shipped across the network to the reducers, so significant network bandwidth is required.
Map Side Join/Replicated Join
As the name indicates, the join operation is done on the map side, so there is no reducer. It is well suited to joins where there is one large dataset and the other datasets are small enough to be read into memory on a single machine. It is faster than a reduce side join (there is no reduce phase, no intermediate output, and no network transfer).
We still use the same example, joining the user (small) and comments (large) datasets. How do we implement it? The steps are listed below, and a complete mapper sketch follows them.
- Set the number of reducers to 0.
job.setNumReduceTasks(0);
- Add the small dataset to the Hadoop distributed cache. Of the two calls below, the first is deprecated.
DistributedCache.addCacheFile(new Path(filename).toUri(), job.getConfiguration()); // deprecated
job.addCacheFile(new Path(filename).toUri());
- In the mapper's setup function, get the cached files with the code below (again, the first call is deprecated). Read the file and put the key/value pairs into an instance variable such as a HashMap. Setup runs in a single thread, so this is safe.
Path[] localPaths = context.getLocalCacheFiles(); // deprecated
URI[] uris = context.getCacheFiles();
- In the map function, since you have the entire user dataset in the HashMap, you simply look up the key (coming from the split of the comments dataset) in the HashMap; if it exists, you have a match. Because each mapper task only sees one split of the comments dataset, you can only perform an inner join or a left outer join on the comments side.
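A minimal sketch of such a replicated-join mapper is shown below, tying the steps above together. The class name, the tab-separated field layouts, and the symlink name "user" are assumptions for illustration; they have to match how the driver actually added the cache file and how the datasets are formatted.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReplicatedJoinMapper extends Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> userById = new HashMap<>();
    private final Text outKey = new Text();
    private final Text outValue = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // the driver added the small user file with job.addCacheFile(new URI(".../user.txt#user")),
        // so it is readable here through the local symlink "user"
        try (BufferedReader reader = new BufferedReader(new FileReader("user"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split("\t", 2);   // assumed layout: userId <tab> userName
                if (fields.length == 2) {
                    userById.put(fields[0], fields[1]);
                }
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t", 2);   // assumed layout: userId <tab> comment
        if (fields.length < 2) {
            return;
        }
        String userName = userById.get(fields[0]);
        if (userName != null) {   // inner join: emit only matching rows
            outKey.set(userName);
            outValue.set(fields[1]);
            context.write(outKey, outValue);
        }
        // for a left outer join on the comments side, also emit unmatched comments with an empty user name
    }
}

Because job.setNumReduceTasks(0) removes the reduce phase, whatever map() writes is the final output, which is why the join has to be completed entirely inside the mapper.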
What is Hadoop Distributed Cache?
"DistributedCache is a facility provided by the Map-Reduce framework to cache files needed by applications. Once you cache a file for your job, hadoop framework will make it available on(or broadcast to) each and every data nodes (in file system, not in memory) where you map/reduce tasks are running. Then you can access the cache file as local file in your Mapper Or Reducer job. Now you can easily read the cache file and populate some collection (e.g Array, Hashmap etc.) in your code" The cache will be removed once the job is done as they are temporary files.
The size of the cache can be configured in mapred-site.xml.
How to use the Distributed Cache (the API has changed)?
- Add the cache files in the driver.
Note the # sign in the URI. Before it, you specify the absolute path of the data in HDFS. After it, you set a name (a symlink) that becomes the local file name you use in your mapper/reducer.
job.addCacheFile(new URI("/user/ricky/user.txt#user"));
job.addCacheFile(new URI("/user/ricky/org.txt#org"));
return job.waitForCompletion(true) ? 0 : 1;
- Read the cache in your task (mapper/reducer), typically in the setup function. A sketch of actually loading the cached file follows the snippet below.
@Override
protected void setup(Mapper<LongWritable, Text, Text, Text>.Context context)
        throws IOException, InterruptedException {
    if (context.getCacheFiles() != null
            && context.getCacheFiles().length > 0) {
        // the symlink names set in the driver ("user", "org") are valid local file names here
        File some_file = new File("user");
        File other_file = new File("org");
    }
    super.setup(context);
}
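The File handles above are only opened, not read. In practice, setup would load each cached file into a collection at this point. A minimal sketch of that step, assuming the "user" symlink points at a file of tab-separated id/name pairs (the field layout, the users map, and the variable names are illustrative):

// extra imports needed: java.io.BufferedReader, java.io.FileReader, java.util.HashMap, java.util.Map
private final Map<String, String> users = new HashMap<>();

// inside setup(), after the getCacheFiles() check:
try (BufferedReader reader = new BufferedReader(new FileReader("user"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        String[] fields = line.split("\t", 2);   // assumed layout: userId <tab> userName
        if (fields.length == 2) {
            users.put(fields[0], fields[1]);
        }
    }
}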
Reference:
https://www.youtube.com/user/pramodnarayana/videos
https://stackoverflow.com/questions/19678412/number-of-mappers-and-reducers-what-it-means