In the last post we saw how to run a MapReduce job on Hadoop. Now we're going to analyze how a MapReduce program works. And, if you don't know what MapReduce is, the short answer is "MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster" (from Wikipedia).

Let's take a look at the source code: it contains a Java main method that Hadoop calls, and two static nested classes, the mapper and the reducer. The code for the mapper is:

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

As we can see, this class extends Mapper, which - as its JavaDoc says - maps input key/value pairs to a set of intermediate key/value pairs; when the job starts, the Hadoop framework passes to each mapper a chunk of data (a subset of the whole dataset) to process. The output of the mappers will be the input of the reducers (that's not the complete story, but we'll get there in another post). The Mapper class uses Java generics - Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> - to specify what kind of data it will process; in this example our class extends Mapper with Object and Text as the classes of the input key/value pairs, and Text and IntWritable as the classes of the key/value pairs it will output to the reducers (we'll see the details of those classes in a moment).
Let's examine the code: there's only one overridden method, map(), which takes the key/value pair and the Hadoop context as arguments; every time Hadoop calls this method, it passes as the key the offset in the file where the value starts, and as the value a line of the text file we're reading.
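For example, if the file starts with the two lines "to be or" and "not to be", Hadoop will call map(0, "to be or", context) first and then map(9, "not to be", context): with the default input format the key is just the byte offset at which each line starts (9 here, because the first line plus its newline takes nine bytes in a plain ASCII file).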
Hadoop has some basic types that are optimized for network serialization; here is a table with a few of them:

Java type    Hadoop type
Integer      IntWritable
Long         LongWritable
Double       DoubleWritable
String       Text
Map          MapWritable
Array        ArrayWritable
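
To get a feel for how these types behave, here is a minimal sketch - not part of the WordCount source, and assuming only that hadoop-common is on the classpath - that serializes a Text/IntWritable pair to bytes and reads it back; this is essentially what Hadoop does when it moves intermediate data around the cluster:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// hypothetical demo class, not part of the WordCount job
public class WritableDemo {
    public static void main(String[] args) throws IOException {
        Text word = new Text("hadoop");
        IntWritable count = new IntWritable(1);

        // serialize the pair to a compact binary form, as Hadoop does over the network
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        word.write(out);
        count.write(out);

        // deserialize it back into fresh objects
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        Text wordCopy = new Text();
        IntWritable countCopy = new IntWritable();
        wordCopy.readFields(in);
        countCopy.readFields(in);

        System.out.println(wordCopy + " -> " + countCopy.get());   // prints: hadoop -> 1
    }
}

The key point is that every class in the table implements the Writable interface (write() and readFields()), which is what lets Hadoop move keys and values between nodes without relying on Java's heavier built-in serialization.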

Now it's easy to understand what this method does: for every line of the book it receives, it uses a StringTokenizer to split the line into single words; it then sets each word into the Text object, maps it to the value of 1, and writes the pair to the Hadoop context, which will deliver it to the reducers.
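For the two example lines above, the mapper emits (to, 1), (be, 1), (or, 1) from the first call and (not, 1), (to, 1), (be, 1) from the second; before the reducers run, the framework groups these pairs by key.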

Let's now look at the reducer:

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

This time the key/value types that the overridden reduce() method receives are the same as the output types of the TokenizerMapper class; that's because - as we said - the output of the mappers is the input of the reducers. The Hadoop framework takes care of calling this method once for every distinct key coming from the mappers, together with all the values mapped to that key; as we saw before, the keys are the words of the file we're counting.
The reduce method has to sum all the occurrences of a single word, so it initializes a sum variable to 0 and then loops over all the values it receives for that specific key, adding each value to sum. When the loop ends and all the occurrences of that word are counted, the method sets the total into an IntWritable object and writes it to the Hadoop context to be output to the user.
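Continuing the example, reduce() is called with (be, [1, 1]), (not, [1]), (or, [1]) and (to, [1, 1]), and it writes out (be, 2), (not, 1), (or, 1) and (to, 2).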

We're now at the main method of the class, which is the one that is called by Hadoop when it's executed as a JAR file.

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
        System.err.println("Usage: wordcount <in> <out>");
        System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}

In this method, we first set up a Configuration object and then check the number of arguments passed to the program; if the number of arguments is correct, we create a Job object and set a few values to make it work. Let's dive into the details:

  • setJarByClass: sets the JAR by finding where a given class came from; this needs an explanation: Hadoop distributes the code to execute on the cluster as a JAR file; instead of specifying the name of the JAR, we tell Hadoop the name of a class that every instance on the cluster has to look for inside its classpath
  • setMapperClass: sets the class that will be executed as the mapper
  • setCombinerClass: sets the class that will be executed as the combiner (we'll explain what a combiner is in a future post)
  • setReducerClass: sets the class that will be executed as the reducer
  • setOutputKeyClass: sets the class that will be used as the key for outputting data to the user
  • setOutputValueClass: sets the class that will be used as the value for outputting data to the user

Then we tell Hadoop where it can find the input with the FileInputFormat.addInputPath() method and where it has to write the output with the FileOutputFormat.setOutputPath() method. The last call, waitForCompletion(), submits the job to the cluster and waits for it to finish.
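When the job completes, the output directory contains one file per reducer (part-r-00000, part-r-00001, ...); with the default output format every line is a key, a tab and a value, so for the two example lines we used earlier the result would be:

    be      2
    not     1
    or      1
    to      2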

Now that the mechanics of a MapReduce job are clearer, we can start playing with it.

from: http://andreaiacono.blogspot.com/2014/02/mapreduce-job-explained.html
