1. Project name: Data Deduplication

2. Program code:

package com.dedup;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup {

    // The mapper copies the input value (one whole line) to the output key and
    // emits it directly; mind the parameter types and counts.
    public static class Map extends Mapper<Object, Text, Text, Text> {
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("mapper.......");
            System.out.println("key:" + key + " value:" + value);
            Text line = value;
            context.write(line, new Text(" "));
            System.out.println("line:" + line + " value" + value + " context:" + context);
        }
    }

    // The reducer copies the input key to the output key and emits it exactly
    // once per group, which removes the duplicates; mind the parameter types and counts.
    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            System.out.println("reducer.......");
            System.out.println("key:" + key + " values:" + values);
            context.write(key, new Text(" "));
            System.out.println("key:" + key + " values" + values + " context:" + context);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: dedup <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
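How it works: the shuffle phase sorts and groups identical map output keys, so every distinct line reaches the reducer as a single group no matter how many times it occurred in the input. Because the reducer simply re-emits its key, and its input and output types match the map output types, it could also be registered as a combiner to collapse duplicate lines within each map task before the shuffle. This is an optional optimization sketch, not part of the original program:

    // Optional (not in the original job setup): collapse duplicate lines
    // locally in each map task before they are shuffled to the reducer.
    job.setCombinerClass(Reduce.class);

To run the job, package the class into a jar and submit it, for example with "hadoop jar dedup.jar com.dedup.Dedup dedup_input dedup_output"; the jar and input path names here are placeholders, while the log below shows the actual output landing in hdfs://localhost:9000/user/hadoop/dedup_output.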

3. Test data:

file1:
2006-6-9 a
2006-6-10 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-15 c
2006-6-11 c
 
file2:
2006-6-9 b
2006-6-10 a
2006-6-11 b
2006-6-12 d
2006-6-13 a
2006-6-14 c
2006-6-15 d
2006-6-11 c
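
Note that the two files overlap: 2006-6-12 d and 2006-6-13 a each appear in both files, and 2006-6-11 c appears three times in total. The 16 input lines therefore contain only 12 distinct records, which matches the counters in the log below (Map input records=16, Reduce input groups=12, Reduce output records=12).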
 
4. Run log:
14/09/21 16:51:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/21 16:51:16 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/09/21 16:51:16 INFO input.FileInputFormat: Total input paths to process : 2
14/09/21 16:51:16 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/21 16:51:16 INFO mapred.JobClient: Running job: job_local_0001
14/09/21 16:51:16 INFO util.ProcessTree: setsid exited with exit code 0
14/09/21 16:51:16 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2e9aa770
14/09/21 16:51:16 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:16 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:16 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0  value:2006-6-9 a
line:2006-6-9 a value2006-6-9 a  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:11  value:2006-6-10 b
line:2006-6-10 b value2006-6-10 b  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:23  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:35  value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:47  value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:59  value:2006-6-14 b
line:2006-6-14 b value2006-6-14 b  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:71  value:2006-6-15 c
line:2006-6-15 c value2006-6-15 c  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:83  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
14/09/21 16:51:16 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:16 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:16 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
14/09/21 16:51:17 INFO mapred.JobClient:  map 0% reduce 0%
14/09/21 16:51:19 INFO mapred.LocalJobRunner:
14/09/21 16:51:19 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
14/09/21 16:51:19 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3697e580
14/09/21 16:51:19 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:19 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:19 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0  value:2006-6-9 b
line:2006-6-9 b value2006-6-9 b  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:11  value:2006-6-10 a
line:2006-6-10 a value2006-6-10 a  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:23  value:2006-6-11 b
line:2006-6-11 b value2006-6-11 b  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:35  value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:47  value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:59  value:2006-6-14 c
line:2006-6-14 c value2006-6-14 c  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:71  value:2006-6-15 d
line:2006-6-15 d value2006-6-15 d  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:83  value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c  context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
14/09/21 16:51:19 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:19 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:19 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
14/09/21 16:51:20 INFO mapred.JobClient:  map 100% reduce 0%
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
14/09/21 16:51:22 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3c844c07
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Merger: Merging 2 sorted segments
14/09/21 16:51:22 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 258 bytes
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
reducer.......
key:2006-6-10 a  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-10 a  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-10 b  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-10 b  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-11 b  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-11 b  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-11 c  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-11 c  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-12 d  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-12 d  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-13 a  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-13 a  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-14 b  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-14 b  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-14 c  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-14 c  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-15 c  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-15 c  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-15 d  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-15 d  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-9 a  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-9 a  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-9 b  values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-9 b  valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78  context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
14/09/21 16:51:22 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/09/21 16:51:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/user/hadoop/dedup_output
14/09/21 16:51:25 INFO mapred.LocalJobRunner: reduce > reduce
14/09/21 16:51:25 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/09/21 16:51:26 INFO mapred.JobClient:  map 100% reduce 100%
14/09/21 16:51:26 INFO mapred.JobClient: Job complete: job_local_0001
14/09/21 16:51:26 INFO mapred.JobClient: Counters: 22
14/09/21 16:51:26 INFO mapred.JobClient:   Map-Reduce Framework
14/09/21 16:51:26 INFO mapred.JobClient:     Spilled Records=32
14/09/21 16:51:26 INFO mapred.JobClient:     Map output materialized bytes=266
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce input records=16
14/09/21 16:51:26 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient:     Map input records=16
14/09/21 16:51:26 INFO mapred.JobClient:     SPLIT_RAW_BYTES=232
14/09/21 16:51:26 INFO mapred.JobClient:     Map output bytes=222
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/09/21 16:51:26 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce input groups=12
14/09/21 16:51:26 INFO mapred.JobClient:     Combine output records=0
14/09/21 16:51:26 INFO mapred.JobClient:     Reduce output records=12
14/09/21 16:51:26 INFO mapred.JobClient:     Map output records=16
14/09/21 16:51:26 INFO mapred.JobClient:     Combine input records=0
14/09/21 16:51:26 INFO mapred.JobClient:     CPU time spent (ms)=0
14/09/21 16:51:26 INFO mapred.JobClient:     Total committed heap usage (bytes)=813170688
14/09/21 16:51:26 INFO mapred.JobClient:   File Input Format Counters
14/09/21 16:51:26 INFO mapred.JobClient:     Bytes Read=190
14/09/21 16:51:26 INFO mapred.JobClient:   FileSystemCounters
14/09/21 16:51:26 INFO mapred.JobClient:     HDFS_BYTES_READ=475
14/09/21 16:51:26 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=122061
14/09/21 16:51:26 INFO mapred.JobClient:     FILE_BYTES_READ=1665
14/09/21 16:51:26 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=166
14/09/21 16:51:26 INFO mapred.JobClient:   File Output Format Counters
14/09/21 16:51:26 INFO mapred.JobClient:     Bytes Written=166
5. Run results:
2006-6-10 a    
2006-6-10 b    
2006-6-11 b    
2006-6-11 c    
2006-6-12 d    
2006-6-13 a    
2006-6-14 b    
2006-6-14 c    
2006-6-15 c    
2006-6-15 d    
2006-6-9 a    
2006-6-9 b
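
The result contains each distinct line exactly once. The ordering is a byproduct of the framework sorting keys before the reduce phase; since the keys are compared as plain strings, 2006-6-9 sorts after 2006-6-15 (the character '9' compares greater than '1').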
