1. Project name:

2. Program code:

package com.sort;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Sort {
    // The map parses each input value (one number per line) into an IntWritable
    // and emits it as the output key, so the shuffle sorts the numbers for us.
    public static class Map extends Mapper<Object, Text, IntWritable, IntWritable> {
        public static IntWritable data = new IntWritable();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("Mapper.................");
            System.out.println("key:" + key + " value:" + value);

            String line = value.toString();
            data.set(Integer.parseInt(line));
            context.write(data, new IntWritable(1));
            System.out.println("data:" + data + " context:" + context);
        }
    }

    // The reduce copies the input key to the output value and emits it once per
    // element of the value list, so duplicate numbers keep their multiplicity.
    // The global linenum holds the key's rank in the sorted order.
    public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
        public static IntWritable linenum = new IntWritable(1);

        public void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            System.out.println("Reducer.................");
            System.out.println("key:" + key + " value:" + values);

            for (IntWritable val : values) {
                context.write(linenum, key);
                System.out.println("linenum:" + linenum + " key:" + key + " context:" + context);
                linenum = new IntWritable(linenum.get() + 1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.out.println("Usage: sort <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "sort");
        job.setJarByClass(Sort.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
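The ascending order in the final output comes from the framework itself: the map emits each number as an IntWritable key, the shuffle sorts the keys with IntWritable's default comparator before handing them to the reducer, and the reducer only attaches a rank. As a minimal sketch (not part of the original program, assuming the same Hadoop 1.x API as above), a descending sort only needs a comparator that inverts that default ordering:

// Hypothetical addition to the Sort class: negates IntWritable's default
// byte-level comparison so the shuffle delivers keys in descending order.
public static class DescendingIntComparator extends IntWritable.Comparator {
    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        return -super.compare(b1, s1, l1, b2, s2, l2);
    }
}

// In main(), before submitting the job:
// job.setSortComparatorClass(DescendingIntComparator.class);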
3. Test data:
file1:
2
32
654
32
15
756
65223
 
file2:
5956
22
650
92
 
file3:
26
54
6
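The job reads its input from HDFS, so the three files above must be uploaded before it is run. Below is a minimal staging sketch, not from the original post; the NameNode address matches the run log, but the input directory /user/hadoop/sort_input and the helper class itself are assumptions.

// Hypothetical helper class, separate from Sort: copies the three local test
// files into an HDFS input directory using the standard FileSystem API.
package com.sort;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StageInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same NameNode address as shown in the run log (Hadoop 1.x config key).
        conf.set("fs.default.name", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);
        for (String name : new String[] { "file1", "file2", "file3" }) {
            // Local file -> assumed input directory for the sort job.
            fs.copyFromLocalFile(new Path(name), new Path("/user/hadoop/sort_input/" + name));
        }
        fs.close();
    }
}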
 
4. Execution log:
14/09/21 17:44:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/21 17:44:27 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/09/21 17:44:28 INFO input.FileInputFormat: Total input paths to process : 3
14/09/21 17:44:28 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/21 17:44:28 INFO mapred.JobClient: Running job: job_local_0001
14/09/21 17:44:28 INFO util.ProcessTree: setsid exited with exit code 0
14/09/21 17:44:28 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@365f3cec
14/09/21 17:44:28 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 17:44:28 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 17:44:28 INFO mapred.MapTask: record buffer = 262144/327680
Mapper.................
key:0  value:2
data:2 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:2  value:32
data:32 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:5  value:654
data:654 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:9  value:32
data:32 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:12  value:15
data:15 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:15  value:756
data:756 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
Mapper.................
key:19  value:65223
data:65223 context:org.apache.hadoop.mapreduce.Mapper$Context@40804be
14/09/21 17:44:28 INFO mapred.MapTask: Starting flush of map output
14/09/21 17:44:28 INFO mapred.MapTask: Finished spill 0
14/09/21 17:44:28 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
14/09/21 17:44:29 INFO mapred.JobClient:  map 0% reduce 0%
14/09/21 17:44:31 INFO mapred.LocalJobRunner:
14/09/21 17:44:31 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
14/09/21 17:44:31 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5c72877c
14/09/21 17:44:31 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 17:44:31 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 17:44:31 INFO mapred.MapTask: record buffer = 262144/327680
Mapper.................
key:0  value:5956
data:5956 context:org.apache.hadoop.mapreduce.Mapper$Context@5c0134fb
Mapper.................
key:5  value:22
data:22 context:org.apache.hadoop.mapreduce.Mapper$Context@5c0134fb
Mapper.................
key:8  value:650
data:650 context:org.apache.hadoop.mapreduce.Mapper$Context@5c0134fb
Mapper.................
key:12  value:92
data:92 context:org.apache.hadoop.mapreduce.Mapper$Context@5c0134fb
14/09/21 17:44:31 INFO mapred.MapTask: Starting flush of map output
14/09/21 17:44:31 INFO mapred.MapTask: Finished spill 0
14/09/21 17:44:31 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
14/09/21 17:44:32 INFO mapred.JobClient:  map 100% reduce 0%
14/09/21 17:44:34 INFO mapred.LocalJobRunner:
14/09/21 17:44:34 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
14/09/21 17:44:34 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5c88c5d3
14/09/21 17:44:34 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 17:44:34 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 17:44:34 INFO mapred.MapTask: record buffer = 262144/327680
Mapper.................
key:0  value:26
data:26 context:org.apache.hadoop.mapreduce.Mapper$Context@36a05d78
Mapper.................
key:3  value:54
data:54 context:org.apache.hadoop.mapreduce.Mapper$Context@36a05d78
Mapper.................
key:6  value:6
data:6 context:org.apache.hadoop.mapreduce.Mapper$Context@36a05d78
14/09/21 17:44:34 INFO mapred.MapTask: Starting flush of map output
14/09/21 17:44:34 INFO mapred.MapTask: Finished spill 0
14/09/21 17:44:34 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
14/09/21 17:44:37 INFO mapred.LocalJobRunner:
14/09/21 17:44:37 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
14/09/21 17:44:37 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3c521e5d
14/09/21 17:44:37 INFO mapred.LocalJobRunner:
14/09/21 17:44:37 INFO mapred.Merger: Merging 3 sorted segments
14/09/21 17:44:37 INFO mapred.Merger: Down to the last merge-pass, with 3 segments left of total size: 146 bytes
14/09/21 17:44:37 INFO mapred.LocalJobRunner:
Reducer.................
key:2  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:1  key:2 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:6  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:2  key:6 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:15  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:3  key:15 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:22  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:4  key:22 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:26  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:5  key:26 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:32  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:6  key:32 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
linenum:7  key:32 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:54  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:8  key:54 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:92  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:9  key:92 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:650  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:10  key:650 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:654  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:11  key:654 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:756  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:12  key:756 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:5956  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:13  key:5956 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
Reducer.................
key:65223  value:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@38839cf7
linenum:14  key:65223 context:org.apache.hadoop.mapreduce.Reducer$Context@23475bbf
14/09/21 17:44:37 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/09/21 17:44:37 INFO mapred.LocalJobRunner:
14/09/21 17:44:37 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/09/21 17:44:37 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/user/hadoop/sort_output
14/09/21 17:44:40 INFO mapred.LocalJobRunner: reduce > reduce
14/09/21 17:44:40 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/09/21 17:44:41 INFO mapred.JobClient:  map 100% reduce 100%
14/09/21 17:44:41 INFO mapred.JobClient: Job complete: job_local_0001
14/09/21 17:44:41 INFO mapred.JobClient: Counters: 22
14/09/21 17:44:41 INFO mapred.JobClient:   Map-Reduce Framework
14/09/21 17:44:41 INFO mapred.JobClient:     Spilled Records=28
14/09/21 17:44:41 INFO mapred.JobClient:     Map output materialized bytes=158
14/09/21 17:44:41 INFO mapred.JobClient:     Reduce input records=14
14/09/21 17:44:41 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
14/09/21 17:44:41 INFO mapred.JobClient:     Map input records=14
14/09/21 17:44:41 INFO mapred.JobClient:     SPLIT_RAW_BYTES=345
14/09/21 17:44:41 INFO mapred.JobClient:     Map output bytes=112
14/09/21 17:44:41 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/09/21 17:44:41 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
14/09/21 17:44:41 INFO mapred.JobClient:     Reduce input groups=13
14/09/21 17:44:41 INFO mapred.JobClient:     Combine output records=0
14/09/21 17:44:41 INFO mapred.JobClient:     Reduce output records=14
14/09/21 17:44:41 INFO mapred.JobClient:     Map output records=14
14/09/21 17:44:41 INFO mapred.JobClient:     Combine input records=0
14/09/21 17:44:41 INFO mapred.JobClient:     CPU time spent (ms)=0
14/09/21 17:44:41 INFO mapred.JobClient:     Total committed heap usage (bytes)=1325400064
14/09/21 17:44:41 INFO mapred.JobClient:   File Input Format Counters
14/09/21 17:44:41 INFO mapred.JobClient:     Bytes Read=48
14/09/21 17:44:41 INFO mapred.JobClient:   FileSystemCounters
14/09/21 17:44:41 INFO mapred.JobClient:     HDFS_BYTES_READ=161
14/09/21 17:44:41 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=162878
14/09/21 17:44:41 INFO mapred.JobClient:     FILE_BYTES_READ=3682
14/09/21 17:44:41 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=81
14/09/21 17:44:41 INFO mapred.JobClient:   File Output Format Counters
14/09/21 17:44:41 INFO mapred.JobClient:     Bytes Written=81
 
5. Execution result:
1    2
2    6
3    15
4    22
5    26
6    32
7    32
8    54
9    92
10    650
11    654
12    756
13    5956
14    65223
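Note that 32 keeps both of its occurrences (ranks 6 and 7) because the reducer writes the key once for every element of its value list. For checking the result programmatically, here is a small sketch, not from the original post, that reads the output back from HDFS; the output directory matches the FileOutputCommitter line in the log, and part-r-00000 is the framework's default file name for a single reducer.

// Hypothetical verification helper: prints each "rank<TAB>value" line of the
// reducer output stored on HDFS.
package com.sort;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintSortOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);
        Path result = new Path("/user/hadoop/sort_output/part-r-00000");
        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(result)));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
        fs.close();
    }
}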
