Student Scores (Enhanced Version)

Data

    computer,huangxiaoming,85,86,41,75,93,42,85
    computer,xuzheng,54,52,86,91,42
    computer,huangbo,85,42,96,38
    english,zhaobenshan,54,52,86,91,42,85,75
    english,liuyifei,85,41,75,21,85,96,14
    algorithm,liuyifei,75,85,62,48,54,96,15
    computer,huangjiaju,85,75,86,85,85
    english,liuyifei,76,95,86,74,68,74,48
    english,huangdatou,48,58,67,86,15,33,85
    algorithm,huanglei,76,95,86,74,68,74,48
    algorithm,huangjiaju,85,75,86,85,85,74,86
    computer,huangdatou,48,58,67,86,15,33,85
    english,zhouqi,85,86,41,75,93,42,85,75,55,47,22
    english,huangbo,85,42,96,38,55,47,22
    algorithm,liutao,85,75,85,99,66
    computer,huangzitao,85,86,41,75,93,42,85
    math,wangbaoqiang,85,86,41,75,93,42,85
    computer,liujialing,85,41,75,21,85,96,14,74,86
    computer,liuyifei,75,85,62,48,54,96,15
    computer,liutao,85,75,85,99,66,88,75,91
    computer,huanglei,76,95,86,74,68,74,48
    english,liujialing,75,85,62,48,54,96,15
    math,huanglei,76,95,86,74,68,74,48
    math,huangjiaju,85,75,86,85,85,74,86
    math,liutao,48,58,67,86,15,33,85
    english,huanglei,85,75,85,99,66,88,75,91
    math,xuzheng,54,52,86,91,42,85,75
    math,huangxiaoming,85,75,85,99,66,88,75,91
    math,liujialing,85,86,41,75,93,42,85,75
    english,huangxiaoming,85,86,41,75,93,42,85
    algorithm,huangdatou,48,58,67,86,15,33,85
    algorithm,huangzitao,85,86,41,75,93,42,85,75

Data Description

The number of fields per record is not fixed. The first field is the course name (there are four courses in total: computer, math, english, algorithm); the second field is the student's name; the remaining fields are the scores from each exam.
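To make the record layout concrete, here is a small standalone sketch, plain Java with no Hadoop dependencies, that parses one sample line into course, name, and scores. The class and method names (`RecordParseDemo`, `parse`) are illustrative only; they are not part of the homework code.

```java
public class RecordParseDemo {

    /** Holds one parsed record: course, student name, and exam scores. */
    public static class Record {
        public final String course;
        public final String name;
        public final int[] scores;

        public Record(String course, String name, int[] scores) {
            this.course = course;
            this.name = name;
            this.scores = scores;
        }
    }

    /** Split a CSV line: field 0 is the course, field 1 the name, the rest are scores. */
    public static Record parse(String line) {
        String[] fields = line.split(",");
        int[] scores = new int[fields.length - 2];
        for (int i = 2; i < fields.length; i++) {
            scores[i - 2] = Integer.parseInt(fields[i]);
        }
        return new Record(fields[0], fields[1], scores);
    }

    public static void main(String[] args) {
        Record r = parse("computer,huangxiaoming,85,86,41,75,93,42,85");
        System.out.println(r.course + " " + r.name + " took " + r.scores.length + " exams");
    }
}
```

Because the score count varies per line, every computation below derives it from `fields.length - 2` rather than assuming a fixed width.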

Requirements

1. Count, for each course, how many students took the exam, and compute the course average score.

2. Compute each participating student's average score per course and write the results into separate output files, one file per course, sorted by average score from high to low, with scores kept to one decimal place.

3. For each course, find the highest-scoring student and output the course, student name, and average score.

Problem 1

MRAvgScore1.java

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /**
     * Requirement: count the number of students who took each course and
     * compute the course average score.
     */
    public class MRAvgScore1 {

        public static void main(String[] args) throws Exception {

            Configuration conf1 = new Configuration();
            Configuration conf2 = new Configuration();

            Job job1 = Job.getInstance(conf1);
            Job job2 = Job.getInstance(conf2);

            job1.setJarByClass(MRAvgScore1.class);
            job1.setMapperClass(AvgScoreMapper1.class);
            // job1 is map-only: no reducer class is set.

            job1.setOutputKeyClass(Text.class);
            job1.setOutputValueClass(DoubleWritable.class);

            Path inputPath1 = new Path("D:\\MR\\hw\\work3\\input");
            Path outputPath1 = new Path("D:\\MR\\hw\\work3\\output_hw1_1");

            FileInputFormat.setInputPaths(job1, inputPath1);
            FileOutputFormat.setOutputPath(job1, outputPath1);

            job2.setMapperClass(AvgScoreMapper2.class);
            job2.setReducerClass(AvgScoreReducer2.class);

            job2.setOutputKeyClass(Text.class);
            job2.setOutputValueClass(DoubleWritable.class);

            Path inputPath2 = new Path("D:\\MR\\hw\\work3\\output_hw1_1");
            Path outputPath2 = new Path("D:\\MR\\hw\\work3\\output_hw1_end");

            FileInputFormat.setInputPaths(job2, inputPath2);
            FileOutputFormat.setOutputPath(job2, outputPath2);

            JobControl control = new JobControl("AvgScore");

            ControlledJob aJob = new ControlledJob(job1.getConfiguration());
            ControlledJob bJob = new ControlledJob(job2.getConfiguration());

            // job2 reads job1's output, so it must wait for job1 to finish.
            bJob.addDependingJob(aJob);

            control.addJob(aJob);
            control.addJob(bJob);

            Thread thread = new Thread(control);
            thread.start();

            while (!control.allFinished()) {
                Thread.sleep(1000);
            }
            control.stop();
            System.exit(0);
        }

        /**
         * Input records look like: computer,huangxiaoming,85,86,41,75,93,42,85
         *
         * Step one: emit "course<TAB>name" as the key and that student's
         * average score as the value.
         */
        public static class AvgScoreMapper1 extends Mapper<LongWritable, Text, Text, DoubleWritable> {

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {

                String[] splits = value.toString().split(",");
                // Build the output key: course + name
                String outKey = splits[0] + "\t" + splits[1];
                int length = splits.length;
                int sum = 0;
                // Sum up all exam scores
                for (int i = 2; i < length; i++) {
                    sum += Integer.parseInt(splits[i]);
                }
                // Compute the average; the 1.0 factor avoids integer division,
                // which would silently truncate the fractional part.
                double outValue = sum * 1.0 / (length - 2);

                context.write(new Text(outKey), new DoubleWritable(outValue));
            }
        }

        /**
         * Step two works on the first job's output, which has one line per
         * (course, student) pair in the form:
         *
         *   course <TAB> name <TAB> average
         *
         * Emit the course name as the key and the per-student average as the
         * value, so the reducer sees all averages for one course together.
         */
        public static class AvgScoreMapper2 extends Mapper<LongWritable, Text, Text, DoubleWritable> {

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {

                String[] splits = value.toString().split("\t");
                String outKey = splits[0];
                String outValue = splits[2];

                context.write(new Text(outKey), new DoubleWritable(Double.parseDouble(outValue)));
            }
        }

        /**
         * For each course, count how many students took the exam while
         * iterating over the values, and compute the course average.
         */
        public static class AvgScoreReducer2 extends Reducer<Text, DoubleWritable, Text, Text> {

            @Override
            protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                    throws IOException, InterruptedException {

                int count = 0;
                double sum = 0;
                for (DoubleWritable value : values) {
                    count++;
                    sum += value.get();
                }

                double avg = sum / count;
                String outValue = count + "\t" + avg;
                context.write(key, new Text(outValue));
            }
        }
    }
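The two chained jobs boil down to a two-stage aggregation: first compute each student's average, then group those averages by course to get a head count and a course mean. The same arithmetic can be sanity-checked in plain Java without Hadoop; the sketch below (class name `CourseAvgDemo` is illustrative) mirrors `AvgScoreMapper1` and `AvgScoreReducer2` with a `HashMap` standing in for the shuffle, using two records from the data set above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CourseAvgDemo {

    /** Per-course running totals, mirroring AvgScoreReducer2's loop. */
    static class CourseStat {
        int count;
        double sum;
    }

    /** Group per-student averages by course; returns course -> {count, mean}. */
    public static Map<String, double[]> courseAverages(String[] lines) {
        Map<String, CourseStat> stats = new LinkedHashMap<>();
        for (String line : lines) {
            String[] f = line.split(",");
            int sum = 0;
            for (int i = 2; i < f.length; i++) {
                sum += Integer.parseInt(f[i]);
            }
            // Per-student average; 1.0 * avoids integer division.
            double avg = sum * 1.0 / (f.length - 2);
            CourseStat s = stats.computeIfAbsent(f[0], k -> new CourseStat());
            s.count++;
            s.sum += avg;
        }
        Map<String, double[]> result = new LinkedHashMap<>();
        for (Map.Entry<String, CourseStat> e : stats.entrySet()) {
            CourseStat s = e.getValue();
            result.put(e.getKey(), new double[]{s.count, s.sum / s.count});
        }
        return result;
    }

    public static void main(String[] args) {
        String[] lines = {
            "computer,huangjiaju,85,75,86,85,85",   // avg 83.2
            "computer,huangbo,85,42,96,38",          // avg 65.25
        };
        double[] r = courseAverages(lines).get("computer");
        System.out.println("computer: " + (int) r[0] + " students, mean " + r[1]);
    }
}
```

Note that the course mean here is the mean of the per-student averages, which is what the reducer above computes; it is not weighted by how many exams each student took.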

Problem 2

MRAvgScore2.java

    import java.io.IOException;
    import java.text.DecimalFormat;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /**
     * Requirement: compute each student's average per course, write one
     * result file per course, sorted by average descending, one decimal place.
     */
    public class MRAvgScore2 {

        public static void main(String[] args) throws Exception {

            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf);

            job.setJarByClass(MRAvgScore2.class);
            job.setMapperClass(ScoreMapper3.class);
            job.setReducerClass(ScoreReducer3.class);

            job.setOutputKeyClass(StudentBean.class);
            job.setOutputValueClass(NullWritable.class);

            // One reduce task per course, so each course lands in its own file.
            job.setPartitionerClass(CoursePartitioner.class);
            job.setNumReduceTasks(4);

            Path inputPath = new Path("D:\\MR\\hw\\work3\\output_hw1_1");
            Path outputPath = new Path("D:\\MR\\hw\\work3\\output_hw2_1");

            FileInputFormat.setInputPaths(job, inputPath);
            FileOutputFormat.setOutputPath(job, outputPath);
            boolean isDone = job.waitForCompletion(true);
            System.exit(isDone ? 0 : 1);
        }

        public static class ScoreMapper3 extends Mapper<LongWritable, Text, StudentBean, NullWritable> {

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {

                String[] splits = value.toString().split("\t");

                // Round the average to one decimal place. DecimalFormat.format
                // returns a new String and leaves its argument unchanged, so the
                // result must be captured and parsed back.
                DecimalFormat df = new DecimalFormat("#.0");
                double score = Double.parseDouble(df.format(Double.parseDouble(splits[2])));

                StudentBean student = new StudentBean(splits[0], splits[1], score);
                context.write(student, NullWritable.get());
            }
        }

        public static class ScoreReducer3 extends Reducer<StudentBean, NullWritable, StudentBean, NullWritable> {

            @Override
            protected void reduce(StudentBean key, Iterable<NullWritable> values, Context context)
                    throws IOException, InterruptedException {

                // Keys arrive sorted by StudentBean.compareTo (average
                // descending); write each one out unchanged.
                for (NullWritable nvl : values) {
                    context.write(key, nvl);
                }
            }
        }
    }
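One subtlety in `ScoreMapper3` above: `DecimalFormat.format` does not modify the number it is given; it returns a formatted string, so the result has to be captured and parsed back if the rounded value is needed as a `double`. A standalone sketch of the one-decimal rounding (class name `OneDecimalDemo` is illustrative; the explicit `Locale.ROOT` is an addition to keep the decimal separator a dot regardless of the JVM's default locale):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class OneDecimalDemo {

    /** Round a score to one decimal place, as requirement 2 asks. */
    public static double oneDecimal(double score) {
        // Pin the symbols to Locale.ROOT so the decimal separator is always '.';
        // otherwise Double.parseDouble would fail in comma-separator locales.
        DecimalFormat df = new DecimalFormat("#.0", DecimalFormatSymbols.getInstance(Locale.ROOT));
        // format() returns a new String; parse it back to get the rounded value.
        return Double.parseDouble(df.format(score));
    }

    public static void main(String[] args) {
        System.out.println(oneDecimal(82.285714)); // prints 82.3
    }
}
```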

StudentBean.java

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.WritableComparable;

    public class StudentBean implements WritableComparable<StudentBean> {

        private String course;
        private String name;
        private double avgScore;

        public StudentBean() {
        }

        public StudentBean(String course, String name, double avgScore) {
            this.course = course;
            this.name = name;
            this.avgScore = avgScore;
        }

        public String getCourse() {
            return course;
        }

        public void setCourse(String course) {
            this.course = course;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public double getavgScore() {
            return avgScore;
        }

        public void setavgScore(double avgScore) {
            this.avgScore = avgScore;
        }

        @Override
        public String toString() {
            return course + "\t" + name + "\t" + avgScore;
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            // Must read the fields in the same order write() emits them.
            course = in.readUTF();
            name = in.readUTF();
            avgScore = in.readDouble();
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeUTF(course);
            out.writeUTF(name);
            out.writeDouble(avgScore);
        }

        @Override
        public int compareTo(StudentBean stu) {
            // Sort by average score, highest first.
            double difference = this.avgScore - stu.avgScore;
            if (difference == 0) {
                return 0;
            }
            return difference > 0 ? -1 : 1;
        }
    }
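`StudentBean.compareTo` orders beans by average score, highest first, which is what lets the shuffle's sort phase produce the descending output that requirement 2 asks for. The effect can be checked with a plain `Collections.sort` on a stripped-down stand-in for the bean (a sketch; the real class also carries the Writable serialization, and the names `DescendingSortDemo`/`Entry` are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DescendingSortDemo {

    /** Minimal stand-in for StudentBean: name + average, compared descending. */
    public static class Entry implements Comparable<Entry> {
        public final String name;
        public final double avg;

        public Entry(String name, double avg) {
            this.name = name;
            this.avg = avg;
        }

        @Override
        public int compareTo(Entry other) {
            // Same logic as StudentBean.compareTo: higher average sorts first.
            double diff = this.avg - other.avg;
            if (diff == 0) return 0;
            return diff > 0 ? -1 : 1;
        }
    }

    /** Sort entries highest-average-first and return the names in that order. */
    public static List<String> rank(List<Entry> entries) {
        List<Entry> copy = new ArrayList<>(entries);
        Collections.sort(copy);
        List<String> names = new ArrayList<>();
        for (Entry e : copy) {
            names.add(e.name);
        }
        return names;
    }

    public static void main(String[] args) {
        List<Entry> entries = new ArrayList<>();
        entries.add(new Entry("liutao", 56.0));
        entries.add(new Entry("huangjiaju", 82.0));
        entries.add(new Entry("huanglei", 74.0));
        System.out.println(rank(entries));
    }
}
```

One caveat carried over from the real bean: since `compareTo` ignores the name, two students with equal averages compare as equal, and in the MapReduce job their beans would be grouped under one reduce key.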

Problem 3

MRScore3.java

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /**
     * Requirement: for each course, find the highest-scoring student and
     * output course, name, and average score.
     */
    public class MRScore3 {

        public static void main(String[] args) throws Exception {

            Configuration conf1 = new Configuration();
            Configuration conf2 = new Configuration();

            Job job1 = Job.getInstance(conf1);
            Job job2 = Job.getInstance(conf2);

            job1.setJarByClass(MRScore3.class);
            job1.setMapperClass(MRMapper3_1.class);
            // job1 uses the default (identity) reducer, which sorts the
            // score keys within each course partition.

            job1.setMapOutputKeyClass(IntWritable.class);
            job1.setMapOutputValueClass(StudentBean.class);
            job1.setOutputKeyClass(IntWritable.class);
            job1.setOutputValueClass(StudentBean.class);

            job1.setPartitionerClass(CoursePartitioner2.class);
            job1.setNumReduceTasks(4);

            Path inputPath = new Path("D:\\MR\\hw\\work3\\input");
            Path outputPath = new Path("D:\\MR\\hw\\work3\\output_hw3_1");

            FileInputFormat.setInputPaths(job1, inputPath);
            FileOutputFormat.setOutputPath(job1, outputPath);

            job2.setMapperClass(MRMapper3_2.class);
            job2.setReducerClass(MRReducer3_2.class);

            job2.setMapOutputKeyClass(IntWritable.class);
            job2.setMapOutputValueClass(StudentBean.class);
            job2.setOutputKeyClass(StudentBean.class);
            job2.setOutputValueClass(NullWritable.class);

            Path inputPath2 = new Path("D:\\MR\\hw\\work3\\output_hw3_1");
            Path outputPath2 = new Path("D:\\MR\\hw\\work3\\output_hw3_end");

            FileInputFormat.setInputPaths(job2, inputPath2);
            FileOutputFormat.setOutputPath(job2, outputPath2);

            JobControl control = new JobControl("Score3");

            ControlledJob aJob = new ControlledJob(job1.getConfiguration());
            ControlledJob bJob = new ControlledJob(job2.getConfiguration());

            // job2 reads job1's output, so it must run after job1.
            bJob.addDependingJob(aJob);

            control.addJob(aJob);
            control.addJob(bJob);

            Thread thread = new Thread(control);
            thread.start();

            while (!control.allFinished()) {
                Thread.sleep(1000);
            }
            control.stop();
            System.exit(0);
        }

        /**
         * For each input record, emit the student's highest single-exam score
         * as the key and a StudentBean (course, name, average) as the value.
         */
        public static class MRMapper3_1 extends Mapper<LongWritable, Text, IntWritable, StudentBean> {

            IntWritable outKey = new IntWritable();
            StudentBean outValue = new StudentBean();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {

                String[] splits = value.toString().split(",");
                long sum = 0;
                int max = Integer.MIN_VALUE;

                // Track the running sum and the highest single score. Comparing
                // ints directly avoids the pitfall of sorting the scores as
                // strings, where "9" would sort above "85".
                for (int i = 2; i < splits.length; i++) {
                    int score = Integer.parseInt(splits[i]);
                    sum += score;
                    max = Math.max(max, score);
                }

                outKey.set(max);

                double avg = sum * 1.0 / (splits.length - 2);
                outValue.setCourse(splits[0]);
                outValue.setName(splits[1]);
                outValue.setavgScore(avg);

                context.write(outKey, outValue);
            }
        }

        /**
         * Re-parse job1's text output (tab-separated: maxScore, course, name,
         * average) back into an IntWritable/StudentBean pair.
         */
        public static class MRMapper3_2 extends Mapper<LongWritable, Text, IntWritable, StudentBean> {

            IntWritable outKey = new IntWritable();
            StudentBean outValue = new StudentBean();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {

                String[] splits = value.toString().split("\t");
                outKey.set(Integer.parseInt(splits[0]));

                outValue.setCourse(splits[1]);
                outValue.setName(splits[2]);
                outValue.setavgScore(Double.parseDouble(splits[3]));

                context.write(outKey, outValue);
            }
        }

        /**
         * For each score key, keep the last bean in the group and write it out.
         */
        public static class MRReducer3_2 extends Reducer<IntWritable, StudentBean, StudentBean, NullWritable> {

            StudentBean outKey = new StudentBean();

            @Override
            protected void reduce(IntWritable key, Iterable<StudentBean> values, Context context)
                    throws IOException, InterruptedException {

                // Hadoop reuses the value object while iterating, so copy the
                // fields rather than keeping the reference.
                for (StudentBean value : values) {
                    outKey.setCourse(value.getCourse());
                    outKey.setName(value.getName());
                    outKey.setavgScore(value.getavgScore());
                }

                context.write(outKey, NullWritable.get());
            }
        }
    }
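Both MRAvgScore2 and MRScore3 rely on partitioner classes (`CoursePartitioner`, `CoursePartitioner2`) that are referenced but not listed in this post. The heart of such a partitioner is a fixed course-to-reducer mapping so that, with four reduce tasks, each course ends up in its own output file. Below is a minimal sketch of that mapping as a plain static method; the specific course-to-number assignment and the class name `CoursePartitionDemo` are assumptions, and in the real job the logic would live inside a class extending Hadoop's `Partitioner`, reading the course from the key or value.

```java
public class CoursePartitionDemo {

    /**
     * Map each of the four courses to its own reduce task (0..3).
     * Any stable course -> [0, 4) mapping works; this assignment is
     * illustrative, not taken from the original homework code.
     */
    public static int partitionFor(String course) {
        switch (course) {
            case "computer":  return 0;
            case "math":      return 1;
            case "english":   return 2;
            case "algorithm": return 3;
            default:
                // Fall back to a hash so an unexpected course still lands
                // on a valid partition instead of crashing the job.
                return (course.hashCode() & Integer.MAX_VALUE) % 4;
        }
    }

    public static void main(String[] args) {
        System.out.println("computer -> partition " + partitionFor("computer"));
    }
}
```

Whatever mapping is chosen, it must agree with `setNumReduceTasks(4)`: returning a partition number outside 0..3 would make the job fail at runtime.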
