[MapReduce] Common Computation Patterns Explained
A while ago I took the MapReduce training course offered by 炼数成金 (Dataguru). The homework examples from the course are fairly representative, which makes them ideal for illustrating these problems.
There is also a useful foreign textbook on MapReduce; click here to download it.
I. MapReduce application scenarios
II. The MapReduce mechanism
III. Common computation patterns
Here is a worked example. Oracle's default SCOTT schema contains the DEPT and EMP tables; for convenience, they are written out here as two plain-text files:
1. Department table (dept.txt)
DEPTNO,DNAME,LOC // department number, department name, location
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
30,SALES,CHICAGO
40,OPERATIONS,BOSTON
2. Employee table (emp.txt)
EMPNO,ENAME,JOB,HIREDATE,SAL,COMM,DEPTNO,MGR // employee number, name, job, hire date, salary, commission, department number, manager's employee number
7369,SMITH,CLERK,1980-12-17 00:00:00.0,800,,20,7902
7499,ALLEN,SALESMAN,1981-02-20 00:00:00.0,1600,300,30,7698
7521,WARD,SALESMAN,1981-02-22 00:00:00.0,1250,500,30,7698
7566,JONES,MANAGER,1981-04-02 00:00:00.0,2975,,20,7839
7654,MARTIN,SALESMAN,1981-09-28 00:00:00.0,1250,1400,30,7698
7698,BLAKE,MANAGER,1981-05-01 00:00:00.0,2850,,30,7839
7782,CLARK,MANAGER,1981-06-09 00:00:00.0,2450, ,10,7839
7839,KING,PRESIDENT,1981-11-17 00:00:00.0,5000,,10,
7844,TURNER,SALESMAN,1981-09-08 00:00:00.0,1500,0,30,7698
7900,JAMES,CLERK,1981-12-03 00:00:00.0,950,,30,7698
7902,FORD,ANALYST,1981-12-03 00:00:00.0,3000,,20,7566
7934,MILLER,CLERK,1982-01-23 00:00:00.0,1300,,10,7782
3. Parsing the rows into beans
Emp.java
public Emp(String inStr) {
    String[] split = inStr.split(",");
    this.empno = split[0];
    this.ename = split[1];
    this.job = split[2];
    this.hiredate = split[3];
    this.sal = (split[4].isEmpty() ? "0" : split[4]); // default a missing salary to "0" so parseInt succeeds
    this.comm = split[5];
    this.deptno = split[6];
    try {
        this.mgr = split[7];
    } catch (IndexOutOfBoundsException e) { // String.split drops trailing empty tokens, so MGR can be absent (e.g. KING's row)
        this.mgr = "";
    }
}
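The original post never shows the rest of the Emp class. Here is a minimal sketch of the fields and getters that the mappers in section 4 rely on; they are reconstructed from the constructor above and the getter calls below, not taken from the author's source:

// Assumed field and getter declarations for Emp (reconstructed, not the author's original code).
private String empno, ename, job, hiredate, sal, comm, deptno, mgr;

public String getEmpno()    { return empno; }
public String getEname()    { return ename; }
public String getJob()      { return job; }
public String getHiredate() { return hiredate; }
public String getSal()      { return sal; }
public String getComm()     { return comm; }
public String getDeptno()   { return deptno; }
public String getMgr()      { return mgr; }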
Dept.java
public Dept(String string) {
String[] split = string.split(",");
this.deptno = split[0];
this.dname = split[1];
this.loc = split[2];
}
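The snippets below also reference an ErrCount counter, a WriteErrLine helper, and (implicitly) a job driver, none of which appear in the original post. The following is a minimal sketch of what they might look like against the old org.apache.hadoop.mapred API the code targets; the enum, the helper, and the input/output paths are all my assumptions:

import java.io.FileWriter;
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Counter incremented by every mapper when it skips an unparseable line (assumed definition).
enum ErrCount { LINESKIP }

// Crude local-file logger for skipped lines (assumed definition); a production job
// would rely on the counter alone or write to HDFS instead.
class WriteErrLine {
    static synchronized void write(String file, String line) {
        try (FileWriter fw = new FileWriter(file, true)) {
            fw.write(line + "\n");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// Driver for the summation job of section 4.1; the other models are wired up the same
// way, swapping in the matching Map_n/Reduce_n pair and its output types.
public class SumDriver {
    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(SumDriver.class);
        conf.setJobName("sum-salary-by-dept");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map_1.class);   // assumes Map_1/Reduce_1 are visible from here
        conf.setReducerClass(Reduce_1.class);
        FileInputFormat.setInputPaths(conf, new Path("./input"));
        FileOutputFormat.setOutputPath(conf, new Path("./output"));
        JobClient.runJob(conf);
    }
}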
4. Model analysis
4.1 Summation
public static class Map_1 extends MapReduceBase implements Mapper<Object, Text, Text, IntWritable> {
    public void map(Object key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            output.collect(new Text(emp.getDeptno()), new IntWritable(Integer.parseInt(emp.getSal()))); // { k = dept number, v = salary }
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_1 extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) { // add up every salary seen for this department
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}
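Worked out by hand from the sample data above, the expected output is: 10 → 8750, 20 → 6775, 30 → 9400 (department 40 has no employees, so no key is emitted for it).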
4.3 Averaging
public static class Map_2 extends MapReduceBase implements Mapper<Object, Text, Text, IntWritable> {
    public void map(Object key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            output.collect(new Text(emp.getDeptno()), new IntWritable(Integer.parseInt(emp.getSal()))); // { k = dept number, v = salary }
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_2 extends MapReduceBase implements Reducer<Text, IntWritable, Text, Text> {
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        double sum = 0; // total salary for the department
        int count = 0;  // head count
        while (values.hasNext()) {
            count++;
            sum += values.next().get();
        }
        output.collect(key, new Text(count + " " + sum / count)); // { k = dept number, v = head count + average salary }
    }
}
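By hand, the expected output is: 10 → 3 employees averaging about 2916.67, 20 → 3 averaging about 2258.33, 30 → 6 averaging about 1566.67.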
4.4 Grouping and sorting (earliest hire per department)
public static class Map_3 extends MapReduceBase implements Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            output.collect(new Text(emp.getDeptno()), new Text(emp.getHiredate() + "~" + emp.getEname())); // { k = dept number, v = hire date ~ name }
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_3 extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd"); // matches the "1980-12-17 ..." prefix in the data
        Date minDate = null;   // earliest hire date seen so far
        String minName = null; // employee hired on that date
        while (values.hasNext()) {
            try {
                String[] strings = values.next().toString().split("~"); // [0] = hire date, [1] = name
                Date d = sdf.parse(strings[0].substring(0, 10));
                if (minDate == null || d.before(minDate)) {
                    minDate = d;
                    minName = strings[1]; // remember the name together with the date
                }
            } catch (ParseException e) {
                e.printStackTrace();
            }
        }
        if (minDate != null) {
            output.collect(key, new Text(sdf.format(minDate) + " " + minName));
        }
    }
}
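Expected output, derived by hand: 10 → 1981-06-09 CLARK, 20 → 1980-12-17 SMITH, 30 → 1981-02-20 ALLEN.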
4.5 Multi-table join
public static class Map_4 extends MapReduceBase implements Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        try {
            // Tag each record with its source file so the reducer can tell the two tables apart.
            String fileName = ((FileSplit) reporter.getInputSplit()).getPath().getName();
            if (fileName.equalsIgnoreCase("emp.txt")) {
                Emp emp = new Emp(value.toString());
                output.collect(new Text(emp.getDeptno()), new Text("A#" + emp.getSal())); // A# marks the EMP side
            }
            if (fileName.equalsIgnoreCase("dept.txt")) {
                Dept dept = new Dept(value.toString());
                output.collect(new Text(dept.getDeptno()), new Text("B#" + dept.getLoc())); // B# marks the DEPT side
            }
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_4 extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        String v;
        Vector<String> empList = new Vector<String>();  // salaries from the EMP side
        Vector<String> deptList = new Vector<String>(); // locations from the DEPT side
        while (values.hasNext()) {
            v = values.next().toString();
            if (v.startsWith("A#")) {
                empList.add(v.substring(2));
            }
            if (v.startsWith("B#")) {
                deptList.add(v.substring(2));
            }
        }
        for (String location : deptList) {
            double sumSal = 0; // reset per location so totals do not accumulate across locations
            for (String salary : empList) {
                sumSal += Integer.parseInt(salary); // total salary of the employees at this location
            }
            output.collect(new Text(location), new Text(Double.toString(sumSal)));
        }
    }
}
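Expected output, derived by hand: NEW YORK → 8750.0, DALLAS → 6775.0, CHICAGO → 9400.0, and BOSTON → 0.0 (department 40 has a location but no employees).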
4.6 Self-join (single-table join)
public static class Map_5 extends MapReduceBase implements Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            // Employee side: { k = manager's empno, v = name ~ salary }
            output.collect(new Text(emp.getMgr()), new Text("A#" + emp.getEname() + "~" + emp.getSal()));
            // "Manager" side: { k = own empno, v = name ~ salary }
            output.collect(new Text(emp.getEmpno()), new Text("B#" + emp.getEname() + "~" + emp.getSal()));
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_5 extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        String value;
        Vector<String> empList = new Vector<String>(); // subordinates reporting to this empno
        Vector<String> mgrList = new Vector<String>(); // the employee whose empno this is
        while (values.hasNext()) {
            value = values.next().toString();
            if (value.startsWith("A#")) {
                empList.add(value.substring(2));
            }
            if (value.startsWith("B#")) {
                mgrList.add(value.substring(2));
            }
        }
        // Emit every subordinate who earns more than their manager.
        String empName, empSal, mgrSal;
        for (String employee : empList) {
            for (String mgr : mgrList) {
                String[] empInfo = employee.split("~");
                empName = empInfo[0];
                empSal = empInfo[1];
                String[] mgrInfo = mgr.split("~");
                mgrSal = mgrInfo[1];
                if (Integer.parseInt(empSal) > Integer.parseInt(mgrSal)) {
                    output.collect(key, new Text(empName + " " + empSal));
                }
            }
        }
    }
}
Execution result:
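Working it out by hand: only FORD (3000) earns more than his manager JONES (2975), so the single output line is 7566 FORD 3000.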
4.7 TOP N
public static class Map_8 extends MapReduceBase implements Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            // { k = a constant, v = name ~ salary } -- one key funnels every record into a single reducer
            output.collect(new Text("1"), new Text(emp.getEname() + "~" + emp.getSal()));
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_8 extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        // A TreeMap keeps its keys sorted; with a reversed comparator the highest salaries
        // come first, which is exactly what top N needs. (Equal salaries overwrite each other.)
        Map<Integer, String> emp = new TreeMap<Integer, String>(Collections.reverseOrder());
        while (values.hasNext()) {
            String[] valStrings = values.next().toString().split("~");
            emp.put(Integer.parseInt(valStrings[1]), valStrings[0]); // { k = salary, v = name }
        }
        int count = 0;
        for (Iterator<Integer> keySet = emp.keySet().iterator(); keySet.hasNext();) {
            if (count < 3) { // N = 3
                Integer currentKey = keySet.next();
                output.collect(new Text(emp.get(currentKey)), new Text(currentKey.toString()));
                count++;
            } else {
                break;
            }
        }
    }
}
Execution result:
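Working it out by hand, with N = 3 the output is KING 5000, FORD 3000, JONES 2975.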
4.8 Sorting in descending order
public static class Map_9 extends MapReduceBase implements Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        try {
            Emp emp = new Emp(value.toString());
            String comm = emp.getComm().trim(); // COMM is often empty; treat it as 0 rather than skipping the row
            int totalSal = (comm.isEmpty() ? 0 : Integer.parseInt(comm)) + Integer.parseInt(emp.getSal());
            output.collect(new Text("1"), new Text(emp.getEname() + "~" + totalSal)); // { k = a constant, v = name ~ total pay }
        } catch (Exception e) {
            reporter.getCounter(ErrCount.LINESKIP).increment(1);
            WriteErrLine.write("./input/" + this.getClass().getSimpleName() + "err_lines", reporter.getCounter(ErrCount.LINESKIP).getCounter() + " " + value.toString());
        }
    }
}

public static class Reduce_9 extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        // Override the comparator so the TreeMap keeps its keys (total pay) in descending order.
        Map<Integer, String> emp = new TreeMap<Integer, String>(
                new Comparator<Integer>() {
                    public int compare(Integer o1, Integer o2) {
                        return o2.compareTo(o1);
                    }
                });
        while (values.hasNext()) {
            String[] valStrings = values.next().toString().split("~");
            emp.put(Integer.parseInt(valStrings[1]), valStrings[0]); // equal totals overwrite each other
        }
        for (Iterator<Integer> keySet = emp.keySet().iterator(); keySet.hasNext();) {
            Integer currentKey = keySet.next();
            output.collect(new Text(emp.get(currentKey)), new Text(currentKey.toString()));
        }
    }
}
Execution result:
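Treating an empty COMM as 0, the expected order is: KING 5000, FORD 3000, JONES 2975, BLAKE 2850, MARTIN 2650, CLARK 2450, ALLEN 1900, WARD 1750, TURNER 1500, MILLER 1300, JAMES 950, SMITH 800.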
IV. Summary
The models above — summation, averaging, grouped minimum, multi-table join, self-join, top N, and descending sort — cover most day-to-day MapReduce jobs; everything else is largely a combination of the same map-side tagging and reduce-side aggregation tricks.