Quick notes:

Maven is a project management tool; project information is set up through XML configuration.

Maven POM (Project Object Model).

Steps:

1. Set up and configure the development environment.

2. Write your map and reduce functions and run them in local (standalone) mode from the command line or within your IDE.

3. Unit test --> test on a small dataset --> test on the full dataset after unleashing it on a cluster

--> tuning

1. Configuration API

  • Components in Hadoop are configured using Hadoop’s own configuration API.
  • org.apache.hadoop.conf package
  • Configurations read their properties from resources — XML files with a simple structure for defining name-value pairs.

For example, a file configuration-1.xml might look like this:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>color</name>
    <value>yellow</value>
    <description>Color</description>
  </property>
  <property>
    <name>size</name>
    <value>10</value>
    <description>Size</description>
  </property>
  <property>
    <name>weight</name>
    <value>heavy</value>
    <final>true</final>
    <description>Weight</description>
  </property>
  <property>
    <name>size-weight</name>
    <value>${size},${weight}</value>
    <description>Size and weight</description>
  </property>
</configuration>

Then access it from code as follows:

Configuration conf = new Configuration();
conf.addResource("configuration-1.xml");
conf.addResource("configuration-2.xml"); // resources are added in order; properties in later ones override earlier ones

assertThat(conf.get("color"), is("yellow"));
assertThat(conf.getInt("size", 0), is(10));
assertThat(conf.get("breadth", "wide"), is("wide"));

Note:

  • type information is not stored in the XML file;
  • instead, properties can be interpreted as a given type when they are read.
  • Also, the get() methods allow you to specify a default value, which is used if the property is not defined in the XML file, as in the case of breadth here.
  • when more than one resource is added, they are processed in order, and properties defined in later resources override those defined earlier.
  • However, properties that are marked as final cannot be overridden in later definitions.
  • system properties take priority over properties defined in resource files:
System.setProperty("size", "14")
  • Options specified with -D take priority over properties from the configuration files.

For example, passing -D mapreduce.job.reduces=n on the command line will override the number of reducers set on the cluster or in any client-side configuration files.

% hadoop ConfigurationPrinter -D color=yellow | grep color
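Returning to the system-properties bullet above, here is a short sketch of the precedence rules, continuing the conf example from earlier (same assertThat style):

// System properties win when values from resources are expanded:
System.setProperty("size", "14");
assertThat(conf.get("size-weight"), is("14,heavy"));

// But a system property with no matching definition in a resource
// is not visible through the Configuration API:
System.setProperty("length", "2");
assertThat(conf.get("length"), is((String) null));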


2. Set up dev environment

The Maven POM (Project Object Model) declares the dependencies needed for building and testing MapReduce programs. It is just an XML file; a dependencies sketch follows the list below.

  • The hadoop-client dependency contains all the Hadoop client-side classes needed to interact with HDFS and MapReduce.
  • For running unit tests, we use junit;
  • for writing MapReduce tests, we use mrunit.
  • The hadoop-minicluster library contains the “mini-” clusters that are useful for testing with Hadoop clusters running in a single JVM.
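A minimal sketch of the dependencies section of such a POM; the version numbers here are my assumptions and should be matched to your Hadoop release:

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.5.1</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.mrunit</groupId>
    <artifactId>mrunit</artifactId>
    <version>1.1.0</version>
    <classifier>hadoop2</classifier>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minicluster</artifactId>
    <version>2.5.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>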

Many IDEs can read Maven POMs directly, so you can just point them at the directory containing the pom.xml file and start writing code.

Alternatively, you can use Maven to generate configuration files for your IDE. For example, the following creates Eclipse configuration files so you can import the project into Eclipse:

% mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true

3. Managing configuration switching

It is common to switch between running the application locally and running it on a cluster.

  • have Hadoop configuration files containing the connection settings for each cluster
  • we assume the existence of a directory called conf that contains three configuration files: hadoop-local.xml, hadoop-localhost.xml, and hadoop-cluster.xml
  • For example, the following command shows a directory listing on the HDFS server running in pseudo-distributed mode on localhost:


% hadoop fs -conf conf/hadoop-localhost.xml -ls

Found 2 items
drwxr-xr-x - tom supergroup 0 2014-09-08 10:19 input
drwxr-xr-x - tom supergroup 0 2014-09-08 10:19 output
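As a sketch, hadoop-local.xml selects the default (local) filesystem and the local job runner:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>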

4. MapReduce example:

Mapper: extracts the year and temperature from an input line

import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = value.toString();
    String year = line.substring(15, 19);
    int airTemperature = Integer.parseInt(line.substring(87, 92));
    context.write(new Text(year), new IntWritable(airTemperature));
  }
}

Unit test for the Mapper:

import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.*;

public class MaxTemperatureMapperTest {

  @Test
  public void processesValidRecord() throws IOException, InterruptedException {
    Text value = new Text("0043011990999991950051518004+68750+023550FM-12+0382" +
                                  // Year ^^^^
        "99999V0203201N00261220001CN9999999N9-00111+99999999999");
                                  // Temperature ^^^^^
    new MapDriver<LongWritable, Text, Text, IntWritable>()
        .withMapper(new MaxTemperatureMapper())
        .withInput(new LongWritable(0), value)
        .withOutput(new Text("1950"), new IntWritable(-11))
        .runTest();
  }
}

Reducer: finds the maximum temperature

import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int maxValue = Integer.MIN_VALUE;
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}

Unit test for the Reducer:

@Test
public void returnsMaximumIntegerInValues() throws IOException, InterruptedException {
  new ReduceDriver<Text, IntWritable, Text, IntWritable>()
      .withReducer(new MaxTemperatureReducer())
      .withInput(new Text("1950"),
          Arrays.asList(new IntWritable(10), new IntWritable(5)))
      .withOutput(new Text("1950"), new IntWritable(10))
      .runTest();
}

5. Write a job driver

Using the Tool interface, it's easy to write a driver to run a MapReduce job.
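The commands below refer to v2.MaxTemperatureDriver; here is a minimal sketch of such a driver, reconstructed from the standard Tool/ToolRunner pattern rather than quoted verbatim:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MaxTemperatureDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    if (args.length != 2) {
      System.err.printf("Usage: %s [generic options] <input> <output>\n",
          getClass().getSimpleName());
      ToolRunner.printGenericCommandUsage(System.err);
      return -1;
    }

    // getConf() picks up whatever -conf/-fs/-jt/-D options ToolRunner parsed
    Job job = Job.getInstance(getConf(), "Max temperature");
    job.setJarByClass(getClass());

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setReducerClass(MaxTemperatureReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MaxTemperatureDriver(), args));
  }
}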

Then run the driver locally.

% mvn compile
% export HADOOP_CLASSPATH=target/classes/
% hadoop v2.MaxTemperatureDriver -conf conf/hadoop-local.xml \
input/ncdc/micro output

% hadoop v2.MaxTemperatureDriver -fs file:/// -jt local input/ncdc/micro output

The local job runner uses a single JVM to run a job, so as long as all the classes that your job needs are on its classpath, things will just work.

6. Running on a cluster

  • a job’s classes must be packaged into a job JAR file to send to the cluster
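For example, with Maven, a simple way to create the job JAR (skipping the tests, since they have already been run locally) is:

% mvn package -DskipTests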
