We've seen the internals of MapReduce in the last post. Now we can make a small change to the WordCount program and create a JAR to be executed by Hadoop. 
If we look at the result of the WordCount we ran before, the lines of the file are split only by spaces, so all punctuation characters and symbols remain attached to the words and effectively invalidate the count. For example, we can see some of these wrong values:

KING 1
KING'S 1
KING. 5
King 6
King, 4
King. 2
King; 1

The word is always king, but sometimes it appears in upper case, sometimes in lower case, and sometimes with a punctuation character after it. So we'd like to update the behaviour of the WordCount program to count all the occurrences of any word, regardless of punctuation and other symbols. To do so, we have to modify the code of the mapper, since that's where we get the data from the file and split it. 
If we look at the code of the mapper:

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

we see that the StringTokenizer takes the line as its parameter. Before passing the line to it, we remove all the symbols from the line using a regular expression that maps each of these symbols to a space:

String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");

that simply says "if you see any of these characters _, |, $, #, <, >, [, ], *, /, \, ,, ;, ., -, :, (, ), ?, !, ", ' transform it into a space". All the backslashes are needed to correctly escape the characters. Then we trim each token to avoid empty tokens:

itr.nextToken().trim()
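As a quick sanity check outside Hadoop, the cleaning regex can be tried in plain Java; the sample sentence here is made up for illustration:

```java
// Standalone sketch of the cleaning step: lowercase the line, then map
// every punctuation character and symbol in the class to a space.
public class CleanLineDemo {
    public static void main(String[] args) {
        String line = "KING'S palace; the King, the king.";
        String cleanLine = line.toLowerCase()
                .replaceAll("[_|$#<>\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
        // The apostrophe, semicolon, comma and period all become spaces.
        System.out.println(cleanLine);
    }
}
```

The extra spaces left behind are harmless, because StringTokenizer treats consecutive delimiters as one.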

So now the code looks like this:

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String cleanLine = value.toString().toLowerCase().replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
        StringTokenizer itr = new StringTokenizer(cleanLine);
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken().trim());
            context.write(word, one);
        }
    }
}
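The whole map-side logic (clean, tokenize, count) can be sketched in plain Java without Hadoop, using a HashMap in place of the framework's shuffle-and-reduce phase; the input lines here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Local sketch of the clean-then-tokenize-then-count pipeline,
// with a HashMap standing in for Hadoop's shuffle and reduce.
public class LocalWordCount {
    public static void main(String[] args) {
        String[] lines = { "KING. KING'S men;", "the King, the king" };
        Map<String, Integer> counts = new HashMap<>();
        for (String value : lines) {
            String cleanLine = value.toLowerCase()
                    .replaceAll("[_|$#<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']", " ");
            StringTokenizer itr = new StringTokenizer(cleanLine);
            while (itr.hasMoreTokens()) {
                counts.merge(itr.nextToken().trim(), 1, Integer::sum);
            }
        }
        // All four variants of "king" now collapse into a single key.
        System.out.println(counts.get("king"));
    }
}
```

Every variant (KING., KING'S, King,, king) collapses into the single key king, which is exactly the behaviour we want from the real job.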

The complete code of the class is available on my GitHub repository.

We now want to create the JAR to be executed. To do that, we go to the output directory of our WordCount build (the directory that contains the .class files) and create a manifest file named manifest_file that contains something like this:

Manifest-Version: 1.0
Main-Class: samples.wordcount.WordCount

in which we tell the JAR which class to execute at startup. More details are available in the official Java documentation. Note that there's no need to add classpath info, because the JAR will be run by Hadoop, which already has a classpath containing all the needed libraries. 
Now we can launch the command:

$ jar cmf manifest_file wordcount.jar .

which creates a JAR named wordcount.jar containing all the classes starting from the current directory (the . parameter) and uses the content of manifest_file to create the Manifest. 
We can run the job as we saw before. Looking at the result file, we can check that there are no more symbols or punctuation characters; the count for the word king is now the sum of all the occurrences we found before:

king 20

which is what we were looking for.

from: http://andreaiacono.blogspot.com/2014/02/creating-jar-for-hadoop.html
