https://www.codetd.com/article/664330

https://blog.csdn.net/dream_an/article/details/84342770

Develop a MapReduce program in IDEA and run it directly, so that the job is submitted to a remote Hadoop cluster for execution.

Short version of the flow: develop the MapReduce program locally -> configure YARN mode -> run it directly from the local machine -> the MapReduce job executes on the remote cluster.

Full version of the flow: develop the MapReduce program locally -> configure YARN mode -> build once to produce the jar file -> add job.setJar("mapreduce/build/libs/mapreduce-0.1.jar"); -> run directly in IDEA -> the MapReduce job executes on the remote cluster.
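Compared with a purely local run, only two things change: the configuration that routes submission through YARN on the remote cluster, and pointing the job at the jar produced by the first build so the remote NodeManagers can load the mapper and reducer classes. Extracted from the full source below:

Configuration conf = new Configuration();
conf.set("mapreduce.framework.name", "yarn");                     // submit through YARN instead of the local runner
conf.set("yarn.resourcemanager.address", "192.168.56.101:8050");  // ResourceManager of the remote cluster
conf.set("fs.defaultFS", "hdfs://vbusuanzi:9000/");               // HDFS of the remote cluster

Job job = Job.getInstance(conf, "test");
job.setJar("mapreduce/build/libs/mapreduce-0.1.jar");             // jar produced by the first build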

[Figure: one diagram illustrating the submission flow]

Source code
build.gradle

plugins {
    id 'java'
}

group 'com.ruizhiedu'
version '0.1'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.1.0'
    compile group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-core', version: '3.1.0'
    compile group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-jobclient', version: '3.1.0'

    testCompile group: 'junit', name: 'junit', version: '4.12'
}
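Note that the compile and testCompile configurations used above have been removed in Gradle 7 and later. On a newer Gradle, an equivalent dependencies block (a sketch with the same artifacts and versions, only the configuration names swapped) would be:

dependencies {
    implementation group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.1.0'
    implementation group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-core', version: '3.1.0'
    implementation group: 'org.apache.hadoop', name: 'hadoop-mapreduce-client-jobclient', version: '3.1.0'

    testImplementation group: 'junit', name: 'junit', version: '4.12'
}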

Java file

The input and output paths are hardcoded, so the class can be run directly; there is no need to set program arguments in the IDEA run configuration.

wc.java

package com;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.StringUtils;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.*;

/**
 * @author wangxiaolei(王小雷)
 * @since 2018/11/22
 */
public class wc {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        static enum CountersEnum { INPUT_WORDS }

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        private boolean caseSensitive;
        private Set<String> patternsToSkip = new HashSet<String>();

        private Configuration conf;
        private BufferedReader fis;

        @Override
        public void setup(Context context) throws IOException,
                InterruptedException {
            conf = context.getConfiguration();
            caseSensitive = conf.getBoolean("wordcount.case.sensitive", true);
            if (conf.getBoolean("wordcount.skip.patterns", false)) {
                URI[] patternsURIs = Job.getInstance(conf).getCacheFiles();
                for (URI patternsURI : patternsURIs) {
                    Path patternsPath = new Path(patternsURI.getPath());
                    String patternsFileName = patternsPath.getName().toString();
                    parseSkipFile(patternsFileName);
                }
            }
        }

        private void parseSkipFile(String fileName) {
            try {
                fis = new BufferedReader(new FileReader(fileName));
                String pattern = null;
                while ((pattern = fis.readLine()) != null) {
                    patternsToSkip.add(pattern);
                }
            } catch (IOException ioe) {
                System.err.println("Caught exception while parsing the cached file '"
                        + StringUtils.stringifyException(ioe));
            }
        }

        @Override
        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            String line = (caseSensitive) ?
                    value.toString() : value.toString().toLowerCase();
            for (String pattern : patternsToSkip) {
                line = line.replaceAll(pattern, "");
            }
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
                Counter counter = context.getCounter(CountersEnum.class.getName(),
                        CountersEnum.INPUT_WORDS.toString());
                counter.increment(1);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("yarn.resourcemanager.address", "192.168.56.101:8050");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("fs.defaultFS", "hdfs://vbusuanzi:9000/");
        // conf.set("mapred.jar", "mapreduce/build/libs/mapreduce-0.1.jar"); // the freshly built jar could also be set here instead
        conf.set("mapred.job.tracker", "vbusuanzi:9001");
        // conf.set("mapreduce.app-submission.cross-platform", "true"); // Windows developers need to enable cross-platform submission

        // Input and output paths are hardcoded, so no IDEA program arguments are needed
        args = new String[]{"/tmp/test/LICENSE.txt", "/tmp/test/out30"};

        GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
        String[] remainingArgs = optionParser.getRemainingArgs();
        if ((remainingArgs.length != 2) && (remainingArgs.length != 4)) {
            System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
            System.exit(2);
        }

        Job job = Job.getInstance(conf, "test");
        job.setJar("mapreduce/build/libs/mapreduce-0.1.jar");
        job.setJarByClass(com.wc.class);

        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        List<String> otherArgs = new ArrayList<String>();
        for (int i = 0; i < remainingArgs.length; ++i) {
            if ("-skip".equals(remainingArgs[i])) {
                job.addCacheFile(new Path(remainingArgs[++i]).toUri());
                job.getConfiguration().setBoolean("wordcount.skip.patterns", true);
            } else {
                otherArgs.add(remainingArgs[i]);
            }
        }
        FileInputFormat.addInputPath(job, new Path(otherArgs.get(0)));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs.get(1)));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
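Before running, the input file must already exist on the cluster's HDFS at the path hardcoded above (/tmp/test/LICENSE.txt). A minimal sketch of uploading it with the Hadoop FileSystem API follows; the class name UploadInput and the local source path LICENSE.txt are illustrative assumptions, not part of the original post:

package com;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

// Illustrative helper (assumed, not from the original post): copies a local
// LICENSE.txt into the HDFS directory that wc.java reads as its input.
public class UploadInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same NameNode address as fs.defaultFS in wc.java
        FileSystem fs = FileSystem.get(URI.create("hdfs://vbusuanzi:9000/"), conf);
        // Local source path is an assumption; adjust to wherever LICENSE.txt lives
        fs.copyFromLocalFile(new Path("LICENSE.txt"), new Path("/tmp/test/LICENSE.txt"));
        fs.close();
    }
}

Also note that the output directory (/tmp/test/out30 above) must not exist before the run; FileOutputFormat refuses to start the job if it does, so delete it or change the path between runs.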
												
