Running MapReduce - java.lang.InterruptedException
Error log:
2018-11-19 05:23:51,686 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-11-19 05:23:52,595 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1181)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2018-11-19 05:23:52,596 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(79)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2018-11-19 05:23:53,215 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-11-19 05:23:53,289 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(289)) - Total input files to process : 1
2018-11-19 05:23:53,375 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(200)) - number of splits:1
2018-11-19 05:23:53,684 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(289)) - Submitting tokens for job: job_local2033293629_0001
2018-11-19 05:23:54,051 INFO [main] mapreduce.Job (Job.java:submit(1345)) - The url to track the job: http://localhost:8080/
2018-11-19 05:23:54,052 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1390)) - Running job: job_local2033293629_0001
2018-11-19 05:23:54,053 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(498)) - OutputCommitter set in config null
2018-11-19 05:23:54,059 INFO [Thread-23] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-11-19 05:23:54,061 INFO [Thread-23] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-11-19 05:23:54,070 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(516)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-11-19 05:23:54,156 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(475)) - Waiting for map tasks
2018-11-19 05:23:54,157 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(251)) - Starting task: attempt_local2033293629_0001_m_000000_0
2018-11-19 05:23:54,185 INFO [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-11-19 05:23:54,187 INFO [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-11-19 05:23:54,219 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(619)) - Using ResourceCalculatorProcessTree : [ ]
2018-11-19 05:23:54,224 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(756)) - Processing split: hdfs://hadoop1:9000/Input/test.txt:0+1149413
2018-11-19 05:23:54,303 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1205)) - (EQUATOR) 0 kvi 26214396(104857584)
2018-11-19 05:23:54,303 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - mapreduce.task.io.sort.mb: 100
2018-11-19 05:23:54,304 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(999)) - soft limit at 83886080
2018-11-19 05:23:54,304 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1000)) - bufstart = 0; bufvoid = 104857600
2018-11-19 05:23:54,304 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1001)) - kvstart = 26214396; length = 6553600
2018-11-19 05:23:54,305 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(403)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2018-11-19 05:23:54,983 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) -
2018-11-19 05:23:54,987 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1462)) - Starting flush of map output
2018-11-19 05:23:54,987 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1484)) - Spilling map output
2018-11-19 05:23:54,987 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1485)) - bufstart = 0; bufend = 1574408; bufvoid = 104857600
2018-11-19 05:23:54,988 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1487)) - kvstart = 26214396(104857584); kvend = 25789404(103157616); length = 424993/6553600
2018-11-19 05:23:55,054 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1411)) - Job job_local2033293629_0001 running in uber mode : false
2018-11-19 05:23:55,055 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1418)) - map 0% reduce 0%
2018-11-19 05:23:55,483 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1669)) - Finished spill 0
2018-11-19 05:23:55,496 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1099)) - Task:attempt_local2033293629_0001_m_000000_0 is done. And is in the process of committing
2018-11-19 05:23:55,516 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - map
2018-11-19 05:23:55,516 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1219)) - Task 'attempt_local2033293629_0001_m_000000_0' done.
2018-11-19 05:23:55,516 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(276)) - Finishing task: attempt_local2033293629_0001_m_000000_0
2018-11-19 05:23:55,517 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(483)) - map task executor complete.
2018-11-19 05:23:55,518 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(475)) - Waiting for reduce tasks
2018-11-19 05:23:55,518 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(329)) - Starting task: attempt_local2033293629_0001_r_000000_0
2018-11-19 05:23:55,530 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-11-19 05:23:55,530 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-11-19 05:23:55,531 INFO [pool-7-thread-1] mapred.Task (Task.java:initialize(619)) - Using ResourceCalculatorProcessTree : [ ]
2018-11-19 05:23:55,542 INFO [pool-7-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2de8a214
2018-11-19 05:23:55,564 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(206)) - MergerManager: memoryLimit=678356544, maxSingleShuffleLimit=169589136, mergeThreshold=447715328, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2018-11-19 05:23:55,574 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local2033293629_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2018-11-19 05:23:55,690 INFO [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(145)) - localfetcher#1 about to shuffle output of map attempt_local2033293629_0001_m_000000_0 decomp: 569824 len: 569828 to MEMORY
2018-11-19 05:23:55,702 INFO [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:doShuffle(93)) - Read 569824 bytes from map-output for attempt_local2033293629_0001_m_000000_0
2018-11-19 05:23:55,706 INFO [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(321)) - closeInMemoryFile -> map-output of size: 569824, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->569824
2018-11-19 05:23:55,707 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2018-11-19 05:23:55,708 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:55,708 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(693)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2018-11-19 05:23:55,722 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-11-19 05:23:55,729 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 569809 bytes
2018-11-19 05:23:55,838 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(760)) - Merged 1 segments, 569824 bytes to disk to satisfy reduce memory limit
2018-11-19 05:23:55,844 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(790)) - Merging 1 files, 569828 bytes from disk
2018-11-19 05:23:55,848 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(805)) - Merging 0 segments, 0 bytes from memory into reduce
2018-11-19 05:23:55,850 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-11-19 05:23:55,852 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 569809 bytes
2018-11-19 05:23:55,857 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:55,901 INFO [pool-7-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1181)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2018-11-19 05:23:56,056 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1418)) - map 100% reduce 0%
2018-11-19 05:23:56,084 INFO [pool-7-thread-1] mapred.Task (Task.java:done(1099)) - Task:attempt_local2033293629_0001_r_000000_0 is done. And is in the process of committing
2018-11-19 05:23:56,089 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:56,091 INFO [pool-7-thread-1] mapred.Task (Task.java:commit(1260)) - Task attempt_local2033293629_0001_r_000000_0 is allowed to commit now
2018-11-19 05:23:56,114 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(582)) - Saved output of task 'attempt_local2033293629_0001_r_000000_0' to hdfs://hadoop1:9000/Output/_temporary/0/task_local2033293629_0001_r_000000
2018-11-19 05:23:56,120 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - reduce > reduce
2018-11-19 05:23:56,120 INFO [pool-7-thread-1] mapred.Task (Task.java:sendDone(1219)) - Task 'attempt_local2033293629_0001_r_000000_0' done.
2018-11-19 05:23:56,120 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(352)) - Finishing task: attempt_local2033293629_0001_r_000000_0
2018-11-19 05:23:56,120 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(329)) - Starting task: attempt_local2033293629_0001_r_000001_0
2018-11-19 05:23:56,127 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(123)) - File Output Committer Algorithm version is 1
2018-11-19 05:23:56,127 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:<init>(138)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-11-19 05:23:56,128 INFO [pool-7-thread-1] mapred.Task (Task.java:initialize(619)) - Using ResourceCalculatorProcessTree : [ ]
2018-11-19 05:23:56,128 INFO [pool-7-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2bfb9fd5
2018-11-19 05:23:56,133 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(206)) - MergerManager: memoryLimit=678356544, maxSingleShuffleLimit=169589136, mergeThreshold=447715328, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2018-11-19 05:23:56,138 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local2033293629_0001_r_000001_0 Thread started: EventFetcher for fetching Map Completion Events
2018-11-19 05:23:56,151 INFO [localfetcher#2] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(145)) - localfetcher#2 about to shuffle output of map attempt_local2033293629_0001_m_000000_0 decomp: 1217086 len: 1217090 to MEMORY
2018-11-19 05:23:56,153 INFO [localfetcher#2] reduce.InMemoryMapOutput (InMemoryMapOutput.java:doShuffle(93)) - Read 1217086 bytes from map-output for attempt_local2033293629_0001_m_000000_0
2018-11-19 05:23:56,157 INFO [localfetcher#2] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(321)) - closeInMemoryFile -> map-output of size: 1217086, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1217086
2018-11-19 05:23:56,157 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2018-11-19 05:23:56,158 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:56,158 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(693)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2018-11-19 05:23:56,160 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-11-19 05:23:56,161 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 1217075 bytes
2018-11-19 05:23:56,303 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(760)) - Merged 1 segments, 1217086 bytes to disk to satisfy reduce memory limit
2018-11-19 05:23:56,305 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(790)) - Merging 1 files, 1217090 bytes from disk
2018-11-19 05:23:56,307 INFO [pool-7-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(805)) - Merging 0 segments, 0 bytes from memory into reduce
2018-11-19 05:23:56,307 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(606)) - Merging 1 sorted segments
2018-11-19 05:23:56,308 INFO [pool-7-thread-1] mapred.Merger (Merger.java:merge(705)) - Down to the last merge-pass, with 1 segments left of total size: 1217075 bytes
2018-11-19 05:23:56,313 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:56,529 WARN [DataStreamer for file /Output/_temporary/0/_temporary/attempt_local2033293629_0001_r_000001_0/part-r-00001] hdfs.DataStreamer (DataStreamer.java:closeResponder(929)) - Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:927)
at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:578)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:755)
2018-11-19 05:23:56,531 INFO [pool-7-thread-1] mapred.Task (Task.java:done(1099)) - Task:attempt_local2033293629_0001_r_000001_0 is done. And is in the process of committing
2018-11-19 05:23:56,534 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - 1 / 1 copied.
2018-11-19 05:23:56,534 INFO [pool-7-thread-1] mapred.Task (Task.java:commit(1260)) - Task attempt_local2033293629_0001_r_000001_0 is allowed to commit now
2018-11-19 05:23:56,543 INFO [pool-7-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(582)) - Saved output of task 'attempt_local2033293629_0001_r_000001_0' to hdfs://hadoop1:9000/Output/_temporary/0/task_local2033293629_0001_r_000001
2018-11-19 05:23:56,548 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(618)) - reduce > reduce
2018-11-19 05:23:56,549 INFO [pool-7-thread-1] mapred.Task (Task.java:sendDone(1219)) - Task 'attempt_local2033293629_0001_r_000001_0' done.
2018-11-19 05:23:56,549 INFO [pool-7-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(352)) - Finishing task: attempt_local2033293629_0001_r_000001_0
2018-11-19 05:23:56,549 INFO [Thread-23] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(483)) - reduce task executor complete.
2018-11-19 05:23:57,057 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1418)) - map 100% reduce 100%
2018-11-19 05:23:57,058 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1429)) - Job job_local2033293629_0001 completed successfully
2018-11-19 05:23:57,089 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1436)) - Counters: 35
File System Counters
FILE: Number of bytes read=4750713
FILE: Number of bytes written=8708886
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=3448239
HDFS: Number of bytes written=472
HDFS: Number of read operations=27
HDFS: Number of large read operations=0
HDFS: Number of write operations=12
Map-Reduce Framework
Map input records=101421
Map output records=106249
Map output bytes=1574408
Map output materialized bytes=1786918
Input split bytes=99
Combine input records=0
Combine output records=0
Reduce input groups=23
Reduce shuffle bytes=1786918
Reduce input records=106249
Reduce output records=23
Spilled Records=212498
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=37
Total committed heap usage (bytes)=497233920
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1149413
File Output Format Counters
Bytes Written=361
OK
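The InterruptedException in the log above is thrown from Thread.join() inside DataStreamer.closeResponder(): the responder thread appears to be interrupted while a block is being closed, yet the job still reports "completed successfully", so this is a warning rather than a failure. A minimal, Hadoop-free sketch of the same join()/interrupt() pattern (the class name InterruptDemo is invented for illustration):

```java
// Minimal reproduction of the pattern in the stack trace:
// a thread blocked in Thread.join() is interrupted, and
// join() throws InterruptedException.
public class InterruptDemo {
    public static boolean joinInterrupted() throws Exception {
        Thread sleeper = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        });
        sleeper.start();

        final Thread joiner = Thread.currentThread();
        // Interrupt the joining thread shortly after it blocks in join().
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            joiner.interrupt();
        }).start();

        try {
            sleeper.join();      // blocks, like DataStreamer.closeResponder()
            return false;
        } catch (InterruptedException e) {
            sleeper.interrupt(); // wake the sleeper so the JVM can exit
            return true;         // join() was interrupted, as in the log
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("join interrupted: " + joinInterrupted());
    }
}
```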
The code that produced the warning:
package WordCount;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountMain {

    public static final String HADOOP_ROOT_PATH = "hdfs://hadoop1:9000";
    public static final String HADOOP_INPUT_PATH = "hdfs://hadoop1:9000/Input";
    public static final String HADOOP_OUTPUT_PATH = "hdfs://hadoop1:9000/Output";

    public static void main(String[] args) throws IOException,
            URISyntaxException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // 1. Set the default file system the job accesses at runtime
        //conf.set("fs.defaultFS", HADOOP_ROOT_PATH);
        // 2. Set where the job is submitted to run
        //conf.set("mapreduce.framework.name", "yarn");
        //conf.set("yarn.resourcemanager.hostname", "hadoop1");
        // 3. Needed for cross-platform submission when the client runs on Windows
        //conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf);

        // 1. Job parameter: location of the jar
        job.setJar("/home/hadoop/wordcount.jar");
        //job.setJarByClass(WordCountMain.class);

        // 2. Job parameters: the Mapper and Reducer implementation classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        // 3. Job parameters: key/value types produced by the Mapper and Reducer
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 4. Job parameters: input path and final output path
        // Note: the output path must not already exist
        Path output = new Path(HADOOP_OUTPUT_PATH);
        FileSystem fs = FileSystem.get(new URI(HADOOP_ROOT_PATH), conf);
        if (fs.exists(output)) {
            fs.delete(output, true);
        }
        FileInputFormat.setInputPaths(job, new Path(HADOOP_INPUT_PATH));
        FileOutputFormat.setOutputPath(job, output);

        // 5. Job parameter: number of reduce tasks to start
        job.setNumReduceTasks(2);

        // 6. Submit the job to YARN and wait for completion
        boolean res = job.waitForCompletion(true);
        System.out.println("OK");
        System.exit(res ? 0 : -1);
    }
}
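The log also warns: "Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this." A hedged sketch of that change (not the author's code; the class name WordCountDriver is invented, and Hadoop must be on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration populated by ToolRunner
        Job job = Job.getInstance(getConf());
        // ... same job setup as in WordCountMain above ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options (-D, -fs, -jt, ...) before run()
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}
```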
Fix: specify the address the job is submitted to.
Uncomment the following code:
// 2. Set where the job is submitted to run
conf.set("mapreduce.framework.name", "yarn");
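Applied to the code above, the submission block would look like this (a sketch; the fs.defaultFS and resourcemanager-hostname lines are taken from the commented-out code in the original and may also need enabling when submitting to YARN):

```java
Configuration conf = new Configuration();
// Default file system the job accesses at runtime
conf.set("fs.defaultFS", "hdfs://hadoop1:9000");
// Submit the job to YARN instead of the LocalJobRunner
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.hostname", "hadoop1");
```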