1 Location of the examples jar

[hadoop@hadoop02 mapreduce]$ pwd
/hadoop/hadoop-2.8.2/share/hadoop/mapreduce
[hadoop@hadoop02 mapreduce]$ ls -lrt
total 5084
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 jdiff
-rw-r--r-- 1 hadoop hadoop 301936 Oct 20 05:11 hadoop-mapreduce-examples-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 77142 Oct 20 05:11 hadoop-mapreduce-client-shuffle-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 1588114 Oct 20 05:11 hadoop-mapreduce-client-jobclient-2.8.2-tests.jar
-rw-r--r-- 1 hadoop hadoop 67003 Oct 20 05:11 hadoop-mapreduce-client-jobclient-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 31535 Oct 20 05:11 hadoop-mapreduce-client-hs-plugins-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 195052 Oct 20 05:11 hadoop-mapreduce-client-hs-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 1571759 Oct 20 05:11 hadoop-mapreduce-client-core-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 782757 Oct 20 05:11 hadoop-mapreduce-client-common-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 563771 Oct 20 05:11 hadoop-mapreduce-client-app-2.8.2.jar
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 sources
drwxr-xr-x 2 hadoop hadoop 29 Oct 20 05:11 lib-examples
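hadoop-mapreduce-examples-2.8.2.jar is the jar used below. As a quick check (not part of the original run), running the jar without arguments should print the list of bundled example programs, with wordcount among them:

[hadoop@hadoop02 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.8.2.jar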

2 Generate the data file

[hadoop@hadoop01 ~]$ echo "Hello World">>word.txt
[hadoop@hadoop01 ~]$ echo "Hello Hadoop">>word.txt
[hadoop@hadoop01 ~]$ echo "Hello Hive">>word.txt

3 Create the HDFS directory

[hadoop@hadoop01 ~]$ hadoop dfs -mkdir /work/data/input
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[hadoop@hadoop01 ~]$ hadoop dfs -lsr /work/data
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x - hadoop supergroup 0 2017-11-12 09:00 /work/data/input
[hadoop@hadoop01 ~]$
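As the DEPRECATED warnings suggest, the hdfs command is the preferred entry point; the equivalent non-deprecated commands would look roughly like this (-p also creates missing parent directories):

[hadoop@hadoop01 ~]$ hdfs dfs -mkdir -p /work/data/input
[hadoop@hadoop01 ~]$ hdfs dfs -ls -R /work/data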

4 Upload the data file word.txt to the HDFS directory /work/data/input

[hadoop@hadoop01 ~]$ hadoop dfs -copyFromLocal word.txt /work/data/input
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[hadoop@hadoop01 ~]$ hadoop dfs -text /work/data/input/word.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Hello World
Hello Hadoop
Hello Hive
[hadoop@hadoop01 ~]$
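Again, the non-deprecated form of the upload and the content check would be along these lines:

[hadoop@hadoop01 ~]$ hdfs dfs -put word.txt /work/data/input
[hadoop@hadoop01 ~]$ hdfs dfs -cat /work/data/input/word.txt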

5 Run the wordcount example

[hadoop@hadoop01 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop01 hadoop-2.8.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar wordcount /work/data/input /work/data/output
17/11/12 09:05:14 INFO client.RMProxy: Connecting to ResourceManager at hadoop02/192.168.169.102:8032
17/11/12 09:05:15 INFO input.FileInputFormat: Total input files to process : 1
17/11/12 09:05:15 INFO mapreduce.JobSubmitter: number of splits:1
17/11/12 09:05:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1510447239720_0001
17/11/12 09:05:16 INFO impl.YarnClientImpl: Submitted application application_1510447239720_0001
17/11/12 09:05:16 INFO mapreduce.Job: The url to track the job: http://hadoop02:8088/proxy/application_1510447239720_0001/
17/11/12 09:05:16 INFO mapreduce.Job: Running job: job_1510447239720_0001
17/11/12 09:05:25 INFO mapreduce.Job: Job job_1510447239720_0001 running in uber mode : false
17/11/12 09:05:25 INFO mapreduce.Job: map 0% reduce 0%
17/11/12 09:05:35 INFO mapreduce.Job: map 100% reduce 0%
17/11/12 09:05:40 INFO mapreduce.Job: map 100% reduce 100%
17/11/12 09:05:41 INFO mapreduce.Job: Job job_1510447239720_0001 completed successfully
17/11/12 09:05:41 INFO mapreduce.Job: Counters: 49
  File System Counters
    FILE: Number of bytes read=53
    FILE: Number of bytes written=276955
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=152
    HDFS: Number of bytes written=31
    HDFS: Number of read operations=6
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
  Job Counters
    Launched map tasks=1
    Launched reduce tasks=1
    Data-local map tasks=1
    Total time spent by all maps in occupied slots (ms)=5860
    Total time spent by all reduces in occupied slots (ms)=3296
    Total time spent by all map tasks (ms)=5860
    Total time spent by all reduce tasks (ms)=3296
    Total vcore-milliseconds taken by all map tasks=5860
    Total vcore-milliseconds taken by all reduce tasks=3296
    Total megabyte-milliseconds taken by all map tasks=6000640
    Total megabyte-milliseconds taken by all reduce tasks=3375104
  Map-Reduce Framework
    Map input records=3
    Map output records=6
    Map output bytes=59
    Map output materialized bytes=53
    Input split bytes=117
    Combine input records=6
    Combine output records=4
    Reduce input groups=4
    Reduce shuffle bytes=53
    Reduce input records=4
    Reduce output records=4
    Spilled Records=8
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=224
    CPU time spent (ms)=2190
    Physical memory (bytes) snapshot=443719680
    Virtual memory (bytes) snapshot=4207517696
    Total committed heap usage (bytes)=293076992
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=35
  File Output Format Counters
    Bytes Written=31
[hadoop@hadoop01 hadoop-2.8.2]$
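One thing to keep in mind when rerunning the job: MapReduce refuses to write into an output directory that already exists, so /work/data/output has to be removed first, for example:

[hadoop@hadoop01 hadoop-2.8.2]$ hdfs dfs -rm -r /work/data/output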

6 View the results

[hadoop@hadoop01 hadoop-2.8.2]$ hadoop dfs -lsr /work/data/output
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
-rw-r--r-- 2 hadoop supergroup 0 2017-11-12 09:05 /work/data/output/_SUCCESS
-rw-r--r-- 2 hadoop supergroup 31 2017-11-12 09:05 /work/data/output/part-r-00000
[hadoop@hadoop01 hadoop-2.8.2]$ hadoop dfs -text /work/data/output/part-r-00000
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Hadoop 1
Hello 3
Hive 1
World 1
[hadoop@hadoop01 hadoop-2.8.2]$
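To pull the result down as a single local file, -getmerge concatenates all part files from the output directory; result.txt here is just an arbitrary local file name:

[hadoop@hadoop01 hadoop-2.8.2]$ hdfs dfs -getmerge /work/data/output result.txt
[hadoop@hadoop01 hadoop-2.8.2]$ cat result.txt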
