1 Location of the examples jar

[hadoop@hadoop02 mapreduce]$ pwd
/hadoop/hadoop-2.8.2/share/hadoop/mapreduce
[hadoop@hadoop02 mapreduce]$ ls -lrt
total 5084
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 jdiff
-rw-r--r-- 1 hadoop hadoop 301936 Oct 20 05:11 hadoop-mapreduce-examples-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 77142 Oct 20 05:11 hadoop-mapreduce-client-shuffle-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 1588114 Oct 20 05:11 hadoop-mapreduce-client-jobclient-2.8.2-tests.jar
-rw-r--r-- 1 hadoop hadoop 67003 Oct 20 05:11 hadoop-mapreduce-client-jobclient-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 31535 Oct 20 05:11 hadoop-mapreduce-client-hs-plugins-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 195052 Oct 20 05:11 hadoop-mapreduce-client-hs-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 1571759 Oct 20 05:11 hadoop-mapreduce-client-core-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 782757 Oct 20 05:11 hadoop-mapreduce-client-common-2.8.2.jar
-rw-r--r-- 1 hadoop hadoop 563771 Oct 20 05:11 hadoop-mapreduce-client-app-2.8.2.jar
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 sources
drwxr-xr-x 2 hadoop hadoop 29 Oct 20 05:11 lib-examples
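
The wordcount program ships inside hadoop-mapreduce-examples-2.8.2.jar. To see every example program the jar provides, you can run it with no arguments; it should print the list of valid program names, with wordcount among them:

[hadoop@hadoop02 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.8.2.jar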

2 Generate the data file

[hadoop@hadoop01 ~]$ echo "Hello World">>word.txt
[hadoop@hadoop01 ~]$ echo "Hello Hadoop">>word.txt
[hadoop@hadoop01 ~]$ echo "Hello Hive">>word.txt
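
Each echo appends one line, so word.txt should now contain exactly three lines; a quick local check before uploading:

[hadoop@hadoop01 ~]$ cat word.txt
Hello World
Hello Hadoop
Hello Hive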

3 Create the HDFS directory

[hadoop@hadoop01 ~]$ hadoop dfs -mkdir /work/data/input
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[hadoop@hadoop01 ~]$ hadoop dfs -lsr /work/data
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x - hadoop supergroup 0 2017-11-12 09:00 /work/data/input
[hadoop@hadoop01 ~]$
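
The DEPRECATED warnings appear because the old hadoop dfs entry point has been superseded. The equivalent modern commands are shown below (-p creates any missing parent directories, and ls -R replaces the old lsr):

[hadoop@hadoop01 ~]$ hdfs dfs -mkdir -p /work/data/input
[hadoop@hadoop01 ~]$ hdfs dfs -ls -R /work/data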

4 Upload the data file word.txt to the HDFS directory /work/data/input

[hadoop@hadoop01 ~]$ hadoop dfs -copyFromLocal word.txt /work/data/input
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[hadoop@hadoop01 ~]$ hadoop dfs -text /work/data/input/word.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Hello World
Hello Hadoop
Hello Hive
[hadoop@hadoop01 ~]$
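
The same step with the non-deprecated client (hdfs dfs -put is an equivalent alternative to -copyFromLocal, and -cat suffices here since the file is plain text; -text is only needed for compressed or sequence files):

[hadoop@hadoop01 ~]$ hdfs dfs -copyFromLocal word.txt /work/data/input
[hadoop@hadoop01 ~]$ hdfs dfs -cat /work/data/input/word.txt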

5 Run the wordcount example

[hadoop@hadoop01 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop01 hadoop-2.8.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar wordcount /work/data/input /work/data/output
17/11/12 09:05:14 INFO client.RMProxy: Connecting to ResourceManager at hadoop02/192.168.169.102:8032
17/11/12 09:05:15 INFO input.FileInputFormat: Total input files to process : 1
17/11/12 09:05:15 INFO mapreduce.JobSubmitter: number of splits:1
17/11/12 09:05:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1510447239720_0001
17/11/12 09:05:16 INFO impl.YarnClientImpl: Submitted application application_1510447239720_0001
17/11/12 09:05:16 INFO mapreduce.Job: The url to track the job: http://hadoop02:8088/proxy/application_1510447239720_0001/
17/11/12 09:05:16 INFO mapreduce.Job: Running job: job_1510447239720_0001
17/11/12 09:05:25 INFO mapreduce.Job: Job job_1510447239720_0001 running in uber mode : false
17/11/12 09:05:25 INFO mapreduce.Job: map 0% reduce 0%
17/11/12 09:05:35 INFO mapreduce.Job: map 100% reduce 0%
17/11/12 09:05:40 INFO mapreduce.Job: map 100% reduce 100%
17/11/12 09:05:41 INFO mapreduce.Job: Job job_1510447239720_0001 completed successfully
17/11/12 09:05:41 INFO mapreduce.Job: Counters: 49
File System Counters
    FILE: Number of bytes read=53
    FILE: Number of bytes written=276955
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=152
    HDFS: Number of bytes written=31
    HDFS: Number of read operations=6
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters
    Launched map tasks=1
    Launched reduce tasks=1
    Data-local map tasks=1
    Total time spent by all maps in occupied slots (ms)=5860
    Total time spent by all reduces in occupied slots (ms)=3296
    Total time spent by all map tasks (ms)=5860
    Total time spent by all reduce tasks (ms)=3296
    Total vcore-milliseconds taken by all map tasks=5860
    Total vcore-milliseconds taken by all reduce tasks=3296
    Total megabyte-milliseconds taken by all map tasks=6000640
    Total megabyte-milliseconds taken by all reduce tasks=3375104
Map-Reduce Framework
    Map input records=3
    Map output records=6
    Map output bytes=59
    Map output materialized bytes=53
    Input split bytes=117
    Combine input records=6
    Combine output records=4
    Reduce input groups=4
    Reduce shuffle bytes=53
    Reduce input records=4
    Reduce output records=4
    Spilled Records=8
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=224
    CPU time spent (ms)=2190
    Physical memory (bytes) snapshot=443719680
    Virtual memory (bytes) snapshot=4207517696
    Total committed heap usage (bytes)=293076992
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=35
File Output Format Counters
    Bytes Written=31
[hadoop@hadoop01 hadoop-2.8.2]$
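
The counters line up with the input: Map input records=3 (three lines), Map output records=6 (six words), and Reduce output records=4 (four distinct words, after the combiner merges the three occurrences of Hello). Note that MapReduce refuses to write into an existing output directory, so to rerun the job you must first remove /work/data/output:

[hadoop@hadoop01 hadoop-2.8.2]$ hdfs dfs -rm -r /work/data/output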

6 View the results

[hadoop@hadoop01 hadoop-2.8.2]$ hadoop dfs -lsr /work/data/output
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
-rw-r--r-- 2 hadoop supergroup 0 2017-11-12 09:05 /work/data/output/_SUCCESS
-rw-r--r-- 2 hadoop supergroup 31 2017-11-12 09:05 /work/data/output/part-r-00000
[hadoop@hadoop01 hadoop-2.8.2]$ hadoop dfs -text /work/data/output/part-r-00000
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Hadoop 1
Hello 3
Hive 1
World 1
[hadoop@hadoop01 hadoop-2.8.2]$
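
_SUCCESS is an empty marker file written when the job completes successfully; the actual word counts live in part-r-00000, the output of reducer 0. The non-deprecated way to read it:

[hadoop@hadoop01 hadoop-2.8.2]$ hdfs dfs -cat /work/data/output/part-r-00000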
