Hadoop Basics: Configuring the History Server

                                    Author: 尹正杰 (Yin Zhengjie)

Copyright notice: This is an original work. Reproduction is not permitted; violations will be pursued legally.

   Hadoop ships with a history server through which you can inspect completed MapReduce jobs: how many map tasks and reduce tasks were used, when the job was submitted, when it started, when it finished, and so on. By default the history server is not running; it is started with the bundled mr-jobhistory-daemon.sh script.
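   For reference, the start command used later in this post, together with its matching stop command (assuming the Hadoop 2.x sbin scripts are on the PATH, as they are in this environment):

# Start the JobHistory Server on the node that should host it
mr-jobhistory-daemon.sh start historyserver

# Stop it again when needed
mr-jobhistory-daemon.sh stop historyserver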

I. Running a MapReduce job on YARN

1>. Start the cluster

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
ResourceManager
NameNode
Jps
DFSZKFailoverController
Command executed successfully
============= s102 jps ============
DataNode
JournalNode
NodeManager
Jps
QuorumPeerMain
Command executed successfully
============= s103 jps ============
DataNode
JournalNode
NodeManager
QuorumPeerMain
Jps
Command executed successfully
============= s104 jps ============
NodeManager
Jps
QuorumPeerMain
DataNode
JournalNode
Command executed successfully
============= s105 jps ============
Jps
NameNode
DFSZKFailoverController
Command executed successfully
[yinzhengjie@s101 ~]$
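   The post does not show the start commands themselves (xcall.sh is the author's own parallel-ssh wrapper that just runs jps everywhere). On a Hadoop 2.x HA cluster like this one, the bring-up would typically look like the following sketch; the script names are the stock sbin ones, which is an assumption about how this cluster was started:

# Start ZooKeeper (the QuorumPeerMain processes) on each ZK node
zkServer.sh start

# Start HDFS (NameNodes, DataNodes, JournalNodes, ZKFCs) and YARN from s101
start-dfs.sh
start-yarn.sh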

2>. Run a MapReduce job on YARN

[yinzhengjie@s101 ~]$ hadoop jar /soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7..jar wordcount /yinzhengjie/data/ /yinzhengjie/data/output
// :: INFO client.RMProxy: Connecting to ResourceManager at s101/172.30.1.101:
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1534851274873_0001
// :: INFO impl.YarnClientImpl: Submitted application application_1534851274873_0001
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1534851274873_0001/
// :: INFO mapreduce.Job: Running job: job_1534851274873_0001
// :: INFO mapreduce.Job: Job job_1534851274873_0001 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1534851274873_0001 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Launched reduce tasks=
Data-local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total time spent by all reduce tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total vcore-milliseconds taken by all reduce tasks=
Total megabyte-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all reduce tasks=
Map-Reduce Framework
Map input records=
Map output records=
Map output bytes=
Map output materialized bytes=
Input split bytes=
Combine input records=
Combine output records=
Reduce input groups=
Reduce shuffle bytes=
Reduce input records=
Reduce output records=
Spilled Records=
Shuffled Maps =
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
Shuffle Errors
BAD_ID=
CONNECTION=
IO_ERROR=
WRONG_LENGTH=
WRONG_MAP=
WRONG_REDUCE=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
[yinzhengjie@s101 ~]$

3>. Check via the web UI whether output data was produced on HDFS
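   If the web UI is not handy, the same check can be done from the shell (paths taken from the job above; part-r-00000 is the default reducer output file name):

# List the job output directory on HDFS
hdfs dfs -ls /yinzhengjie/data/output

# Print the word-count result
hdfs dfs -cat /yinzhengjie/data/output/part-r-00000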

4>. Check the job record in the YARN web UI

5>. Try to view the history logs: the page cannot be reached

  Clicking the History link for the finished job fails at this point: the link resolves to the history server's web address (port 19888 once configured below), and no JobHistoryServer process is running yet.

II. Configuring the YARN history server

1>. Edit the "mapred-site.xml" configuration file

[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>s101:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>s101:19888</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.done-dir</name>
                <value>${yarn.app.mapreduce.am.staging-dir}/done</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.intermediate-done-dir</name>
                <value>${yarn.app.mapreduce.am.staging-dir}/done_intermediate</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.staging-dir</name>
                <value>/yinzhengjie/logs/hdfs/history</value>
        </property>
</configuration>

<!--
Purpose of the mapred-site.xml configuration file:
    # MapReduce-related settings, such as the default number of reduce tasks and the default
      memory limits for tasks. Parameters defined here override the defaults in mapred-default.xml.

mapreduce.framework.name:
    # Selects the MapReduce execution framework. Three values are possible: local (run locally),
      classic (the first-generation MRv1 framework), and yarn (the second-generation framework).
      We configure yarn, the current framework.

mapreduce.jobhistory.address:
    # The RPC address of the job history server.

mapreduce.jobhistory.webapp.address:
    # The web UI address (host:port) of the history server.

mapreduce.jobhistory.done-dir:
    # The HDFS directory where records of completed jobs are stored.

mapreduce.jobhistory.intermediate-done-dir:
    # The HDFS directory where records of still-running jobs are stored.

yarn.app.mapreduce.am.staging-dir:
    # The staging directory for the application ID and the job files it needs (jar files, etc.).
-->
[yinzhengjie@s101 ~]$
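   One detail worth noting: if the other nodes keep their own copies of the configuration directory, the edited file should be pushed to them as well. A minimal sketch, assuming passwordless SSH between the nodes (host names taken from the jps output above):

# Copy the edited mapred-site.xml to the remaining cluster nodes
for host in s102 s103 s104 s105; do
    scp /soft/hadoop/etc/hadoop/mapred-site.xml ${host}:/soft/hadoop/etc/hadoop/
done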

2>. Start the history server

[yinzhengjie@s101 ~]$ hdfs dfs -mkdir /yinzhengjie/logs/hdfs/history      # create the directory that will hold the history logs
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ mr-jobhistory-daemon.sh start historyserver      # start the history server
starting historyserver, logging to /soft/hadoop-2.7./logs/mapred-yinzhengjie-historyserver-s101.out
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ jps
ResourceManager
JobHistoryServer        # note: this is the history server process
NameNode
Jps
DFSZKFailoverController
[yinzhengjie@s101 ~]$
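   To double-check that the daemon is actually serving requests, the configured ports can be probed directly. A quick sketch (the ports come from mapred-site.xml above; ss and curl are assumed to be installed):

# RPC port (mapreduce.jobhistory.address) and web port (mapreduce.jobhistory.webapp.address)
ss -tlnp | grep -E '10020|19888'

# Fetch the history server's web UI front page
curl -s http://s101:19888/jobhistory | head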

3>. Run the MapReduce job on YARN again

[yinzhengjie@s101 ~]$ hdfs dfs -rm -R /yinzhengjie/data/output        # delete the previous output directory
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /yinzhengjie/data/output
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hadoop jar /soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7..jar wordcount /yinzhengjie/data/input /yinzhengjie/data/output
// :: INFO client.RMProxy: Connecting to ResourceManager at s101/172.30.1.101:
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1534851274873_0002
// :: INFO impl.YarnClientImpl: Submitted application application_1534851274873_0002
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1534851274873_0002/
// :: INFO mapreduce.Job: Running job: job_1534851274873_0002
// :: INFO mapreduce.Job: Job job_1534851274873_0002 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1534851274873_0002 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Launched reduce tasks=
Data-local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total time spent by all reduce tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total vcore-milliseconds taken by all reduce tasks=
Total megabyte-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all reduce tasks=
Map-Reduce Framework
Map input records=
Map output records=
Map output bytes=
Map output materialized bytes=
Input split bytes=
Combine input records=
Combine output records=
Reduce input groups=
Reduce shuffle bytes=
Reduce input records=
Reduce output records=
Spilled Records=
Shuffled Maps =
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
Shuffle Errors
BAD_ID=
CONNECTION=
IO_ERROR=
WRONG_LENGTH=
WRONG_MAP=
WRONG_REDUCE=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
[yinzhengjie@s101 ~]$

4>. Check via the web UI whether output data was produced on HDFS

5>. View the completed job in the YARN web UI's history list

6>. View the history records
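   Besides browsing the web UI at http://s101:19888/jobhistory, the history server also exposes a REST API. A hedged example (the ws/v1/history endpoint exists in Hadoop 2.x; verify against your version):

# List all jobs known to the history server
curl -s http://s101:19888/ws/v1/history/mapreduce/jobs

# Details for the second wordcount run
curl -s http://s101:19888/ws/v1/history/mapreduce/jobs/job_1534851274873_0002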

7>. Configure log aggregation

  For details, see: https://www.cnblogs.com/yinzhengjie/p/9471921.html
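  The linked post covers the details; the core of it is enabling log aggregation in yarn-site.xml so that container logs are collected into HDFS, where the history server can link to them. A minimal sketch (the property names are the standard YARN ones; the retain period is an arbitrary example value):

<property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
</property>
<property>
        <!-- keep aggregated logs for 7 days (example value) -->
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
</property>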
