localhost:50030/jobtracker.jsp (JobTracker web UI)

localhost:50060/tasktracker.jsp (TaskTracker web UI)

localhost:50070/dfshealth.jsp (NameNode/HDFS health web UI)

1. The NameNode process

The NameNode daemon itself – listens on port 9000

INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: asn-ThinkPad-SL410/127.0.1.1:9000
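
The RPC address in this line is taken from fs.default.name in conf/core-site.xml. A minimal sketch of that entry, assuming the property name used by Hadoop 0.20.x (host and port simply mirror this log, they are not a recommendation):

<!-- conf/core-site.xml: NameNode RPC endpoint (sketch) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>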

The NameNode's embedded Jetty server -- runs on port 50070, the NameNode web administration port

INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070

INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070

INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070

INFO org.mortbay.log: jetty-6.1.26

INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
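
The web port is configurable as well. A sketch of the matching conf/hdfs-site.xml entry, assuming the 0.20.x property name dfs.http.address; since 0.0.0.0:50070 should already be the shipped default, the entry is only needed to override it:

<!-- conf/hdfs-site.xml: NameNode web UI address (sketch) -->
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>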

2. The DataNode process

The DataNode daemon -- its data transfer service runs on port 50010

INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-1647545997-127.0.1.1-50010-1399439341888

INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010  -- network topology: one data node was added to the default rack

INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 2 msecs

DatanodeRegistration(asn-ThinkPad-SL410:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020)

................. DatanodeRegistration(127.0.0.1:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020) In DataNode.run, data = FSDataset{dirpath='/opt/hadoop/data/current'}
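
Port 50010 here is the DataNode's data streaming port (block reads and writes); the infoPort=50075 and ipcPort=50020 values in the registration line are covered next. A sketch of the data port's config entry, assuming the 0.20.x property name dfs.datanode.address:

<!-- conf/hdfs-site.xml: DataNode data transfer port (sketch) -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
</property>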

The DataNode's embedded Jetty server – runs on port 50075, the DataNode web administration port

INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075

INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075

INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075

INFO org.mortbay.log: jetty-6.1.26

INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
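
The corresponding config entry should be dfs.datanode.http.address (assumed 0.20.x property name, with 0.0.0.0:50075 as the shipped default):

<!-- conf/hdfs-site.xml: DataNode web UI address (sketch) -->
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
</property>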

The DataNode's IPC (RPC) server -- runs on port 50020

INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting

INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting

INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec

INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
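
This matches the ipcPort=50020 value in the registration line above. The assumed 0.20.x config entry:

<!-- conf/hdfs-site.xml: DataNode IPC (RPC) address (sketch) -->
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50020</value>
</property>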

3. The TaskTracker process

The TaskTracker service -- running on port 58567 in this run

2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:58567

2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567

2014-05-09 08:51:54,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 58567: starting

2014-05-09 08:52:24,443 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567
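
Unlike the fixed ports above, 58567 is not special: the TaskTracker's task-report RPC address defaults to port 0, so the OS assigns a fresh ephemeral port on each start, and another run would log a different number. A sketch of the entry behind this, assuming the 0.20.x property name mapred.task.tracker.report.address and its documented 127.0.0.1:0 default:

<!-- conf/mapred-site.xml: TaskTracker report RPC address (sketch) -->
<property>
  <name>mapred.task.tracker.report.address</name>
  <!-- port 0 = let the OS pick an ephemeral port at startup -->
  <value>127.0.0.1:0</value>
</property>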

The TaskTracker's embedded Jetty server -- runs on port 50060

2014-05-09 08:52:24,513 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060

2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060

2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060

2014-05-09 08:52:24,514 INFO org.mortbay.log: jetty-6.1.26

2014-05-09 08:52:25,088 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
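
The matching config entry should be mapred.task.tracker.http.address (assumed 0.20.x property name; 0.0.0.0:50060 should be the shipped default):

<!-- conf/mapred-site.xml: TaskTracker web UI address (sketch) -->
<property>
  <name>mapred.task.tracker.http.address</name>
  <value>0.0.0.0:50060</value>
</property>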

4. The JobTracker process

A job is made up of multiple tasks.

The JobTracker RPC service – runs on port 9001

The JobTracker's embedded Jetty server – runs on port 50030

2014-05-09 12:20:05,598 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as asn

2014-05-09 12:20:05,664 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.

2014-05-09 12:20:05,665 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.

2014-05-09 12:20:06,166 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030

2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030

2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030

2014-05-09 12:20:06,169 INFO org.mortbay.log: jetty-6.1.26

2014-05-09 12:20:07,481 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030

2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001

2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030

2014-05-09 12:20:08,165 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory

2014-05-09 12:20:08,479 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode

2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030

2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030

2014-05-09 12:20:08,513 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive

2014-05-09 12:20:08,931 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
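
Both JobTracker ports come from conf/mapred-site.xml: the RPC port is the port part of mapred.job.tracker, and the web port is mapred.job.tracker.http.address (the latter is the assumed 0.20.x key; the values below mirror this log):

<!-- conf/mapred-site.xml: JobTracker RPC and web UI (sketch) -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
</property>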

Note: to turn off NameNode safe mode, run:

[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode leave
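
The same -safemode switch also accepts get, enter, and wait, which are handy when scripting around a freshly started NameNode (it stays in safe mode until enough blocks have been reported):

[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode get
[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode enter
[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode wait

Here get prints the current state, enter forces safe mode on, and wait blocks until the NameNode leaves safe mode on its own.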
