This post summarizes the ports used by components of the Hadoop ecosystem, including HDFS, MapReduce, HBase, Hive, Spark, WebHCat, Impala, Alluxio, and Sqoop; it will be updated over time. The port numbers shown are the stock upstream defaults (Hadoop 1.x-era services where applicable); specific distributions may override them.

HDFS Ports

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50070 | http | Web UI to look at current status of HDFS, explore file system | Yes (typically admins, Dev/Support teams) | dfs.http.address |
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50470 | https | Secure http service | Yes (typically admins, Dev/Support teams) | dfs.https.address |
| NameNode metadata service | Master Nodes (NameNode and any back-up NameNodes) | 8020/9000 | IPC | File system metadata operations | Yes (all clients that interact directly with HDFS) | Embedded in URI specified by fs.default.name |
| DataNode | All Slave Nodes | 50075 | http | DataNode WebUI to access status, logs, etc. | Yes (typically admins, Dev/Support teams) | dfs.datanode.http.address |
| DataNode | All Slave Nodes | 50475 | https | Secure http service | Yes (typically admins, Dev/Support teams) | dfs.datanode.https.address |
| DataNode | All Slave Nodes | 50010 |  | Data transfer |  | dfs.datanode.address |
| DataNode | All Slave Nodes | 50020 | IPC | Metadata operations | No | dfs.datanode.ipc.address |
| Secondary NameNode | Secondary NameNode and any backup Secondary NameNode | 50090 | http | Checkpoint for NameNode metadata | No | dfs.secondary.http.address |
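
For a client, the only port that normally matters here is the NameNode IPC port embedded in fs.default.name. Below is a minimal sketch using the Hadoop Java client that lists the HDFS root directory over the 8020 IPC port; the host name namenode-host is a placeholder, and on Hadoop 2+ the configuration key is fs.defaultFS rather than fs.default.name.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPortExample {
    public static void main(String[] args) throws Exception {
        // NameNode IPC address; "namenode-host" is a placeholder for your master node.
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode-host:8020"); // fs.defaultFS on Hadoop 2+

        // Connect through the metadata service port and list the root directory.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```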

Map Reduce Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| JobTracker WebUI | Master Nodes (JobTracker node and any back-up JobTracker node) | 50030 | http | Web UI for JobTracker | Yes | mapred.job.tracker.http.address |
| JobTracker | Master Nodes (JobTracker node) | 8021 | IPC | For job submissions | Yes (all clients that submit MapReduce jobs, including Hive, Hive server, Pig) | Embedded in URI specified by mapred.job.tracker |
| TaskTracker Web UI and Shuffle | All Slave Nodes | 50060 | http | TaskTracker Web UI to access status, logs, etc. | Yes (typically admins, Dev/Support teams) | mapred.task.tracker.http.address |
| History Server WebUI |  | 51111 | http | Web UI for Job History | Yes | mapreduce.history.server.http.address |
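
A quick way to verify the JobTracker IPC port is to ask it for cluster status through the old MR1 JobClient API. A minimal sketch, assuming an MR1 cluster and a placeholder host jobtracker-host:

```java
import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class JobTrackerPortExample {
    public static void main(String[] args) throws Exception {
        // MR1-style configuration; point mapred.job.tracker at the JobTracker IPC port.
        JobConf conf = new JobConf();
        conf.set("mapred.job.tracker", "jobtracker-host:8021");

        // Query the cluster over the job-submission port.
        JobClient client = new JobClient(conf);
        ClusterStatus status = client.getClusterStatus();
        System.out.println("TaskTrackers: " + status.getTaskTrackers());
        System.out.println("Map slots:    " + status.getMaxMapTasks());
        System.out.println("Reduce slots: " + status.getMaxReduceTasks());
        client.close();
    }
}
```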

HBase Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| HMaster | Master Nodes (HBase Master node and any back-up HBase Master node) | 60000 |  |  | Yes | hbase.master.port |
| HMaster Info Web UI | Master Nodes (HBase Master node and back-up HBase Master node if any) | 60010 | http | The port for the HBase Master web UI. Set to -1 if you do not want the info server to run. | Yes | hbase.master.info.port |
| Region Server | All Slave Nodes | 60020 |  |  | Yes (typically admins, dev/support teams) | hbase.regionserver.port |
| Region Server Info Web UI | All Slave Nodes | 60030 | http |  | Yes (typically admins, dev/support teams) | hbase.regionserver.info.port |
| ZooKeeper | All ZooKeeper Nodes | 2888 |  | Port used by ZooKeeper peers to talk to each other. See the ZooKeeper documentation for more information. | No | hbase.zookeeper.peerport |
| ZooKeeper | All ZooKeeper Nodes | 3888 |  | Port used by ZooKeeper peers for leader election. See the ZooKeeper documentation for more information. |  | hbase.zookeeper.leaderport |
| ZooKeeper | All ZooKeeper Nodes | 2181 |  | Property from ZooKeeper's config zoo.cfg. The port at which clients connect. |  | hbase.zookeeper.property.clientPort |
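
Note that HBase clients do not connect to hbase.master.port directly; they locate the cluster through ZooKeeper, so the quorum hosts and hbase.zookeeper.property.clientPort are what a client application needs. A minimal sketch using the HBase 1.x+ client API; the zk-host names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBasePortExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The client finds the HMaster and RegionServers through ZooKeeper,
        // using the clientPort (2181 by default).
        conf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2,zk-host3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            for (TableName table : admin.listTableNames()) {
                System.out.println(table.getNameAsString());
            }
        }
    }
}
```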

Hive Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
|---|---|---|---|---|---|---|
| Hive Server2 | Hive Server machine (usually a utility machine) | 10000 | thrift | Service for programmatically (Thrift/JDBC) connecting to Hive | Yes (clients that connect to Hive either programmatically or through UI SQL tools that use JDBC) | ENV variable HIVE_PORT |
| Hive Metastore |  | 9083 | thrift |  | Yes (clients that run Hive, Pig and potentially M/R jobs that use HCatalog) | hive.metastore.uris |
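
The HiveServer2 Thrift port is what JDBC clients connect to. A minimal sketch, assuming the default port 10000, an unsecured cluster, and a placeholder host hiveserver2-host:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcPortExample {
    public static void main(String[] args) throws Exception {
        // HiveServer2 Thrift/JDBC endpoint (default port 10000).
        String url = "jdbc:hive2://hiveserver2-host:10000/default";
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```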

WebHCat Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
|---|---|---|---|---|---|
| WebHCat Server | Any utility machine | 50111 | http | Web API on top of HCatalog and other Hadoop services | Yes |
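
Because WebHCat is a plain HTTP REST API, the port can be verified with a simple GET against the Templeton status endpoint. A minimal sketch; webhcat-host is a placeholder:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHCatPortExample {
    public static void main(String[] args) throws Exception {
        // WebHCat (Templeton) REST endpoint on the default port 50111.
        URL url = new URL("http://webhcat-host:50111/templeton/v1/status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP " + conn.getResponseCode());

        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}
```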

Spark Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Spark GUI | Nodes running Spark | 4040 | Spark web interface (per-application UI) for monitoring and troubleshooting |
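
The application web UI port is controlled by spark.ui.port, and Spark falls back to the next free port if it is already taken. A minimal sketch in Java that starts a local application and pins the UI port; the job itself is just a placeholder count:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkUiPortExample {
    public static void main(String[] args) {
        // spark.ui.port controls the per-application web UI (4040 by default;
        // Spark tries 4041, 4042, ... if the port is in use).
        SparkConf conf = new SparkConf()
                .setAppName("ui-port-example")
                .setMaster("local[*]")
                .set("spark.ui.port", "4040");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            long count = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5)).count();
            System.out.println("count = " + count + "; UI at http://localhost:4040");
        }
    }
}
```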

Impala Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Impala Daemon | Nodes running the Impala daemon | 21000 | Used by impala-shell to transmit commands and receive results |
| Impala Daemon | Nodes running the Impala daemon | 21050 | Used by applications through JDBC |
| Impala Daemon | Nodes running the Impala daemon | 25000 | Impala web interface for monitoring and troubleshooting |
| Impala StateStore Daemon | Nodes running the Impala StateStore daemon | 25010 | StateStore web interface for monitoring and troubleshooting |
| Impala Catalog Daemon | Nodes running the Impala Catalog daemon | 25020 | Catalog service web interface for monitoring and troubleshooting |
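
The Impala ports above serve different clients (impala-shell, JDBC/ODBC applications, and the embedded web UIs). A minimal reachability sketch that simply opens a TCP connection to each port; all host names are placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ImpalaPortCheck {
    public static void main(String[] args) {
        // Placeholder host names; substitute your own Impala nodes.
        checkPort("impalad-host", 21000);     // impala-shell
        checkPort("impalad-host", 21050);     // JDBC/ODBC
        checkPort("impalad-host", 25000);     // impalad web UI
        checkPort("statestore-host", 25010);  // StateStore web UI
        checkPort("catalog-host", 25020);     // Catalog web UI
    }

    static void checkPort(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000);
            System.out.println(host + ":" + port + " reachable");
        } catch (IOException e) {
            System.out.println(host + ":" + port + " NOT reachable: " + e.getMessage());
        }
    }
}
```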

Alluxio Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
|---|---|---|---|---|---|
| Alluxio Web GUI | Any utility machine | 19999 | http | Web GUI to check Alluxio status | Yes |
| Alluxio API | Any utility machine | 19998 | TCP | API to access data on Alluxio | No |
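
The 19998 RPC port is what data-access clients use. One common way to reach it is through Alluxio's Hadoop-compatible FileSystem client; the sketch below assumes the Alluxio client jar is on the classpath and maps the alluxio:// scheme to alluxio.hadoop.FileSystem, with alluxio-master as a placeholder host:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AlluxioPortExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Map the alluxio:// scheme to the Alluxio Hadoop-compatible client
        // (requires the Alluxio client jar on the classpath).
        conf.set("fs.alluxio.impl", "alluxio.hadoop.FileSystem");

        // 19998 is the Alluxio RPC port from the table above.
        FileSystem fs = FileSystem.get(URI.create("alluxio://alluxio-master:19998/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```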

Sqoop Ports:

| Service | Servers | Default Ports Used | Description |
|---|---|---|---|
| Sqoop server | Nodes running Sqoop | 12000 | Used by the Sqoop client to access the Sqoop server |
