1.3.3 Setting Up Hadoop with CDH: Before Installation (Ports Used by CDH Components)
All of the ports listed are TCP.
| Component | Service | Qualifier | Port | Access Requirement | Configuration | Comment |
|---|---|---|---|---|---|---|
| Hadoop HDFS | DataNode | | 50010 | External | dfs.datanode.address | DataNode HTTP server port |
| | DataNode | Secure | 1004 | External | dfs.datanode.address | |
| | DataNode | | 50075 | External | dfs.datanode.http.address | |
| | DataNode | | 50475 | External | dfs.datanode.https.address | |
| | DataNode | Secure | 1006 | External | dfs.datanode.http.address | |
| | DataNode | | 50020 | External | dfs.datanode.ipc.address | |
| | NameNode | | 8020 | External | fs.default.name or fs.defaultFS | fs.default.name is deprecated (but still works) |
| | NameNode | | 8022 | External | dfs.namenode.servicerpc-address | Optional port used by HDFS daemons to avoid sharing the RPC port used by clients (8020). Cloudera recommends using port 8022. |
| | NameNode | | 50070 | External | dfs.http.address or dfs.namenode.http-address | dfs.http.address is deprecated (but still works) |
| | NameNode | Secure | 50470 | External | dfs.https.address or dfs.namenode.https-address | dfs.https.address is deprecated (but still works) |
| | Secondary NameNode | | 50090 | Internal | dfs.secondary.http.address or dfs.namenode.secondary.http-address | dfs.secondary.http.address is deprecated (but still works) |
| | Secondary NameNode | Secure | 50495 | Internal | dfs.secondary.https.address | |
| | JournalNode | | 8485 | Internal | dfs.namenode.shared.edits.dir | |
| | JournalNode | | 8480 | Internal | dfs.journalnode.http-address | |
| | JournalNode | | 8481 | Internal | dfs.journalnode.https-address | |
| | Failover Controller | | 8019 | Internal | | Used for NameNode HA |
| | NFS gateway | | 2049 | External | nfs port (nfs3.server.port) | |
| | NFS gateway | | 4242 | External | mountd port (nfs3.mountd.port) | |
| | NFS gateway | | 111 | External | portmapper or rpcbind port | |
| | NFS gateway | | 50079 | External | nfs.http.port | CDH 5.4.0 and higher. The NFS gateway daemon uses this port to serve metrics. The port is configurable on versions 5.10 and higher. |
| | NFS gateway | Secure | 50579 | External | nfs.https.port | CDH 5.4.0 and higher. The NFS gateway daemon uses this port to serve metrics. The port is configurable on versions 5.10 and higher. |
| | HttpFS | | 14000 | External | | |
| | HttpFS | | 14001 | External | | |
| Hadoop YARN (MRv2) | ResourceManager | | 8032 | External | yarn.resourcemanager.address | |
| | ResourceManager | | 8030 | Internal | yarn.resourcemanager.scheduler.address | |
| | ResourceManager | | 8031 | Internal | yarn.resourcemanager.resource-tracker.address | |
| | ResourceManager | | 8033 | External | yarn.resourcemanager.admin.address | |
| | ResourceManager | | 8088 | External | yarn.resourcemanager.webapp.address | |
| | ResourceManager | | 8090 | External | yarn.resourcemanager.webapp.https.address | |
| | NodeManager | | 8040 | Internal | yarn.nodemanager.localizer.address | |
| | NodeManager | | 8041 | Internal | yarn.nodemanager.address | |
| | NodeManager | | 8042 | External | yarn.nodemanager.webapp.address | |
| | NodeManager | | 8044 | External | yarn.nodemanager.webapp.https.address | |
| | JobHistory Server | | 10020 | Internal | mapreduce.jobhistory.address | |
| | JobHistory Server | | 10033 | Internal | mapreduce.jobhistory.admin.address | |
| | Shuffle HTTP | | 13562 | Internal | mapreduce.shuffle.port | |
| | JobHistory Server | | 19888 | External | mapreduce.jobhistory.webapp.address | |
| | JobHistory Server | | 19890 | External | mapreduce.jobhistory.webapp.https.address | |
| | ApplicationMaster | | | External | | The ApplicationMaster serves an HTTP service on an ephemeral port that cannot be restricted. This port is never accessed directly from outside the cluster by clients. All requests to the ApplicationMaster web server are routed through the YARN ResourceManager (proxy service). Locking down access to ephemeral port ranges within the cluster's network might restrict your access to the ApplicationMaster UI and its logs, along with the ability to look at running applications. |
| Flume | Flume Agent | | 41414 | External | | |
| Hadoop KMS | Key Management Server | | 16000 | External | kms_http_port | CDH 5.2.1 and higher. Applies to both Java KeyStore KMS and Key Trustee KMS. |
| | Key Management Server | | 16001 | Localhost | kms_admin_port | CDH 5.2.1 and higher. Applies to both Java KeyStore KMS and Key Trustee KMS. |
| HBase | Master | | 60000 | External | hbase.master.port | IPC |
| | Master | | 60010 | External | hbase.master.info.port | HTTP |
| | RegionServer | | 60020 | External | hbase.regionserver.port | IPC |
| | RegionServer | | 60030 | External | hbase.regionserver.info.port | HTTP |
| | HQuorumPeer | | 2181 | Internal | hbase.zookeeper.property.clientPort | HBase-managed ZooKeeper mode |
| | HQuorumPeer | | 2888 | Internal | hbase.zookeeper.peerport | HBase-managed ZooKeeper mode |
| | HQuorumPeer | | 3888 | Internal | hbase.zookeeper.leaderport | HBase-managed ZooKeeper mode |
| | REST | Non-Cloudera Manager-managed | 8080 | External | hbase.rest.port | The default REST port in HBase is 8080. Because this is a commonly used port, Cloudera Manager sets the default to 20550 instead. |
| | REST | Cloudera Manager-managed | 20550 | External | hbase.rest.port | The default REST port in HBase is 8080. Because this is a commonly used port, Cloudera Manager sets the default to 20550 instead. |
| | REST UI | | 8085 | External | | |
| | Thrift Server | Thrift Server | 9090 | External | Pass `-p <port>` on CLI | |
| | Thrift Server | | 9095 | External | | |
| | Avro server | | 9090 | External | Pass `--port <port>` on CLI | |
| hbase-solr-indexer | Lily Indexer | | 11060 | External | | |
| Hive | Metastore | | 9083 | External | | |
| | HiveServer2 | | 10000 | External | hive.server2.thrift.port | The Beeline command interpreter requires that you specify this port on the command line. If you use an Oracle database, you must manually reserve this port. For more information, see Reserving Ports for HiveServer 2. |
| | HiveServer2 Web User Interface (UI) | | 10002 | External | hive.server2.webui.port in hive-site.xml | |
| | WebHCat Server | | 50111 | External | templeton.port | |
| Hue | Server | | 8888 | External | | |
| Kafka | Broker | TCP Port | 9092 | External/Internal | port | The primary communication port used by producers and consumers; also used for inter-broker communication. |
| | Broker | TLS/SSL Port | 9093 | External/Internal | ssl_port | A secured communication port used by producers and consumers; also used for inter-broker communication. |
| | Broker | JMX Port | 9393 | Internal | jmx_port | Internal use only. Used for administration via JMX. |
| | MirrorMaker | JMX Port | 9394 | Internal | jmx_port | Internal use only. Used to administer the producer and consumer of MirrorMaker. |
| | Broker | HTTP Metric Report Port | 24042 | Internal | kafka.http.metrics.port | Internal use only. The port on which the HTTP metric reporter listens; used to retrieve metrics through HTTP instead of JMX. |
| Kudu | Master | | 7051 | External | | Kudu Master RPC port |
| | Master | | 8051 | External | | Kudu Master HTTP server port |
| | TabletServer | | 7050 | External | | Kudu TabletServer RPC port |
| | TabletServer | | 8050 | External | | Kudu TabletServer HTTP server port |
| Oozie | Oozie Server | | 11000 | External | OOZIE_HTTP_PORT in oozie-env.sh | HTTP |
| | Oozie Server | SSL | 11443 | External | | HTTPS |
| Sentry | Sentry Server | | 8038 | External | sentry.service.server.rpc-port | |
| | Sentry Server | | 51000 | External | sentry.service.web.port | |
| Spark | Default Master RPC port | | 7077 | External | | |
| | Default Worker RPC port | | 7078 | External | | |
| | Default Master web UI port | | 18080 | External | | |
| | Default Worker web UI port | | 18081 | External | | |
| | History Server | | 18088 | External | history.port | |
| | Shuffle service | | 7337 | Internal | | |
| Sqoop | Metastore | | 16000 | External | sqoop.metastore.server.port | |
| Sqoop 2 | Sqoop 2 server | | 8005 | Localhost | SQOOP_ADMIN_PORT environment variable | |
| | Sqoop 2 server | | 12000 | External | | |
| | Sqoop 2 | | 12001 | External | | Admin port |
| ZooKeeper | Server (with CDH 5 or Cloudera Manager 5) | | 2181 | External | clientPort | Client port |
| | Server (with CDH 5 only) | | 2888 | Internal | X in server.N=host:X:Y | Peer |
| | Server (with CDH 5 only) | | 3888 | Internal | X in server.N=host:X:Y | Peer |
| | Server (with CDH 5 and Cloudera Manager 5) | | 3181 | Internal | X in server.N=host:X:Y | Peer |
| | Server (with CDH 5 and Cloudera Manager 5) | | 4181 | Internal | X in server.N=host:X:Y | Peer |
| | ZooKeeper JMX port | | 9010 | Internal | | ZooKeeper will also use another randomly selected port for RMI. To allow Cloudera Manager to monitor ZooKeeper, you must do one of the following: |
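
Before installation it is also worth confirming that none of these default ports are already occupied on the hosts that will carry the corresponding roles (HBase REST's default of 8080 is a well-known example of a port that frequently collides, which is why Cloudera Manager moves it to 20550). The snippet below is a minimal sketch of such a pre-flight check, not part of the Cloudera documentation: the port list is an illustrative subset of the table above and should be trimmed to the roles planned for the host it runs on.

```python
#!/usr/bin/env python3
# Pre-installation check: verify that the CDH default ports planned for this host
# are not already in use, by attempting to bind each one. A failed bind normally
# means another process is listening and the CDH default would have to be changed.
import socket

# Illustrative subset of the table above -- trim to the roles this host will run.
PLANNED_PORTS = {
    8020:  "NameNode RPC (fs.defaultFS)",
    50070: "NameNode web UI",
    8032:  "ResourceManager RPC",
    8088:  "ResourceManager web UI",
    9083:  "Hive Metastore",
    2181:  "ZooKeeper client port",
}

def port_is_free(port: int) -> bool:
    """Return True if the TCP port can be bound on all local interfaces."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port, role in sorted(PLANNED_PORTS.items()):
        status = "free" if port_is_free(port) else "ALREADY IN USE"
        print(f"{port:>6}  {role:<28} {status}")
```

Ports below 1024 (for example the secure DataNode ports 1004 and 1006, or the NFS portmapper on 111) can only be bound as root, so run the check with matching privileges if those roles apply. Once the services are installed and running, reachability of the External ports from outside the cluster is a separate firewall question and is easier to verify with a plain TCP connect test from a client machine.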