Deploying a Hadoop 2.6.5 High-Availability Cluster on Seven Machines
1. Notes on the HA architecture
- Of the two NameNodes, only one may respond to client requests at any given time, and the responding node must be in the active state.
- The standby node must be able to take over as active quickly and seamlessly, so the two NameNodes have to keep their metadata consistent at all times.
- The edits files are stored in qjournal (the Quorum Journal Manager, a distributed edit-log service made up of JournalNode daemons) instead of on the two NameNodes themselves; if each NameNode kept its own copy of the edits and synchronized over the network, availability and safety would be greatly reduced.
- Each NameNode has a monitoring process, ZKFC (ZKFailoverController), which watches whether the NameNode is healthy.
- To avoid split-brain during a state transition, a fencing script kills the old NameNode process, ensuring only one NameNode is ever active.
- The two NameNodes together form a single logical nameservice (ns1 below) that clients address.
2. Preparation
Prepare seven machines. Their roles in this cluster are: CentOS7One and CentOS7Two run the NameNodes (with ZKFC); CentOS7Three and CentOS7Four run the ResourceManagers; CentOS7Five, CentOS7Six and CentOS7Seven run ZooKeeper, JournalNode, DataNode and NodeManager.
3. Installation
Installing the JDK and Hadoop on CentOS7One is not covered again here; refer to the earlier article on that.
First, set up passwordless SSH.
CentOS7One needs passwordless SSH to CentOS7Five, CentOS7Six and CentOS7Seven, used when starting zookeeper and the DataNodes.
CentOS7Three needs passwordless SSH to CentOS7Five, CentOS7Six and CentOS7Seven, used when starting the NodeManagers.
For the detailed steps, see the Hadoop passwordless-key configuration article.
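A minimal sketch of that setup, assuming root logins and default key paths (hostnames as used throughout this post):
# On CentOS7One (repeat on CentOS7Three for its target hosts):
ssh-keygen -t rsa              # accept the defaults, empty passphrase
ssh-copy-id root@CentOS7Five
ssh-copy-id root@CentOS7Six
ssh-copy-id root@CentOS7Seven
# Verify that no password prompt appears:
ssh root@CentOS7Five hostname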
3.1 ZooKeeper configuration
Configure zoo.cfg on CentOS7Five
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zkdata
dataLogDir=/opt/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.4=192.168.94.142:2888:3888
server.5=192.168.94.143:2888:3888
server.6=192.168.94.144:2888:3888
Copy the configured zookeeper directory to CentOS7Six and CentOS7Seven
scp -r /opt/zookeeper/zookeeper-3.4.10 CentOS7Six:/opt/zookeeper/zookeeper-3.4.10
scp -r /opt/zookeeper/zookeeper-3.4.10 CentOS7Seven:/opt/zookeeper/zookeeper-3.4.10
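One step the post does not show explicitly, assuming the standard ZooKeeper layout: each server.N entry in zoo.cfg must be matched by a myid file under dataDir on that host, containing just the number N.
# On CentOS7Five (server.4); use 5 on CentOS7Six and 6 on CentOS7Seven
mkdir -p /opt/zkdata /opt/zkdatalog
echo 4 > /opt/zkdata/myid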
3.2 Hadoop configuration
Edit core-site.xml on CentOS7One
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-2.6.5/data</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>CentOS7Five:2181,CentOS7Six:2181,CentOS7Seven:2181</value>
</property>
</configuration>
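For reference (a usage note, not part of the original walkthrough): because fs.defaultFS points at the logical nameservice ns1 rather than a single host, clients address HDFS through ns1 and the failover proxy picks whichever NameNode is active, e.g.:
hdfs dfs -ls hdfs://ns1/
hdfs dfs -ls /          # equivalent, since fs.defaultFS is hdfs://ns1/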
Edit hdfs-site.xml on CentOS7One
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<!-- Set the HDFS nameservice to ns1; this must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<!-- ns1 has two NameNodes, nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>CentOS7One:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>CentOS7One:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>CentOS7Two:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>CentOS7Two:50070</value>
</property>
<!-- Where the NameNode's shared edits are stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://CentOS7Five:8485;CentOS7Six:8485;CentOS7Seven:8485/ns1</value>
</property>
<!-- Where each JournalNode stores its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop-2.6.5/journaldata</value>
</property>
<!-- Enable automatic failover of the NameNode -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Client-side failover implementation (proxy provider) -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; list multiple methods on separate lines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- The sshfence method requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence SSH connection, in milliseconds -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
Edit mapred-site.xml on CentOS7One
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Edit yarn-site.xml on CentOS7One
<?xml version="1.0"?>
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster id of the RM pair -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- Logical ids of the RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostname of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>CentOS7Three</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>CentOS7Four</value>
</property>
<!-- ZooKeeper quorum address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>CentOS7Five:2181,CentOS7Six:2181,CentOS7Seven:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Copy the configured Hadoop to all the other machines
scp -r /usr/local/hadoop-2.6.5 CentOS7Two:/usr/local/hadoop-2.6.5/
scp -r /usr/local/hadoop-2.6.5 CentOS7Three:/usr/local/hadoop-2.6.5/
scp -r /usr/local/hadoop-2.6.5 CentOS7Four:/usr/local/hadoop-2.6.5/
scp -r /usr/local/hadoop-2.6.5 CentOS7Five:/usr/local/hadoop-2.6.5/
scp -r /usr/local/hadoop-2.6.5 CentOS7Six:/usr/local/hadoop-2.6.5/
scp -r /usr/local/hadoop-2.6.5 CentOS7Seven:/usr/local/hadoop-2.6.5/
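Not shown above, and assumed to have been done during the base installation: for start-dfs.sh and start-yarn.sh to reach the worker nodes, the slaves file under /usr/local/hadoop-2.6.5/etc/hadoop should list the three worker hosts, one per line:
CentOS7Five
CentOS7Six
CentOS7Seven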
At this point, the configuration is complete.
4. Startup sequence
Start ZooKeeper on CentOS7Five, CentOS7Six and CentOS7Seven
sh zkServer.sh start
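To confirm the ensemble formed correctly, running the status command on each of the three nodes should report one leader and two followers:
sh zkServer.sh status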
Start the JournalNodes on CentOS7Five, CentOS7Six and CentOS7Seven
sh hadoop-daemon.sh start journalnode
Format the NameNode on CentOS7One (only needed for the first startup). After formatting, copy the data directory to the standby NameNode CentOS7Two as described in section 7.1, otherwise the standby will fail to start.
hdfs namenode -format
Format the ZKFC state in ZooKeeper from CentOS7One (only needed for the first startup)
hdfs zkfc -formatZK
Start HDFS from CentOS7One
sh start-dfs.sh
Start YARN on CentOS7Three and CentOS7Four
sh start-yarn.sh
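Once everything is up, the HA state can be checked with the standard admin commands (a quick sanity check, not required by the original steps):
hdfs haadmin -getServiceState nn1    # expect "active" or "standby"
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2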
5. Results
After a successful start, CentOS7One should show the following processes
[root@CentOS7One ~]# jps
17397 DFSZKFailoverController
17111 NameNode
17480 Jps
CentOS7Two
[root@CentOS7Two ~]# jps
2497 Jps
2398 DFSZKFailoverController
2335 NameNode
CentOS7Three
[root@CentOS7Three ~]# jps
2344 ResourceManager
2619 Jps
CentOS7Four
[root@CentOS7Four ~]# jps
2344 ResourceManager
2619 Jps
CentOS7Five
[root@CentOS7Five logs]# jps
2803 Jps
2310 QuorumPeerMain
2460 JournalNode
2668 NodeManager
2543 DataNode
CentOS7Six
[root@CentOS7Six ~]# jps
2400 JournalNode
2608 NodeManager
2483 DataNode
2743 Jps
2301 QuorumPeerMain
CentOS7Seven
[root@CentOS7Seven ~]# jps
2768 Jps
2313 QuorumPeerMain
2425 JournalNode
2650 NodeManager
2525 DataNode
Visit http://centos7one:50070; the cluster overview should show three DataNodes.
Visit http://centos7three:8088/ to check the state of the YARN cluster.
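As an optional check of the HA behaviour (not part of the original walkthrough), killing the active NameNode should make ZKFC promote the standby within a few seconds:
# On the host whose NameNode is currently active (e.g. CentOS7One):
jps                                  # note the NameNode pid
kill -9 <NameNode-pid>               # simulate a crash
hdfs haadmin -getServiceState nn2    # should now report "active"
sh hadoop-daemon.sh start namenode   # bring the killed NameNode back as standby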
6. Acknowledgements
https://www.cnblogs.com/biehongli/p/7660310.html
https://www.bilibili.com/video/av15390641/?p=44
7. Common problems
7.1 The standby NameNode fails to start
Cause: after the primary NameNode is formatted, the standby node's data directory still holds the old metadata.
Fix: after formatting the primary, copy its data directory to the standby.
sh /usr/local/hadoop-2.6.5/bin/hadoop namenode -format
scp -r /usr/local/hadoop-2.6.5/data/ CentOS7Two:/usr/local/hadoop-2.6.5/
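An alternative to copying the data directory by hand, assuming the JournalNodes and the active NameNode are already running, is Hadoop's built-in bootstrap command run on the standby:
# On CentOS7Two:
hdfs namenode -bootstrapStandby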
1.创建DLL新项目Dll1,Dll1.cpp: extern "C" __declspec(dllexport) const char* myfunc() { return &q ...