For an introduction to etcd and its typical use cases, see: http://www.infoq.com/cn/articles/etcd-interpretation-application-scenario-implement-principle

etcd releases: https://github.com/coreos/etcd/releases/tag/v3.2.10 — download the latest release from there.

Environment:

master: 192.168.101.14, node1: 192.168.101.15, node2: 192.168.101.19

1. On all three nodes, download etcd-v3.2.10-linux-amd64.tar.gz, unpack it, and install the etcd and etcdctl binaries:

[root@docker ~]# tar xf etcd-v3.2.10-linux-amd64.tar.gz
[root@docker ~]# cd etcd-v3.2.10-linux-amd64
[root@docker etcd-v3.2.10-linux-amd64]# cp etcd etcdctl /usr/local/bin/

2. Create the etcd data directory on all three nodes:

# mkdir -p /var/lib/etcd
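Steps 1 and 2 can also be pushed to all three nodes from a single machine. A minimal sketch (not from the original post), assuming password-less root SSH to the three hosts above; it only echoes the commands until you clear RUN:

```shell
# Dry-run deployment loop: copies the unpacked binaries to every node and
# creates the data directory. Set RUN= (empty) to actually execute.
NODES="192.168.101.14 192.168.101.15 192.168.101.19"
RUN=${RUN:-echo}          # "echo" = dry run; "" = really run scp/ssh
deploy() {
  for ip in $NODES; do
    $RUN scp etcd etcdctl "root@$ip:/usr/local/bin/"
    $RUN ssh "root@$ip" mkdir -p /var/lib/etcd
  done
}
deploy
```

Run it once with the default dry run to review the commands, then `RUN= deploy` to execute them.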

3. On each node, create the systemd unit file /usr/lib/systemd/system/etcd.service.

master node:
[root@docker ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name master --initial-advertise-peer-urls http://192.168.101.14:2380 --listen-peer-urls http://192.168.101.14:2380 --listen-client-urls http://192.168.101.14:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.14:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target
node1:
[root@localhost ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name node1 --initial-advertise-peer-urls http://192.168.101.15:2380 --listen-peer-urls http://192.168.101.15:2380 --listen-client-urls http://192.168.101.15:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.15:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target
node2:
[root@localhost ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name node2 --initial-advertise-peer-urls http://192.168.101.19:2380 --listen-peer-urls http://192.168.101.19:2380 --listen-client-urls http://192.168.101.19:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.19:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target

Since TLS authentication is not configured, all URLs use http rather than https. etcd serves client requests on port 2379 and peer (server-to-server) traffic on port 2380.
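The three unit files above differ only in --name and the node's own IP, so they can be generated from one template. A sketch (not part of the original post): it writes etcd-<name>.service files into the current directory; copy each one to its node as /usr/lib/systemd/system/etcd.service. EnvironmentFile, RestartSec and LimitNOFILE are omitted for brevity.

```shell
# Generate one unit file per node from a shared template.
CLUSTER="master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380"
OUT=${OUT:-.}   # output directory; copy results to the nodes afterwards
gen_unit() {    # gen_unit <name> <ip>
  cat > "$OUT/etcd-$1.service" <<EOF
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd --name $1 \\
  --initial-advertise-peer-urls http://$2:2380 --listen-peer-urls http://$2:2380 \\
  --listen-client-urls http://$2:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls http://$2:2379 \\
  --initial-cluster-token etcd-cluster-1 --initial-cluster $CLUSTER \\
  --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
gen_unit master 192.168.101.14
gen_unit node1  192.168.101.15
gen_unit node2  192.168.101.19
```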

With the unit files in place on all three nodes, reload systemd, then enable and start the service:

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd
# systemctl status etcd

Check the status on each of the three nodes:

[root@docker ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 12s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name master --initial-advertise-peer-urls http://192.168.101.14:2380 --listen-peer-urls http://192.168.101...
Nov :: docker etcd[]: enabled capabilities for version 3.0
Nov :: docker etcd[]: health check for peer 192d36c71643c39d could not connect: dial tcp 192.168.101.19:: getsockopt: con...n refused
Nov :: docker etcd[]: peer 192d36c71643c39d became active
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message writer)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 writer)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 reader)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message reader)
Nov :: docker etcd[]: updating the cluster version from 3.0 to 3.2
Nov :: docker etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: docker etcd[]: enabled capabilities for version 3.2
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 15s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name node1 --initial-advertise-peer-urls http://192.168.101.15:2380 --listen-peer-urls http://192.168.101.1...
Nov :: localhost.localdomain systemd[]: Started etcd server.
Nov :: localhost.localdomain etcd[]: set the initial cluster version to 3.0
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.0
Nov :: localhost.localdomain etcd[]: peer 192d36c71643c39d became active
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message writer)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 reader)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 writer)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message reader)
Nov :: localhost.localdomain etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.2
[root@localhost ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 17s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name node2 --initial-advertise-peer-urls http://192.168.101.19:2380 --listen-peer-urls http://192.168.101.1...
Nov :: localhost.localdomain etcd[]: dialing to target with scheme: ""
Nov :: localhost.localdomain etcd[]: could not get resolver for scheme: ""
Nov :: localhost.localdomain etcd[]: serving insecure client requests on 127.0.0.1:, this is strongly discouraged!
Nov :: localhost.localdomain etcd[]: ready to serve client requests
Nov :: localhost.localdomain etcd[]: dialing to target with scheme: ""
Nov :: localhost.localdomain etcd[]: could not get resolver for scheme: ""
Nov :: localhost.localdomain etcd[]: serving insecure client requests on 192.168.101.19:, this is strongly discouraged!
Nov :: localhost.localdomain systemd[]: Started etcd server.
Nov :: localhost.localdomain etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.2

Once there are no errors, list the cluster members (from any node):

[root@docker ~]# etcdctl member list
192d36c71643c39d: name=node2 peerURLs=http://192.168.101.19:2380 clientURLs=http://192.168.101.19:2379 isLeader=false
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

The cluster has automatically elected one node as leader. Next, check cluster health:

[root@docker ~]# etcdctl cluster-health
member 192d36c71643c39d is healthy: got healthy result from http://192.168.101.19:2379
member 5f3835545a5f41e4 is healthy: got healthy result from http://192.168.101.14:2379
member 77c1ac60c5100363 is healthy: got healthy result from http://192.168.101.15:2379
cluster is healthy

Use etcdctl to write some data:

[root@docker ~]# etcdctl set name wadeson
wadeson

Read it back on node1 and node2:

[root@localhost ~]# etcdctl get name
wadeson
[root@localhost ~]# etcdctl get name
wadeson

The /health endpoint is another way to check cluster health:

[root@docker ~]# curl http://192.168.101.14:2379/health
{"health": "true"}
[root@docker ~]# curl http://192.168.101.15:2379/health
{"health": "true"}
[root@docker ~]# curl http://192.168.101.19:2379/health
{"health": "true"}
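These checks can be scripted. A small sketch: the check_health helper below is hypothetical (not part of etcd) and simply polls each member's /health endpoint:

```shell
# Poll the /health endpoint of each given member IP on port 2379 and
# report whether it returned {"health": "true"}.
check_health() {  # check_health <ip>...
  for ip in "$@"; do
    body=$(curl -s -m 2 "http://$ip:2379/health" 2>/dev/null) || body=""
    case "$body" in
      *'"health"'*'true'*) echo "$ip healthy" ;;
      *)                   echo "$ip UNREACHABLE" ;;
    esac
  done
}
# check_health 192.168.101.14 192.168.101.15 192.168.101.19
```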

4. Operating on members of the etcd cluster:

4.1. Update a member's advertise client URLs:

  Change the --advertise-client-urls flag (or the ETCD_ADVERTISE_CLIENT_URLS environment variable) on that member, then restart it.

4.2. Update a member's advertise peer URLs:

  $ etcdctl member update 192d36c71643c39d http://192.168.101.15:2380

  Pass the ID of the member you want to modify together with its new peer URLs, then restart that member.
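Looking up the member ID by name can be scripted as well. A sketch (member_id is a hypothetical helper) that parses the `etcdctl member list` output shown above:

```shell
# Print the member ID for a given member name by parsing
# `etcdctl member list` (v2 API output, the default in etcd 3.2).
member_id() {  # member_id <name>
  etcdctl member list 2>/dev/null |
    awk -v n="$1" 'index($0, "name=" n " ") { sub(/:.*/, ""); print }'
}
# Example: id=$(member_id node1)
#          etcdctl member update "$id" http://192.168.101.15:2380
```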
4.3. Remove a member from the cluster:

First list the members:
[root@docker ~]# etcdctl member list
192d36c71643c39d: name=node2 peerURLs=http://192.168.101.19:2380 clientURLs=http://192.168.101.19:2379 isLeader=false
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Now remove node2 (192.168.101.19) from the cluster:

[root@docker ~]# etcdctl member remove 192d36c71643c39d
Removed member 192d36c71643c39d from cluster
[root@docker ~]# etcdctl member list
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Member operations are keyed by the member ID.

4.4. Add a member to the cluster. The cluster currently has only two members:

[root@docker ~]# etcdctl member list
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Now add a member (node2) back to the cluster:

[root@docker ~]# etcdctl member add node2 http://192.168.101.19:2380
Added member named node2 with ID 4edc521f6598ba03 to cluster

ETCD_NAME="node2"
ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Then follow the printed instructions:

[root@docker ~]# export ETCD_NAME="node2"
[root@docker ~]# export ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
[root@docker ~]# export ETCD_INITIAL_CLUSTER_STATE="existing"
[root@docker ~]# etcd --listen-client-urls http://192.168.101.19:2379 --advertise-client-urls http://192.168.101.19:2379 --listen-peer-urls http://192.168.101.19:2380 --initial-advertise-peer-urls http://192.168.101.19:2380 --data-dir %data_dir%
-- ::33.649640 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380
-- ::33.649694 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
-- ::33.649706 I | pkg/flags: recognized and used environment variable ETCD_NAME=node2
-- ::33.649748 I | etcdmain: etcd Version: 3.2.
-- ::33.649752 I | etcdmain: Git SHA: 694728c
-- ::33.649754 I | etcdmain: Go Version: go1.8.5
-- ::33.649757 I | etcdmain: Go OS/Arch: linux/amd64
-- ::33.649760 I | etcdmain: setting maximum number of CPUs to , total number of available CPUs is
-- ::33.649850 C | etcdmain: listen tcp 192.168.101.19:: bind: cannot assign requested address

The bind fails because this command was run on the master ([root@docker ~]), where 192.168.101.19 is not a local address, so etcd cannot listen on it. The new member has to be started on node2 itself. Note also that a removed member must rejoin with an empty data directory; its old data must be cleared first.
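A sketch of the corrected procedure, to be run on node2 itself rather than on the master. The exported values are exactly what `etcdctl member add` printed; the destructive and long-running commands are left commented so nothing is executed by accident:

```shell
# On node2 (192.168.101.19), NOT on the master.
# 1) Stop any old etcd and clear the stale data directory (a removed
#    member cannot rejoin with its old data):
#      systemctl stop etcd && rm -rf /var/lib/etcd/*
# 2) Export the values printed by `etcdctl member add`:
export ETCD_NAME="node2"
export ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
export ETCD_INITIAL_CLUSTER_STATE="existing"
# 3) Start etcd; note --initial-cluster-state is "existing", not "new":
#      etcd --listen-client-urls http://192.168.101.19:2379,http://127.0.0.1:2379 \
#           --advertise-client-urls http://192.168.101.19:2379 \
#           --listen-peer-urls http://192.168.101.19:2380 \
#           --initial-advertise-peer-urls http://192.168.101.19:2380 \
#           --data-dir /var/lib/etcd
```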
