For an introduction to etcd and its typical use cases, see: http://www.infoq.com/cn/articles/etcd-interpretation-application-scenario-implement-principle

etcd releases: https://github.com/coreos/etcd/releases/tag/v3.2.10; download the latest release from there.

Environment:

master: 192.168.101.14, node1: 192.168.101.15, node2: 192.168.101.19

1. Download etcd-v3.2.10-linux-amd64.tar.gz on all three nodes and install the etcd binaries, as follows:

[root@docker ~]# tar xf etcd-v3.2.10-linux-amd64.tar.gz
[root@docker ~]# cd etcd-v3.2.10-linux-amd64
[root@docker etcd-v3.2.10-linux-amd64]# cp etcd etcdctl /usr/local/bin/
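If the tarball is not yet on the nodes, it can be fetched from the release page above, for example (URL assumed from the standard GitHub release layout):

# curl -LO https://github.com/coreos/etcd/releases/download/v3.2.10/etcd-v3.2.10-linux-amd64.tar.gz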

2. Create the etcd data directory on all three nodes:

# mkdir -p /var/lib/etcd

3. Create the etcd systemd unit file /usr/lib/systemd/system/etcd.service on each node.

On the master node:
[root@docker ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name master --initial-advertise-peer-urls http://192.168.101.14:2380 --listen-peer-urls http://192.168.101.14:2380 --listen-client-urls http://192.168.101.14:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.14:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target
On node1:
[root@localhost ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name node1 --initial-advertise-peer-urls http://192.168.101.15:2380 --listen-peer-urls http://192.168.101.15:2380 --listen-client-urls http://192.168.101.15:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.15:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target
On node2:
[root@localhost ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name node2 --initial-advertise-peer-urls http://192.168.101.19:2380 --listen-peer-urls http://192.168.101.19:2380 --listen-client-urls http://192.168.101.19:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.19:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380,node2=http://192.168.101.19:2380 --initial-cluster-state new --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=
LimitNOFILE=

[Install]
WantedBy=multi-user.target

Since TLS authentication is not configured here, all the URLs use http rather than https. etcd serves client requests on port 2379 and peer (member-to-member) traffic on port 2380.

With the unit file in place on all three nodes, reload systemd and start the service:

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd
# systemctl status etcd

Check the status on all three nodes:

[root@docker ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 12s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name master --initial-advertise-peer-urls http://192.168.101.14:2380 --listen-peer-urls http://192.168.101....
Nov :: docker etcd[]: enabled capabilities for version 3.0
Nov :: docker etcd[]: health check for peer 192d36c71643c39d could not connect: dial tcp 192.168.101.19:: getsockopt: con...n refused
Nov :: docker etcd[]: peer 192d36c71643c39d became active
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message writer)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 writer)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 reader)
Nov :: docker etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message reader)
Nov :: docker etcd[]: updating the cluster version from 3.0 to 3.2
Nov :: docker etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: docker etcd[]: enabled capabilities for version 3.2
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 15s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name node1 --initial-advertise-peer-urls http://192.168.101.15:2380 --listen-peer-urls http://192.168.101.1...
Nov :: localhost.localdomain systemd[]: Started etcd server.
Nov :: localhost.localdomain etcd[]: set the initial cluster version to 3.0
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.0
Nov :: localhost.localdomain etcd[]: peer 192d36c71643c39d became active
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message writer)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 reader)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream MsgApp v2 writer)
Nov :: localhost.localdomain etcd[]: established a TCP streaming connection with peer 192d36c71643c39d (stream Message reader)
Nov :: localhost.localdomain etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.2
[root@localhost ~]# systemctl status etcd
● etcd.service - etcd server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 17s ago
Main PID: (etcd)
CGroup: /system.slice/etcd.service
└─ /usr/local/bin/etcd --name node2 --initial-advertise-peer-urls http://192.168.101.19:2380 --listen-peer-urls http://192.168.101.1...
Nov :: localhost.localdomain etcd[]: dialing to target with scheme: ""
Nov :: localhost.localdomain etcd[]: could not get resolver for scheme: ""
Nov :: localhost.localdomain etcd[]: serving insecure client requests on 127.0.0.1:, this is strongly discouraged!
Nov :: localhost.localdomain etcd[]: ready to serve client requests
Nov :: localhost.localdomain etcd[]: dialing to target with scheme: ""
Nov :: localhost.localdomain etcd[]: could not get resolver for scheme: ""
Nov :: localhost.localdomain etcd[]: serving insecure client requests on 192.168.101.19:, this is strongly discouraged!
Nov :: localhost.localdomain systemd[]: Started etcd server.
Nov :: localhost.localdomain etcd[]: updated the cluster version from 3.0 to 3.2
Nov :: localhost.localdomain etcd[]: enabled capabilities for version 3.2
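Besides systemctl status, one can also confirm that each etcd instance is listening on the expected ports (2379 for clients, 2380 for peers), for example:

# ss -tlnp | grep etcd

The output should show etcd bound to the node's own IP on 2379 and 2380, plus 127.0.0.1:2379.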

Once the services come up without errors, list the cluster members; this can be run on any node:

[root@docker ~]# etcdctl member list
192d36c71643c39d: name=node2 peerURLs=http://192.168.101.19:2380 clientURLs=http://192.168.101.19:2379 isLeader=false
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

The cluster has automatically elected one node as leader. Next, check the cluster health:

[root@docker ~]# etcdctl cluster-health
member 192d36c71643c39d is healthy: got healthy result from http://192.168.101.19:2379
member 5f3835545a5f41e4 is healthy: got healthy result from http://192.168.101.14:2379
member 77c1ac60c5100363 is healthy: got healthy result from http://192.168.101.15:2379
cluster is healthy

Use etcdctl to write some data:

[root@docker ~]# etcdctl set name wadeson
wadeson

Read it back on node1 and node2:

[root@localhost ~]# etcdctl get name
wadeson
[root@localhost ~]# etcdctl get name
wadeson
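Note that the etcdctl shipped with etcd 3.2 defaults to the v2 API (the set/get commands above). To use the v3 API, which keeps its own keyspace separate from the v2 data written above, ETCDCTL_API=3 can be exported first; a minimal sketch:

# export ETCDCTL_API=3
# etcdctl --endpoints=http://192.168.101.14:2379 put name wadeson
# etcdctl --endpoints=http://192.168.101.14:2379 get name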

The following is another way to check cluster health:

[root@docker ~]# curl http://192.168.101.14:2379/health
{"health": "true"}[root@docker ~]# curl http://192.168.101.15:2379/health
{"health": "true"}[root@docker ~]# curl http://192.168.101.19:2379/health
{"health": "true"}

4. Operations on members of the etcd cluster:

4.1 Updating the advertise client URLs

  Change the --advertise-client-urls flag (or the ETCD_ADVERTISE_CLIENT_URLS environment variable) on that member, then restart it, as sketched below.
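A minimal sketch of that procedure, assuming the flag lives in the systemd unit file created above:

# vi /usr/lib/systemd/system/etcd.service     # change --advertise-client-urls to the new URL
# systemctl daemon-reload
# systemctl restart etcd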

4.2 Updating the advertise peer URLs:

  $ etcdctl member update 192d36c71643c39d http://192.168.101.15:2380

  Pass the member ID of the member whose peer URLs are to be changed, together with the new peer URLs, then restart that member.
4.3 Removing a member from the cluster:
First list the members:
[root@docker ~]# etcdctl member list
192d36c71643c39d: name=node2 peerURLs=http://192.168.101.19:2380 clientURLs=http://192.168.101.19:2379 isLeader=false
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Now remove node2, i.e. 192.168.101.19, from the cluster:

[root@docker ~]# etcdctl member remove 192d36c71643c39d
Removed member 192d36c71643c39d from cluster
[root@docker ~]# etcdctl member list
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Cluster member operations are always keyed by the member ID.

4.4 Adding a member to the cluster. The cluster currently has only two members:

[root@docker ~]# etcdctl member list
5f3835545a5f41e4: name=master peerURLs=http://192.168.101.14:2380 clientURLs=http://192.168.101.14:2379 isLeader=true
77c1ac60c5100363: name=node1 peerURLs=http://192.168.101.15:2380 clientURLs=http://192.168.101.15:2379 isLeader=false

Now add a member, i.e. a new node, to the cluster:

[root@docker ~]# etcdctl member add node2 http://192.168.101.19:2380
Added member named node2 with ID 4edc521f6598ba03 to cluster

ETCD_NAME="node2"
ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Then follow the printed instructions:

[root@docker ~]# export ETCD_NAME="node2"
[root@docker ~]# export ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
[root@docker ~]# export ETCD_INITIAL_CLUSTER_STATE="existing"
[root@docker ~]# etcd --listen-client-urls http://192.168.101.19:2379 --advertise-client-urls http://192.168.101.19:2379 --listen-peer-urls http://192.168.101.19:2380 --initial-advertise-peer-urls http://192.168.101.19:2380 --data-dir %data_dir%
-- ::33.649640 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380
-- ::33.649694 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
-- ::33.649706 I | pkg/flags: recognized and used environment variable ETCD_NAME=node2
-- ::33.649748 I | etcdmain: etcd Version: 3.2.
-- ::33.649752 I | etcdmain: Git SHA: 694728c
-- ::33.649754 I | etcdmain: Go Version: go1.8.5
-- ::33.649757 I | etcdmain: Go OS/Arch: linux/amd64
-- ::33.649760 I | etcdmain: setting maximum number of CPUs to , total number of available CPUs is
-- ::33.649850 C | etcdmain: listen tcp 192.168.101.19:: bind: cannot assign requested address

The final bind to 192.168.101.19 fails. The likely cause is that these commands were run on the master (note the [root@docker ~]# prompt) rather than on node2 itself, so the kernel cannot bind an address that is not configured on the local machine; the join has to be performed on node2.
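A minimal sketch of how the join would likely work when run on node2 itself (the old data directory left over from the removed member is cleared first, since stale data would conflict with the newly assigned member ID):

[root@localhost ~]# rm -rf /var/lib/etcd/*
[root@localhost ~]# export ETCD_NAME="node2"
[root@localhost ~]# export ETCD_INITIAL_CLUSTER="node2=http://192.168.101.19:2380,master=http://192.168.101.14:2380,node1=http://192.168.101.15:2380"
[root@localhost ~]# export ETCD_INITIAL_CLUSTER_STATE="existing"
[root@localhost ~]# etcd --listen-client-urls http://192.168.101.19:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.101.19:2379 --listen-peer-urls http://192.168.101.19:2380 --initial-advertise-peer-urls http://192.168.101.19:2380 --data-dir /var/lib/etcd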
