Install etcd

Binary package download page: https://github.com/etcd-io/etcd/releases/tag/v3.2.12

[root@master ~]# GOOGLE_URL=https://storage.googleapis.com/etcd
[root@master ~]# GITHUB_URL=https://github.com/coreos/etcd/releases/download
[root@master ~]# DOWNLOAD_URL=${GOOGLE_URL}
[root@master ~]# ETCD_VER=v3.2.12
[root@master ~]# curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0M  100 10.0M    0     0  2161k      0  0:00:04  0:00:04 --:--:-- 2789k
[root@master ~]# ls /tmp
etcd-v3.2.12-linux-amd64.tar.gz
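The release also publishes a SHA256SUMS file that can be used to verify the download (a sketch, assuming the SHA256SUMS asset is available for this release on GitHub):

curl -L ${GITHUB_URL}/${ETCD_VER}/SHA256SUMS -o /tmp/SHA256SUMS
grep etcd-${ETCD_VER}-linux-amd64.tar.gz /tmp/SHA256SUMS | (cd /tmp && sha256sum -c -)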
Extract the archive
[root@master ~]# tar -zxf /tmp/etcd-v3.2.12-linux-amd64.tar.gz
[root@master ~]# ls
etcd-v3.2.12-linux-amd64
Create the cluster deployment directories
[root@master ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@master ~]# tree /opt/kubernetes
/opt/kubernetes
├── bin
├── cfg
└── ssl
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin
[root@master ~]# ls /opt/kubernetes/bin
etcd etcdctl
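Optionally, a quick sanity check that the binary runs on this host:

/opt/kubernetes/bin/etcd --version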
Add the configuration file
[root@master ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
# etcd member name
ETCD_NAME="etcd03"
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# peer (cluster) listen URL
ETCD_LISTEN_PEER_URLS="https://192.168.238.130:2380"
# client listen URL
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.130:2379"
# cluster member list
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Create the systemd unit file
[root@master ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTENT_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Copy the certificates to the specified directory
[root@master ~]# cp ssl/server*pem ssl/ca*pem /opt/kubernetes/ssl/
[root@master ~]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
Start etcd
[root@master ~]# systemctl start etcd
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
The start failed; check the logs
[root@master ~]# journalctl -u etcd
-- Logs begin at Tue 2019-07-02 17:22:07 EDT, end at Tue 2019-07-02 17:58:00 EDT. --
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8172]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8172]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8172]: start an etcd server
Jul 02 17:57:59 master etcd[8172]: etcd --version
Jul 02 17:57:59 master etcd[8172]: show the version of etcd
Jul 02 17:57:59 master etcd[8172]: etcd -h | --help
Jul 02 17:57:59 master etcd[8172]: show the help information about etcd
Jul 02 17:57:59 master etcd[8172]: etcd --config-file
Jul 02 17:57:59 master etcd[8172]: path to the server configuration file
Jul 02 17:57:59 master etcd[8172]: etcd gateway
Jul 02 17:57:59 master etcd[8172]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8172]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8172]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8176]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8176]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8176]: start an etcd server
Jul 02 17:57:59 master etcd[8176]: etcd --version
Jul 02 17:57:59 master etcd[8176]: show the version of etcd
Jul 02 17:57:59 master etcd[8176]: etcd -h | --help
Jul 02 17:57:59 master etcd[8176]: show the help information about etcd
Jul 02 17:57:59 master etcd[8176]: etcd --config-file
Jul 02 17:57:59 master etcd[8176]: path to the server configuration file
Jul 02 17:57:59 master etcd[8176]: etcd gateway
Jul 02 17:57:59 master etcd[8176]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8176]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8176]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
The same errors show up in /var/log/messages:
[root@master ~]# tail -n 20 /var/log/messages
Jul 2 17:58:00 localhost etcd: etcd --version
Jul 2 17:58:00 localhost etcd: show the version of etcd
Jul 2 17:58:00 localhost etcd: etcd -h | --help
Jul 2 17:58:00 localhost etcd: show the help information about etcd
Jul 2 17:58:00 localhost etcd: etcd --config-file
Jul 2 17:58:00 localhost etcd: path to the server configuration file
Jul 2 17:58:00 localhost etcd: etcd gateway
Jul 2 17:58:00 localhost etcd: run the stateless pass-through etcd TCP connection forwarding proxy
Jul 2 17:58:00 localhost etcd: etcd grpc-proxy
Jul 2 17:58:00 localhost etcd: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 2 17:58:00 localhost systemd: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jul 2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul 2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul 2 17:58:00 localhost systemd: etcd.service failed.
Jul 2 17:58:00 localhost systemd: etcd.service holdoff time over, scheduling restart.
Jul 2 17:58:00 localhost systemd: Stopped Etcd Server.
Jul 2 17:58:00 localhost systemd: start request repeated too quickly for etcd.service
Jul 2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul 2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul 2 17:58:00 localhost systemd: etcd.service failed.
The error (invalid value ",http://127.0.0.1:2379" for the listen-client-urls flag) points at the unit file: ExecStart references ${ETCD_LISTENT_CLIENT_URLS}, a misspelling of ETCD_LISTEN_CLIENT_URLS, so the variable expands to nothing and only ",http://127.0.0.1:2379" is left as the flag value. Correct the variable name, run systemctl daemon-reload, and start etcd again. ExecStart also passes --initial-cluster-token=${ETCD_INITIAL_CLUSTER} instead of ${ETCD_INITIAL_CLUSTER_TOKEN}; this does not stop startup, because every node ends up with the same value, but it should be fixed as well (see the corrected snippet after the process listing below). With the variable name corrected, etcd starts and begins trying to reach its peers:
[root@master ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: activating (start) since Tue 2019-07-02 18:32:55 EDT; 16s ago
Main PID: 8138 (etcd)
Memory: 20.5M
CGroup: /system.slice/etcd.service
└─8138 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.13...

Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 72
Jul 02 18:33:10 master etcd[8138]: health check for peer 203750a5948d27da could not connect: dial tcp 192.168.238.128:2380: i/o timeout
Jul 02 18:33:10 master etcd[8138]: health check for peer c858c42725f38881 could not connect: dial tcp 192.168.238.129:2380: i/o timeout
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 is starting a new election at term 72
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 became candidate at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 73
[root@master ~]# ps -ef|grep etcd
root 8138 1 0 18:32 ? 00:00:00 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.130:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.130:2379 --initial-advertise-peer-urls=https://192.168.238.130:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 8147 8085 0 18:34 pts/0 00:00:00 grep --color=auto etcd
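For reference, a sketch of the two corrections to /usr/lib/systemd/system/etcd.service. The first is the change that allowed the successful start above; the second fixes the token misreference still visible in the process listing (harmless here, since every node shares the same unit file, but worth correcting):

--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \

After editing the unit file, reload systemd and restart the service:

systemctl daemon-reload
systemctl restart etcd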
The master node is now deployed. The election retries and peer connection timeouts in the status output above are expected until the other two cluster members come online.
Generate an SSH key for passwordless login between nodes
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1b:b9:49:23:fc:32:64:6f:72:bd:77:d5:98:28:d4:a0 root@master
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . o |
| . E.. . |
| = S. . o.|
| o = B. . o o|
| + O .. . |
| * .. . |
| .. . |
+-----------------+
[root@master ~]# ls /root/.ssh/
id_rsa id_rsa.pub
Distribute the key to each node
[root@master ~]# ssh-copy-id root@192.168.238.129
The authenticity of host '192.168.238.129 (192.168.238.129)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.129's password:
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.129'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id root@192.168.238.128
The authenticity of host '192.168.238.128 (192.168.238.128)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.128's password:
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.128'"
and check to make sure that only the key(s) you wanted were added.
Test passwordless login
[root@master ~]# ssh root@192.168.238.129
Last login: Tue Jul 2 17:23:09 2019 from 192.168.238.1
[root@node01 ~]# hostname
node01
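The per-file scp steps that follow can also be batched into one loop; a minimal sketch, assuming /opt/kubernetes/{bin,cfg,ssl} already exists on both nodes:

for ip in 192.168.238.129 192.168.238.128; do
  scp -r /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl root@${ip}:/opt/kubernetes/
  scp /usr/lib/systemd/system/etcd.service root@${ip}:/usr/lib/systemd/system/
done

ETCD_NAME and the listen/advertise addresses in /opt/kubernetes/cfg/etcd still have to be edited on each node afterwards, as shown below.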
Create the etcd installation directories on node01
[root@node01 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Send the binaries from the master to node01
[root@master ~]# scp -r /opt/kubernetes/bin/ root@192.168.238.129:/opt/kubernetes/
etcd 100% 17MB 17.0MB/s 00:00
etcdctl 100% 15MB 14.5MB/s 00:01
Check the files on node01
[root@node01 ~]# ls /opt/kubernetes/bin/
etcd etcdctl
Send the configuration files from the master to node01
[root@master ~]# scp -r /opt/kubernetes/cfg/ root@192.168.238.129:/opt/kubernetes/
etcd
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service root@192.168.238.129:/usr/lib/systemd/system
etcd.service
Check the files on node01
[root@node01 ~]# ls /opt/kubernetes/cfg/
etcd
[root@node01 ~]# ll /usr/lib/systemd/system/etcd.service
-rw-r--r-- 1 root root 996 Jul 2 20:55 /usr/lib/systemd/system/etcd.service
Send the certificates from the master to node01
[root@master ~]# scp -r /opt/kubernetes/ssl/ root@192.168.238.129:/opt/kubernetes/
server-key.pem 100% 1675 1.6KB/s 00:00
server.pem 100% 1489 1.5KB/s 00:00
ca-key.pem 100% 1679 1.6KB/s 00:00
ca.pem
Check the files on node01
[root@node01 ~]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
Modify the configuration file
[root@node01 ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.238.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start etcd
[root@node01 ~]# systemctl start etcd
[root@node01 ~]# ps -ef|grep etcd
root 8702 1 0 21:01 ? 00:00:00 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.129:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.129:2379 --initial-advertise-peer-urls=https://192.168.238.129:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 8709 7875 0 21:02 pts/0 00:00:00 grep --color=auto etcd
[root@node01 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: activating (start) since Tue 2019-07-02 21:01:39 EDT; 54s ago
Main PID: 8702 (etcd)
Memory: 6.2M
CGroup: /system.slice/etcd.service
└─8702 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.12...

Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 36
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 became candidate at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 became candidate at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 38
Enable etcd to start at boot
[root@node01 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Deploy node02 in the same way: copy the binaries, configuration file, unit file, and certificates, then set ETCD_NAME and the listen/advertise addresses to node02's own values (a sketch of its config follows).
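For reference, a sketch of /opt/kubernetes/cfg/etcd on node02, assuming its address is 192.168.238.128 as listed in ETCD_INITIAL_CLUSTER (everything else is identical to the other nodes):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.238.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.128:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Then start and enable etcd on node02 with systemctl start etcd and systemctl enable etcd, as on node01.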

Check the cluster status

Set the PATH environment variable
[root@master ~]# tail -n 1 /etc/profile
PATH=/opt/kubernetes/bin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]# which etcd
/opt/kubernetes/bin/etcd
[root@master ~]# which etcdctl
/opt/kubernetes/bin/etcdctl
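The cluster-health command below uses etcdctl's default v2 API. For reference, a roughly equivalent check through the v3 API (a sketch; flag names follow etcdctl's v3 interface):

ETCDCTL_API=3 etcdctl --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" \
  --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
  endpoint health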
[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
; error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
; error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout

error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout
The timeouts are most likely caused by firewalld or SELinux blocking the etcd ports on one of the nodes (example remediation commands are sketched after the output below). After addressing that on every node, rerun the health check:
[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
member 203750a5948d27da is healthy: got healthy result from https://192.168.238.128:2379
member a7e9807772a004c5 is healthy: got healthy result from https://192.168.238.130:2379
member c858c42725f38881 is healthy: got healthy result from https://192.168.238.129:2379
cluster is healthy
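A minimal sketch of the firewall/SELinux remediation mentioned above, assuming CentOS 7 with firewalld; run it on every etcd node, and relax SELinux only as far as your security policy allows:

firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload
setenforce 0    # temporary; persist with SELINUX=permissive in /etc/selinux/config if required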
