etcd Cluster Deployment
# Create directories for the etcd binaries, config files, and certificates
mkdir -p /opt/etcd/{bin,cfg,ssl}
# Create a directory for the downloaded package
mkdir -p /soft
# Unpack the etcd package and move the binaries to /opt/etcd/bin
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

etcd configuration file:
$ cat etcd
#[Member]
ETCD_NAME="etcd01" # node name; must differ per node (etcd02, etcd03 on the other nodes)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380" # peer (cluster) communication, port 2380
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379" # client communication, port 2379
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380" # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379" # advertised client address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380" # every node in the cluster; must be present and identical on each node
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token
ETCD_INITIAL_CLUSTER_STATE="new" # "new" creates a cluster; "existing" joins an existing one
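Only `ETCD_NAME` and the IP in the four URL variables differ between the three nodes. A minimal sketch of a helper that renders a node's config (hypothetical convenience function, not part of the original guide; the cluster list and ports are the ones used above):

```shell
#!/bin/bash
# render_etcd_cfg NAME IP - print the per-node etcd config to stdout.
render_etcd_cfg() {
  local name=$1 ip=$2
  cat <<EOF
#[Member]
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# e.g. generate node01's config:
render_etcd_cfg etcd02 192.168.1.65
```

Redirect the output to `/opt/etcd/cfg/etcd` on each node (e.g. `render_etcd_cfg etcd02 192.168.1.65 > etcd02.cfg`) instead of hand-editing three copies.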
Managing etcd with systemd

# The unit file's parameters all reference variables from the main config file,
# so if the service reports errors, first check /opt/etcd/cfg/etcd for mistakes.
root@k8s-master:/opt/etcd/cfg
$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
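Since every flag in `ExecStart` expands a variable from the `EnvironmentFile`, a quick sanity check before starting the unit can save a debugging round-trip. This is a hypothetical helper, not from the original guide; it sources the config in a subshell (systemd's EnvironmentFile parsing is close enough to shell syntax for this simple key="value" format) and confirms the key variables are set:

```shell
#!/bin/bash
# check_etcd_env FILE - verify the variables the unit's ExecStart expands.
check_etcd_env() {
  local f=$1
  (
    set -a                       # export everything the file defines
    . "$f" || exit 1             # subshell, so nothing leaks into the caller
    for v in ETCD_NAME ETCD_DATA_DIR ETCD_LISTEN_PEER_URLS \
             ETCD_LISTEN_CLIENT_URLS ETCD_INITIAL_CLUSTER; do
      [ -n "${!v}" ] || { echo "missing: $v" >&2; exit 1; }
    done
    echo ok
  )
}

# Example: check_etcd_env /opt/etcd/cfg/etcd
```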
Reload systemd, enable, and start etcd:

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Check the startup log:

tail -f /var/log/messages
# You will see errors about being unable to reach node01 and node02 (log below).
# This is because those nodes do not have the etcd config file and SSL certs yet,
# so this node keeps retrying. "systemctl start etcd" does start the service,
# but it hangs for a long time because it cannot reach its peers.
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")

Operations on node01 and node02

# Copy the master's config files to node01 and node02:
# recursively copy the files and directories under /opt/etcd/ to /opt on each node
scp -r /opt/etcd/ root@192.168.1.66:/opt
scp -r /opt/etcd/ root@192.168.1.65:/opt
# Copy etcd.service to /usr/lib/systemd/system/ on node01 and node02
scp /usr/lib/systemd/system/etcd.service root@192.168.1.65:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.1.66:/usr/lib/systemd/system/
# Then run "tail -f /var/log/messages" again
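The four scp commands above follow one pattern per node, so they can be collapsed into a loop. A dry-run sketch (hypothetical helper that only prints the commands; the node IPs are the ones used in this guide):

```shell
#!/bin/bash
# gen_scp_cmds IP... - print the two scp commands needed for each node.
gen_scp_cmds() {
  local ip
  for ip in "$@"; do
    echo "scp -r /opt/etcd/ root@${ip}:/opt"
    echo "scp /usr/lib/systemd/system/etcd.service root@${ip}:/usr/lib/systemd/system/"
  done
}

gen_scp_cmds 192.168.1.65 192.168.1.66
```

Once the printed commands look right, pipe the output to `bash` (or drop the `echo`s) to actually copy the files.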
PS:
# This is a VM environment, and the clock-difference errors below were caused by
# the master and node clocks being out of sync; running "ntpdate time.windows.com"
# on each node fixes them.
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792944111s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861673928s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858782669s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.793075827s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.795990455s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858938895s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861743791s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.796159244s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792476037s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")

# Keep the clocks synced automatically with a cron job:
$ crontab -l
* * * * * ntpdate time.windows.com >/dev/null 2>&1

Finally, test the state of the cluster nodes:
(Done)
# If you see the output below, the cluster was deployed successfully. If there
# is a problem, check the logs first: /var/log/messages or journalctl -u etcd
root@k8s-master:~
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379" cluster-health
member 472edcb0986774fe is healthy: got healthy result from https://192.168.1.65:2379
member 89e49aedde68fee4 is healthy: got healthy result from https://192.168.1.66:2379
member ddaf91a76208ea00 is healthy: got healthy result from https://192.168.1.63:2379
cluster is healthy
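When the check runs inside a script, parsing the `cluster-health` output is handy. A hypothetical helper (not from the original guide) that succeeds only when no member was reported unhealthy and the final "cluster is healthy" summary line is present:

```shell
#!/bin/bash
# cluster_is_healthy OUTPUT - return 0 iff the etcdctl cluster-health
# output shows no unhealthy member and the healthy summary line.
cluster_is_healthy() {
  local out=$1
  if grep -q 'is unhealthy' <<<"$out"; then
    return 1
  fi
  grep -qx 'cluster is healthy' <<<"$out"
}

# Usage (same flags and endpoints as above):
# out=$(/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
#   --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
#   --endpoints="https://192.168.1.63:2379" cluster-health)
# cluster_is_healthy "$out" && echo "cluster OK"
```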
