Introduction

kubeadm offers two different high-availability topologies.

Stacked topology: etcd runs on the same nodes as the control plane. It needs less infrastructure, but is also less resilient to failures.

(Diagram: stacked topology)

At least three masters (i.e. control-plane nodes) are required, because etcd elects its leader with the Raft algorithm and the member count should be 2n+1.
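The 2n+1 sizing can be sanity-checked with a few lines of shell (a sketch; the helper names are mine, not from any tool):

```shell
#!/bin/bash
# Raft quorum: a cluster of n members needs floor(n/2)+1 votes to commit,
# so it tolerates n - (floor(n/2)+1) simultaneous member failures.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 2 3 4 5; do
  echo "members=$n quorum=$(quorum $n) tolerated=$(tolerated $n)"
done
```

Note that 4 members tolerate no more failures than 3 (both tolerate exactly one), which is why etcd clusters use odd sizes.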

External etcd topology: etcd is separated from the control plane. It requires more hardware, but offers stronger failure guarantees.

(Diagram: external etcd topology)

1. Environment

Hostname    OS        IP          Specs         Role
master-01   CentOS 8  10.0.0.11   4 CPU / 4 GB  control-plane 1
master-02   CentOS 8  10.0.0.12   4 CPU / 4 GB  control-plane 2
master-03   CentOS 8  10.0.0.13   4 CPU / 4 GB  control-plane 3
node-01     CentOS 8  10.0.0.14   4 CPU / 2 GB  worker 1
node-02     CentOS 8  10.0.0.15   4 CPU / 2 GB  worker 2
node-03     CentOS 8  10.0.0.16   4 CPU / 2 GB  worker 3
vip         CentOS 8  10.0.0.66   2 CPU / 1 GB  keepalived LB node (hosts the floating VIP 10.0.0.99)

The stacked kubeadm topology is used below to build the k8s cluster. That means if 2 of the 3 masters go down, the cluster becomes unavailable and requests may fail with "Error from server: etcdserver: request timed out".

2. System setup (all hosts)

Set the hostname

hostnamectl set-hostname master-*   # replace * with 01/02/03 on each master
hostnamectl set-hostname node-*     # replace * with 01/02/03 on each worker

Set a static IP

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens18
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens18
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.0.0.0
GATEWAY=10.0.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens18
UUID=555fe27b-19eb-4958-aca7-c9c71365432f
DEVICE=ens18
ONBOOT=yes
[root@localhost ~]# reboot

Configure /etc/hosts

[root@master-01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 master-01
10.0.0.12 master-02
10.0.0.13 master-03
10.0.0.14 node-01
10.0.0.15 node-02
10.0.0.16 node-03

Install dependencies

[root@node-01 ~]# yum update -y
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Last metadata expiration check: 0:19:42 ago on Sat 28 Nov 2020 04:25:04 PM CST.
Dependencies resolved.
Nothing to do.
Complete!
[root@node-01 ~]# yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bind-utils
...

Disable the firewall, swap and SELinux

[root@master-01 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master-01 ~]# swapoff -a
[root@master-01 ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
[root@master-01 ~]# sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
[root@master-01 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Nov 23 08:19:33 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=46ea6159-eda5-4931-ae11-73095cf284c1 /boot ext4 defaults 1 2
#/dev/mapper/cl-swap swap swap defaults 0 0
[root@master-01 ~]# setenforce 0
[root@master-01 ~]# vim /etc/sysconfig/selinux
[root@master-01 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
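The swap-commenting sed above edits /etc/fstab in place; it can be tried safely on a scratch copy first (a sketch; the sample fstab lines are illustrative):

```shell
#!/bin/bash
# Try the swap-commenting sed on a throwaway file instead of /etc/fstab.
tmp=$(mktemp)
cat > "$tmp" <<'FSTAB'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
FSTAB
sed -i '/swap/s/^\(.*\)$/#\1/g' "$tmp"
cat "$tmp"    # the swap line now starts with '#'
rm -f "$tmp"
```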

System parameters

# Load the ipvs kernel modules
[root@master-01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> modprobe br_netfilter
> EOF
# Load the modules and verify
[root@master-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_defrag_ipv6 20480 2 nf_conntrack_ipv6,ip_vs
nf_conntrack_ipv4 16384 1
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 155648 7 nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nft_ct,nf_nat_ipv6,nf_nat_ipv4,ip_vs
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs
# Create the sysctl config file
[root@master-01 ~]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
> net.bridge.bridge-nf-call-iptables=1
> net.bridge.bridge-nf-call-ip6tables=1
> net.ipv4.ip_forward=1
> net.ipv4.tcp_tw_recycle=1 # fast recycling of TIME-WAIT sockets (default 0 = off; removed in kernels >= 4.12, hence the error below)
> net.ipv4.tcp_keepalive_time=600 # idle seconds before keepalive probes start
> net.ipv4.tcp_keepalive_intvl=15 # interval between keepalive probes
> net.ipv4.tcp_keepalive_probes=3 # probes sent before an unresponsive peer is dropped
> vm.swappiness=0 # avoid swap; use it only when the system is near OOM
> vm.overcommit_memory=1 # don't check whether enough physical memory is available
> vm.panic_on_oom=0 # don't panic on OOM; let the OOM killer handle it
> fs.inotify.max_user_instances=8192
> fs.inotify.max_user_watches=1048576
> fs.file-max=52706963
> fs.nr_open=52706963
> net.ipv6.conf.all.disable_ipv6=1
> net.netfilter.nf_conntrack_max=2310720
> EOF
# Apply the config file
[root@master-01 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/ipv4/tcp_tw_recycle: No such file or directory
net.ipv4.tcp_keepalive_time = 600 # idle seconds before keepalive probes start
net.ipv4.tcp_keepalive_intvl = 15 # interval between keepalive probes
net.ipv4.tcp_keepalive_probes = 3 # probes sent before an unresponsive peer is dropped
vm.swappiness = 0 # avoid swap; use it only when the system is near OOM
vm.overcommit_memory = 1 # don't check whether enough physical memory is available
vm.panic_on_oom = 0 # don't panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
# Set the system timezone
[root@master-01 ~]# timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
[root@master-01 ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@master-01 ~]# systemctl restart rsyslog && systemctl restart crond
# Stop unneeded services
[root@master-01 ~]# systemctl stop postfix && systemctl disable postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
# Configure rsyslogd and systemd journald
[root@master-01 ~]# mkdir /var/log/journal
[root@master-01 ~]# mkdir /etc/systemd/journald.conf.d
[root@master-01 ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
> [Journal]
> # Persist logs to disk
> Storage=persistent
>
> # Compress archived logs
> Compress=yes
>
> SyncIntervalSec=5m
> RateLimitInterval=30s
> RateLimitBurst=1000
>
> # Cap total disk usage at 10G
> SystemMaxUse=10G
>
> # Cap each journal file at 200M
> SystemMaxFileSize=200M
>
> # Keep logs for 2 weeks
> MaxRetentionSec=2week
>
> # Don't forward to syslog
> ForwardToSyslog=no
> EOF
[root@master-01 ~]#
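The `cannot stat` error above happens because tcp_tw_recycle no longer exists on kernels 4.12 and later. A hedged sketch of guarding against that (the helper names are mine; `sysctl -w` needs root on a real host):

```shell
#!/bin/bash
# Map a sysctl key to its /proc/sys path, then only apply keys the
# running kernel actually exposes.
key_to_path() { echo "/proc/sys/${1//./\/}"; }

apply_if_present() {
  local key=$1 val=$2
  if [ -e "$(key_to_path "$key")" ]; then
    sysctl -w "$key=$val"
  else
    echo "skipping $key (not supported by this kernel)" >&2
  fi
}

# apply_if_present net.ipv4.tcp_tw_recycle 1   # would be skipped on modern kernels
```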

3. Install Docker (all hosts)

[root@master-02 ~]# wget https://download.docker.com/linux/centos/8/x86_64/stable/Packages/containerd.io-1.3.7-3.1.el8.x86_64.rpm
--2020-11-28 17:47:12-- https://download.docker.com/linux/centos/8/x86_64/stable/Packages/containerd.io-1.3.7-3.1.el8.x86_64.rpm
Resolving download.docker.com (download.docker.com)... 99.84.206.7, 99.84.206.109, 99.84.206.25, ...
Connecting to download.docker.com (download.docker.com)|99.84.206.7|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30388860 (29M) [binary/octet-stream]
Saving to: ‘containerd.io-1.3.7-3.1.el8.x86_64.rpm’
containerd.io-1.3.7-3.1 100%[===============================>] 28.98M 188KB/s in 3m 15s
2020-11-28 17:50:27 (153 KB/s) - ‘containerd.io-1.3.7-3.1.el8.x86_64.rpm’ saved [30388860/30388860]
[root@node-02 ~]# yum install ./containerd.io-1.3.7-3.1.el8.x86_64.rpm
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
...
# (assumes the docker-ce repo from download.docker.com has already been added)
[root@node-01 ~]# sudo yum -y install docker-ce
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
...
[root@master-01 ~]# systemctl start docker && systemctl enable docker
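One step the transcript above doesn't show, but which kubeadm's documentation recommends: running Docker with the systemd cgroup driver so kubelet and Docker agree on cgroup management. A hedged sketch (written to a demo path here; on a real host it goes to /etc/docker/daemon.json):

```shell
#!/bin/bash
# Sketch: a daemon.json selecting the systemd cgroup driver.
# Install to /etc/docker/daemon.json on a real host, then:
#   systemctl daemon-reload && systemctl restart docker
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json <<'JSON'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
JSON
cat /tmp/docker-demo/daemon.json
```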

4. Install kubeadm, kubelet and kubectl (all nodes; kubectl itself is only needed on the masters)

[root@master-01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master-01 ~]# yum install -y kubelet kubeadm kubectl
[root@master-01 ~]# systemctl enable kubelet && systemctl start kubelet
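The yum install above is unpinned, so it pulls whatever version is newest in the repo. Since kubeadm-config.yaml later sets kubernetesVersion: v1.19.4, pinning the packages to the same release avoids a version mismatch. A sketch that only builds the pinned command string:

```shell
#!/bin/bash
# Compose a pinned install command matching the kubernetesVersion used later.
K8S_VERSION=1.19.4
pkgs=""
for p in kubelet kubeadm kubectl; do
  pkgs="$pkgs ${p}-${K8S_VERSION}"
done
echo "yum install -y$pkgs"
# prints: yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
```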

5. Install LVS and keepalived (on the LB hosts)

[root@vip ~]# yum -y install keepalived

# Back up the default config, then edit
[root@vip ~]# cp /etc/keepalived/keepalived.conf{,.back}
[root@vip ~]# vim /etc/keepalived/keepalived.conf
[root@vip ~]# echo "" > /etc/keepalived/keepalived.conf
[root@vip ~]# vim /etc/keepalived/keepalived.conf
[root@vip ~]# systemctl enable keepalived && service keepalived start
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
Redirecting to /bin/systemctl start keepalived.service
[root@vip ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id keepalived-master
}

vrrp_instance vip_1 {
    state MASTER
    ! NIC name; check yours with `ip a`
    interface ens18
    ! virtual_router_id must match on the master and the backup
    virtual_router_id 88
    ! priority; the keepalived master must be higher than the backup
    priority 100
    advert_int 3
    ! the virtual IP address
    virtual_ipaddress {
        10.0.0.99
    }
}

virtual_server 10.0.0.99 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP

    real_server 10.0.0.12 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 10.0.0.13 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 10.0.0.11 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
}
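The three real_server stanzas differ only in the IP address, so they can be generated to avoid copy-paste drift when a master is added or removed (a sketch; the function name is mine):

```shell
#!/bin/bash
# Emit one keepalived real_server stanza per control-plane IP.
real_server_block() {
cat <<EOF
    real_server $1 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
EOF
}

for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
  real_server_block "$ip"
done
```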

Bind the VIP on a loopback alias (on the real servers, i.e. the masters)

[root@master-01 rs]# vim /opt/rs/rs.sh
[root@master-01 rs]# cat /opt/rs/rs.sh
#!/bin/bash
# virtual IP
vip=10.0.0.99
# tear down any previous lo:0
ifconfig lo:0 down
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
# bind the VIP to a loopback alias; use a /32 netmask so the host
# doesn't answer for the whole subnet
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
# ens18 is the primary NIC
echo "1" >/proc/sys/net/ipv4/conf/ens18/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/ens18/arp_announce

# If the script doesn't work, run the commands manually:
ifconfig lo:0 10.0.0.99 broadcast 10.0.0.99 netmask 255.255.255.255 up
route add -host 10.0.0.99 dev lo:0

# Run the script at boot
[root@master-01 ~]# echo '/opt/rs/rs.sh' >> /etc/rc.d/rc.local
[root@master-01 ~]# chmod +x /etc/rc.d/rc.local

Configure the keepalived backup

[root@vip ~]# vim /etc/keepalived/keepalived.conf
[root@vip ~]# systemctl enable keepalived && service keepalived start
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
Redirecting to /bin/systemctl start keepalived.service
[root@vip ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id keepalived-master
}

vrrp_instance vip_1 {
    state BACKUP
    ! NIC name; check yours with `ip a`
    interface ens18
    ! virtual_router_id must match on the master and the backup
    virtual_router_id 88
    ! priority; the keepalived master must be higher than the backup
    priority 99
    advert_int 3
    ! the virtual IP address
    virtual_ipaddress {
        10.0.0.99
    }
}

virtual_server 10.0.0.99 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP

    real_server 10.0.0.12 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 10.0.0.13 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
    real_server 10.0.0.11 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 6443
        }
    }
}

6. Build the cluster with kubeadm (commands differ per node)

master-01

[root@master-01 ~]# cd /opt/kubernetes/
[root@master-01 kubernetes]#
[root@master-01 kubernetes]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# must match the installed kubeadm version, or startup fails
kubernetesVersion: v1.19.4
# image registry: k8s.gcr.io is unreachable from some networks; a mirror can be
# used instead, see http://mirror.azure.cn/help/gcr-proxy-cache.html
# imageRepository: k8s.gcr.io/google_containers
# cluster name
clusterName: kubernetes
# the apiserver address clients connect to; use the VIP
controlPlaneEndpoint: "10.0.0.99:6443"
networking:
  # pod network CIDR
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.96.0.0/12
  dnsDomain: cluster.local
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# run kube-proxy in ipvs mode (the ipvs modules were loaded earlier)
mode: ipvs

# Pull the images
[root@master-01 kubernetes]# kubeadm config images pull
W1128 20:33:21.822265    4536 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.4
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
# Before initializing, remember to reset any previous state:
[root@master-01 kubernetes]# swapoff -a && kubeadm reset && systemctl daemon-reload && systemctl restart kubelet && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Initialize the first control-plane node
[root@master-01 kubernetes]# kubeadm init --config=kubeadm-config.yaml --upload-certs
...
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.0.0.99:6443 --token dtkoyq.8ciqez70nj1ysdix \
    --discovery-token-ca-cert-hash sha256:f65ee972a9e9d0b8784f7db583a9cdf9865253459aa96a9b3529be2517570155 \
    --control-plane --certificate-key 0dc20030f8dfdede8cbb3b0906eda1a3a140e91f7e6ebb6eac1ad02ac65389d3

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.99:6443 --token dtkoyq.8ciqez70nj1ysdix \
    --discovery-token-ca-cert-hash sha256:f65ee972a9e9d0b8784f7db583a9cdf9865253459aa96a9b3529be2517570155

# Install the network add-on
[root@master-01 kubernetes]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master-01 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-2hs76 0/1 Pending 0 5m18s
kube-system coredns-f9fd979d6-5j4w8 0/1 Pending 0 5m18s
kube-system etcd-master-01 1/1 Running 0 5m29s
kube-system kube-apiserver-master-01 1/1 Running 0 5m30s
kube-system kube-controller-manager-master-01 1/1 Running 0 5m30s
kube-system kube-flannel-ds-grhh6 0/1 Init:0/1 0 5s
kube-system kube-proxy-pl74w 1/1 Running 0 5m18s
kube-system kube-scheduler-master-01 1/1 Running 0 5m30s
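One caveat with the flannel apply above: the stock kube-flannel.yml defaults its Network to 10.244.0.0/16, while kubeadm-config.yaml set podSubnet to 10.10.0.0/16, and the two should agree. A sketch of patching the manifest before applying, demonstrated on a sample net-conf fragment (the real commands are in the comments):

```shell
#!/bin/bash
# Align flannel's Network with the cluster's podSubnet before applying.
# On the real manifest:
#   curl -fsSLo kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#   sed -i 's|10.244.0.0/16|10.10.0.0/16|' kube-flannel.yml
#   kubectl apply -f kube-flannel.yml
cat > /tmp/net-conf-demo.json <<'JSON'
{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
JSON
sed -i 's|10.244.0.0/16|10.10.0.0/16|' /tmp/net-conf-demo.json
cat /tmp/net-conf-demo.json
```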

Join master-02 and master-03

# Join as additional control-plane nodes
[root@master-03 ~]# kubeadm join 10.0.0.99:6443 --token dtkoyq.8ciqez70nj1ysdix --discovery-token-ca-cert-hash sha256:f65ee972a9e9d0b8784f7db583a9cdf9865253459aa96a9b3529be2517570155 --control-plane --certificate-key 0dc20030f8dfdede8cbb3b0906eda1a3a140e91f7e6ebb6eac1ad02ac65389d3
...
Pulling the images can be slow; be patient.
...
# After the join completes:
[root@master-03 ~]#  mkdir -p $HOME/.kube
[root@master-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-01 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-2hs76 1/1 Running 0 45m
kube-system coredns-f9fd979d6-5j4w8 1/1 Running 0 45m
kube-system etcd-master-01 1/1 Running 0 45m
kube-system etcd-master-02 1/1 Running 0 17m
kube-system etcd-master-03 0/1 Running 0 49s
kube-system kube-apiserver-master-01 1/1 Running 0 45m
kube-system kube-apiserver-master-02 1/1 Running 0 17m
kube-system kube-apiserver-master-03 1/1 Running 0 51s
kube-system kube-controller-manager-master-01 1/1 Running 1 45m
kube-system kube-controller-manager-master-02 1/1 Running 0 17m
kube-system kube-controller-manager-master-03 0/1 Running 0 51s
kube-system kube-flannel-ds-76vcb 0/1 Init:0/1 0 17s
kube-system kube-flannel-ds-8tqlh 1/1 Running 0 17m
kube-system kube-flannel-ds-fq8kz 0/1 Init:0/1 0 17s
kube-system kube-flannel-ds-grhh6 1/1 Running 0 40m
kube-system kube-flannel-ds-hqj25 1/1 Running 0 52s
kube-system kube-flannel-ds-rlg4z 0/1 Init:0/1 0 17s
kube-system kube-proxy-8kf2r 1/1 Running 0 17m
kube-system kube-proxy-9n6p4 0/1 ContainerCreating 0 17s
kube-system kube-proxy-9xdrl 1/1 Running 0 52s
kube-system kube-proxy-pl74w 1/1 Running 0 45m
kube-system kube-proxy-vtm97 0/1 ContainerCreating 0 17s
kube-system kube-proxy-wdrpx 0/1 ContainerCreating 0 17s
kube-system kube-scheduler-master-01 1/1 Running 1 45m
kube-system kube-scheduler-master-02 1/1 Running 0 17m
kube-system kube-scheduler-master-03 0/1 Running 0 51s

Join the worker nodes

[root@node-01 ~]# kubeadm join 10.0.0.99:6443 --token dtkoyq.8ciqez70nj1ysdix \
> --discovery-token-ca-cert-hash sha256:f65ee972a9e9d0b8784f7db583a9cdf9865253459aa96a9b3529be2517570155
[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# (note: admin.conf is only generated on control-plane nodes; copy it from a master if kubectl is wanted on a worker)
[root@master-01 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-01 Ready master 46m v1.19.4
master-02 Ready master 18m v1.19.4
master-03 Ready master 107s v1.19.4
node-01 Ready <none> 72s v1.19.4
node-02 Ready <none> 72s v1.19.4
node-03 Ready <none> 72s v1.19.4

At this point, the highly available cluster is fully deployed.

7. Deploy the Dashboard to manage the cluster

[root@master-01 ~]# wget -P /etc/kubernetes/addons https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc1/aio/deploy/recommended.yaml && cd /etc/kubernetes/addons

[root@master-01 addons]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Create an admin user

[root@master-01 addons]# cat <<EOF > dashboard-adminuser.yaml
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   name: admin-user
>   namespace: kubernetes-dashboard
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: ClusterRoleBinding
> metadata:
>   name: admin-user
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: cluster-admin
> subjects:
> - kind: ServiceAccount
>   name: admin-user
>   namespace: kubernetes-dashboard
> EOF
[root@master-01 addons]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Get the login token
[root@master-01 addons]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') | grep -E '^token' | awk '{print $2}'
