1.1 Read Before Installing

Do not use servers that contain Chinese characters (hostnames, paths, etc.) or cloned virtual machines.

For production environments, installing from binaries is recommended.

Remember to replace the IP addresses in this document with your own!!!

1.2 Basic Environment Configuration

The kubeadm installation procedure has barely changed since version 1.14, so this document can be used to install the latest K8s cluster; CentOS 7.x is used here.

K8S official site:

https://kubernetes.io/docs/setup/

Latest high-availability installation guide:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

1.2.1 High-Availability Kubernetes Cluster Plan

Hostname        IP Address     Description
k8s-master01    10.3.50.11     Master node 1
k8s-master02    10.3.50.12     Master node 2
k8s-master03    10.3.50.13     Master node 3
k8s-master-lb   10.3.50.100    Keepalived virtual IP
k8s-node01      10.3.50.14     Worker node
k8s-node02      10.3.50.15     Worker node

Configuration    Value
OS version       CentOS 7.9
Docker version   20.10.x
Pod CIDR         10.16.0.0/12
Service CIDR     10.244.0.0/16

Note: the host subnet, the K8s Service CIDR, and the Pod CIDR must not overlap!!!

The VIP (virtual IP) must not conflict with any IP already used on your company LAN: ping it first, and only use it if it is unreachable. The VIP must be on the same subnet as your hosts (do not use the IP above verbatim; pick one in your own host subnet)!

On public clouds, the VIP is the IP of the cloud load balancer, e.g. the address of an internal SLB on Alibaba Cloud or an internal ELB on Tencent Cloud. In that case there is no need to set up keepalived and haproxy!

1.2.2 Basic Environment Configuration

Configure hosts on all nodes by editing /etc/hosts as follows:

Be sure to use your own IP addresses!!!

  [root@k8s-master01 ~]# vim /etc/hosts
  10.3.50.11 k8s-master01
  10.3.50.12 k8s-master02
  10.3.50.13 k8s-master03
  10.3.50.100 k8s-master-lb      # If this is not an HA cluster, use master01's IP here!
  10.3.50.14 k8s-node01
  10.3.50.15 k8s-node02

Configure the CentOS 7 yum repositories as follows:

  curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  yum install -y yum-utils device-mapper-persistent-data lvm2
  yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

  sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the required tools

  yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

On all nodes, disable firewalld, SELinux, dnsmasq, and swap. Configure the servers as follows:

  systemctl disable --now firewalld
  systemctl disable --now dnsmasq
  systemctl disable --now NetworkManager      # Do not disable this on public clouds

  setenforce 0
  sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
  sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition

  swapoff -a && sysctl -w vm.swappiness=0
  sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
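To confirm the change took effect, a quick check (the Swap line should show all zeros):

  free -h | grep -i swap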

Install ntpdate

  rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
  yum install ntpdate -y

Synchronize the time on all nodes. Configure time synchronization as follows:

  ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  echo 'Asia/Shanghai' >/etc/timezone
  ntpdate time2.aliyun.com
  crontab -e
  # Add to crontab
  */5 * * * * /usr/sbin/ntpdate time2.aliyun.com

Configure limits on all nodes:

  ulimit -SHn 65535

  vim /etc/security/limits.conf
  # Append the following at the end
  * soft nofile 65536
  * hard nofile 131072
  * soft nproc 65535
  * hard nproc 655350
  * soft memlock unlimited
  * hard memlock unlimited

Set up passwordless SSH from the master01 node to the other nodes. The configuration files and certificates generated during installation are all created on master01, and cluster management is also done from master01. On Alibaba Cloud or AWS, a separate kubectl server is needed. Configure the keys as follows:

  ssh-keygen -t rsa
  for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
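To confirm passwordless login works, a quick loop such as the following (a sketch) should print each hostname without prompting for a password:

  for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done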

Download all of the source files:

  cd /root/
  git clone https://github.com/dotbalo/k8s-ha-install.git

If the GitHub repository cannot be downloaded, use the Gitee mirror instead:

  git clone https://gitee.com/dukuan/k8s-ha-install.git

Upgrade the system on all nodes and reboot; the kernel is not upgraded at this point.

  yum update -y --exclude=kernel* && reboot      # CentOS 7 should be upgraded; on CentOS 8 upgrade as needed

1.3 Kernel Configuration

CentOS 7 needs its kernel upgraded to 4.18+; this document upgrades to 4.19.

Download the kernel packages on the master01 node!

  cd /root
  wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
  wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from master01 to the other nodes:

  for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes

  cd /root && yum localinstall -y kernel-ml*

Change the kernel boot order on all nodes

  grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
  grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19

  [root@k8s-master01 ~]# grubby --default-kernel
  /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then confirm the running kernel is 4.19

  [root@k8s-master02 ~]# uname -a
  Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on all nodes:

  yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the IPVS modules on all nodes. In kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:

  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack

  vim /etc/modules-load.d/ipvs.conf
  # Add the following
  ip_vs
  ip_vs_lc
  ip_vs_wlc
  ip_vs_rr
  ip_vs_wrr
  ip_vs_lblc
  ip_vs_lblcr
  ip_vs_dh
  ip_vs_sh
  ip_vs_fo
  ip_vs_nq
  ip_vs_sed
  ip_vs_ftp
  nf_conntrack
  ip_tables
  ip_set
  xt_set
  ipt_set
  ipt_rpfilter
  ipt_REJECT
  ipip

Then run systemctl enable --now systemd-modules-load.service

Enable the kernel parameters required by a K8s cluster; configure them on all nodes:

  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  fs.may_detach_mounts = 1
  net.ipv4.conf.all.route_localnet = 1
  vm.overcommit_memory=1
  vm.panic_on_oom=0
  fs.inotify.max_user_watches=89100
  fs.file-max=52706963
  fs.nr_open=52706963
  net.netfilter.nf_conntrack_max=2310720
  net.ipv4.tcp_keepalive_time = 600
  net.ipv4.tcp_keepalive_probes = 3
  net.ipv4.tcp_keepalive_intvl = 15
  net.ipv4.tcp_max_tw_buckets = 36000
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_max_orphans = 327680
  net.ipv4.tcp_orphan_retries = 3
  net.ipv4.tcp_syncookies = 1
  net.ipv4.tcp_max_syn_backlog = 16384
  net.ipv4.ip_conntrack_max = 65536
  net.ipv4.tcp_timestamps = 0
  net.core.somaxconn = 16384
  EOF

  sysctl --system

After configuring the kernel parameters on all nodes, reboot the servers and verify that the modules are still loaded after the reboot

  reboot
  lsmod | grep --color=auto -e ip_vs -e nf_conntrack

1.4 Installing the K8s Components and the Runtime

If the version being installed is below 1.24, either Docker or Containerd may be chosen; from 1.24 onward, choose Containerd as the Runtime.

Note: pick only one of the two Runtime subsections below.

1.4.1 Containerd as the Runtime

Install docker-ce-20.10 on all nodes:

  yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Docker itself does not need to be started; only Containerd needs to be configured and started.

First, configure the modules Containerd needs (all nodes):

  cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
  overlay
  br_netfilter
  EOF

Load the modules on all nodes:

  modprobe -- overlay
  modprobe -- br_netfilter

On all nodes, configure the kernel parameters Containerd needs:

  cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  EOF

Apply the kernel parameters on all nodes:

  sysctl --system

Generate Containerd's configuration file on all nodes:

  mkdir -p /etc/containerd
  containerd config default | tee /etc/containerd/config.toml

On all nodes, change Containerd's cgroup driver to systemd:

  vim /etc/containerd/config.toml

Find containerd.runtimes.runc.options and add SystemdCgroup = true (if the key already exists, modify it in place; adding a duplicate key causes an error).

On all nodes, also change the sandbox_image Pause image to an address matching your setup, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6. Both changes are sketched below.
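If you prefer not to edit the file by hand, both changes can be scripted. This is a sketch that assumes the config.toml generated above already contains a SystemdCgroup key (containerd 1.5+) and a sandbox_image line; on older versions, add the key manually under containerd.runtimes.runc.options:

  sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
  sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
  # Verify both edits
  grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml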

Start Containerd on all nodes and enable it at boot:

  systemctl daemon-reload
  systemctl enable --now containerd

Configure the runtime endpoint for the crictl client on all nodes:

  cat > /etc/crictl.yaml <<EOF
  runtime-endpoint: unix:///run/containerd/containerd.sock
  image-endpoint: unix:///run/containerd/containerd.sock
  timeout: 10
  debug: false
  EOF
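To verify that crictl can now talk to Containerd (output illustrative):

  crictl info | head -n 5      # should print runtime status JSON, not a connection error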

1.4.2 Docker as the Runtime (versions below 1.24)

If Docker is chosen as the Runtime, installation is simpler than with Containerd: it only needs to be installed and started.

Install docker-ce 20.10 on all nodes:

  yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Since newer kubelet versions recommend systemd, change Docker's CgroupDriver to systemd as well:

  mkdir /etc/docker
  cat > /etc/docker/daemon.json <<EOF
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF

Enable Docker at boot on all nodes:

  systemctl daemon-reload && systemctl enable --now docker

1.5 Installing the Kubernetes Components

First, check on the master01 node what the latest Kubernetes version is:

  yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest 1.23 release of kubeadm, kubelet, and kubectl on all nodes:

  yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y

If Containerd was chosen as the Runtime, change the kubelet configuration to use Containerd as its runtime:

  cat >/etc/sysconfig/kubelet<<EOF
  KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
  EOF

Note: do NOT run the command above if Containerd is not your Runtime!!!

Enable kubelet at boot on all nodes (the cluster has not been initialized yet, so there is no kubelet config file and kubelet cannot start; this can be ignored):

  systemctl daemon-reload
  systemctl enable --now kubelet

At this point kubelet will not start and will log errors; this does not affect anything!

1.6 Installing the High-Availability Components

(Note: if this is not a high-availability cluster, haproxy and keepalived do not need to be installed)

On public clouds, use the cloud's own load balancer, e.g. Alibaba Cloud's SLB or Tencent Cloud's ELB, in place of haproxy and keepalived, because most public clouds do not support keepalived. Also note that on Alibaba Cloud the kubectl client must not run on a master node, because Alibaba Cloud's SLB has a loopback problem: servers behind the SLB cannot access the SLB address themselves. Tencent Cloud has fixed this issue and is therefore recommended.

Install haproxy and keepalived via yum on all master nodes:

  yum install keepalived haproxy -y

Configure haproxy on all master nodes (see the haproxy documentation for details; the haproxy configuration is identical on every master node):

  [root@k8s-master01 etc]# mkdir /etc/haproxy
  [root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
  global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

  defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

  frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

  frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

  backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01 10.3.50.11:6443 check
    server k8s-master02 10.3.50.12:6443 check
    server k8s-master03 10.3.50.13:6443 check
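Before moving on, the configuration can be syntax-checked (a sketch; haproxy prints "Configuration file is valid" on success):

  haproxy -c -f /etc/haproxy/haproxy.cfg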

Configure keepalived on all master nodes. The configuration differs between nodes; note the differences:

  [root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf      # Mind each node's IP and network interface (the interface parameter)

Configuration on the master01 node:

  [root@k8s-master01 etc]# mkdir /etc/keepalived

  [root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
      script_user root
      enable_script_security
  }
  vrrp_script chk_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 5
      weight -5
      fall 2
      rise 1
  }
  vrrp_instance VI_1 {
      state MASTER
      interface ens33
      mcast_src_ip 10.3.50.11
      virtual_router_id 51
      priority 101
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          10.3.50.100
      }
      track_script {
          chk_apiserver
      }
  }

Configuration on the master02 node:

  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
      script_user root
      enable_script_security
  }
  vrrp_script chk_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 5
      weight -5
      fall 2
      rise 1
  }
  vrrp_instance VI_1 {
      state BACKUP
      interface ens33
      mcast_src_ip 10.3.50.12
      virtual_router_id 51
      priority 100
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          10.3.50.100
      }
      track_script {
          chk_apiserver
      }
  }

Configuration on the master03 node:

  ! Configuration File for keepalived
  global_defs {
      router_id LVS_DEVEL
      script_user root
      enable_script_security
  }
  vrrp_script chk_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 5
      weight -5
      fall 2
      rise 1
  }
  vrrp_instance VI_1 {
      state BACKUP
      interface ens33
      mcast_src_ip 10.3.50.13
      virtual_router_id 51
      priority 100
      advert_int 2
      authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
      }
      virtual_ipaddress {
          10.3.50.100
      }
      track_script {
          chk_apiserver
      }
  }

Configure the keepalived health-check script on all master nodes:

  [root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
  #!/bin/bash

  err=0
  for k in $(seq 1 3)
  do
      check_code=$(pgrep haproxy)
      if [[ $check_code == "" ]]; then
          err=$(expr $err + 1)
          sleep 1
          continue
      else
          err=0
          break
      fi
  done

  if [[ $err != "0" ]]; then
      echo "systemctl stop keepalived"
      /usr/bin/systemctl stop keepalived
      exit 1
  else
      exit 0
  fi

  chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived

  [root@k8s-master01 keepalived]# systemctl daemon-reload
  [root@k8s-master01 keepalived]# systemctl enable --now haproxy
  [root@k8s-master01 keepalived]# systemctl enable --now keepalived
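With haproxy now running, the health-check script can be exercised manually (a sketch; it should print 0, whereas it stops keepalived when haproxy is down):

  bash /etc/keepalived/check_apiserver.sh; echo $?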

Important: if keepalived and haproxy were installed, test that keepalived is working properly

  # Test the VIP
  [root@k8s-master01 ~]# ping 10.3.50.100 -c 4
  PING 10.3.50.100 (10.3.50.100) 56(84) bytes of data.
  64 bytes from 10.3.50.100: icmp_seq=1 ttl=64 time=0.464 ms
  64 bytes from 10.3.50.100: icmp_seq=2 ttl=64 time=0.063 ms
  64 bytes from 10.3.50.100: icmp_seq=3 ttl=64 time=0.062 ms
  64 bytes from 10.3.50.100: icmp_seq=4 ttl=64 time=0.063 ms

  --- 10.3.50.100 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3106ms
  rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
  [root@k8s-master01 ~]# telnet 10.3.50.100 16443
  Trying 10.3.50.100...
  Connected to 10.3.50.100.
  Escape character is '^]'.
  Connection closed by foreign host.

If the VIP cannot be pinged, or telnet does not show the ] character, the VIP is not usable; do not continue. Troubleshoot keepalived first, e.g. the firewall and SELinux, the haproxy and keepalived service status, the listening ports, and so on

On all nodes, check that the firewall is disabled and inactive: systemctl status firewalld

On all nodes, check the SELinux status; it must be disabled: getenforce

On the master nodes, check the haproxy and keepalived status: systemctl status keepalived haproxy

On the master nodes, check the listening ports: netstat -lntp

1.7 Cluster Initialization

Official initialization documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

The following operations are performed only on the master01 node

Create the kubeadm-config.yaml file on the master01 node as follows:

master01: (# Note: if this is not a high-availability cluster, change 10.3.50.100:16443 to master01's address, and change 16443 to the apiserver port, 6443 by default. Also make sure kubernetesVersion matches the kubeadm version on your servers: kubeadm version)

In the file below, the host subnet, the podSubnet, and the serviceSubnet must not overlap.

Run the following on master01:

  vim kubeadm-config.yaml

  apiVersion: kubeadm.k8s.io/v1beta2
  bootstrapTokens:
  - groups:
    - system:bootstrappers:kubeadm:default-node-token
    token: 7t2weq.bjbawausm0jaxury
    ttl: 24h0m0s
    usages:
    - signing
    - authentication
  kind: InitConfiguration
  localAPIEndpoint:
    advertiseAddress: 10.3.50.11
    bindPort: 6443
  nodeRegistration:
    # criSocket: /var/run/dockershim.sock      # Set this if Docker is the Runtime
    criSocket: /run/containerd/containerd.sock      # Set this if Containerd is the Runtime
    name: k8s-master01
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  ---
  apiServer:
    certSANs:
    - 10.3.50.100
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta2
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controlPlaneEndpoint: 10.3.50.100:16443
  controllerManager: {}
  dns:
    type: CoreDNS
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
  kind: ClusterConfiguration
  kubernetesVersion: v1.23.0      # Change this version to match kubeadm version
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.16.0.0/12
    serviceSubnet: 10.244.0.0/16
  scheduler: {}

Update the kubeadm config file

  kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Copy the new.yaml file to the other master nodes

  for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done

Pre-pull the images on all master nodes beforehand to save time during initialization (the other nodes do not need any configuration changes, not even the IP address):

  kubeadm config images pull --config /root/new.yaml

Enable kubelet at boot on all nodes

  systemctl enable --now kubelet      # (If it fails to start, ignore it; it will start once initialization succeeds)

Initialize the master01 node. Initialization generates the certificates and config files under /etc/kubernetes, after which the other master nodes can join master01:

  kubeadm init --config /root/new.yaml --upload-certs

If initialization fails, reset and initialize again with the command below (do not run it if nothing failed):

  kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

A successful initialization produces a token that is used when other nodes join, so record the token generated by the initialization:

  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of the control-plane node running the following command on each as root:

    kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

  Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
  As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
  "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

  Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

Configure environment variables on master01 for accessing the Kubernetes cluster:

  cat <<EOF >> /root/.bashrc
  export KUBECONFIG=/etc/kubernetes/admin.conf
  EOF
  source /root/.bashrc

View the node status:

With this initialization-based installation, all system components run as containers in the kube-system namespace; the Pod status can now be viewed:
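For example (a sketch; names and counts will vary):

  kubectl get node
  kubectl get po -n kube-system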

1.8 Highly Available Masters

Note: the steps below are only needed if the token produced by the init command above has expired; if it has not expired, skip them and run the join command directly

  # Generate a new token after the old one expires
  kubeadm token create --print-join-command

  # Masters also need a new --certificate-key
  kubeadm init phase upload-certs --upload-certs

If the token has not expired, just run the join command

Join the other masters to the cluster; run this on master02 and master03 respectively

  kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
  --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
  --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

View the current status:

1.9 Worker Node Configuration

Worker nodes host the company's business applications. In production, it is not recommended to run anything other than system components on the master nodes; in test environments, masters may run Pods to save resources.

  kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
  --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

After all nodes have joined, check the cluster status
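A quick check (a sketch). The nodes will typically report NotReady at this point because no CNI plugin is installed yet; that is resolved by installing Calico in the next section:

  kubectl get node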

2.0 Installing the Calico Component

The following steps are executed only on master01

  cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/

Modify the Pod CIDR:

  POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
  sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
  kubectl apply -f calico.yaml

Check the container and node status:
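For example (a sketch; the calico Pods should reach Running and the nodes should turn Ready):

  kubectl get po -n kube-system -owide
  kubectl get node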

2.1 Deploying Metrics Server

In recent Kubernetes versions, system resource metrics are collected by Metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

Copy front-proxy-ca.crt from the master01 node to all worker nodes

  scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node(repeat for the remaining nodes):/etc/kubernetes/pki/front-proxy-ca.crt

Install metrics server

  cd /root/k8s-ha-install/kubeadm-metrics-server

  # kubectl create -f comp.yaml
  serviceaccount/metrics-server created
  clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  service/metrics-server created
  deployment.apps/metrics-server created
  apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status

  kubectl get po -n kube-system -l k8s-app=metrics-server

Once it shows 1/1 Running:

  # kubectl top node
  NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
  k8s-master01   153m         3%     1701Mi          44%
  k8s-master02   125m         3%     1693Mi          44%
  k8s-master03   129m         3%     1590Mi          41%
  k8s-node01     73m          1%     989Mi           25%
  k8s-node02     64m          1%     950Mi           24%
  # kubectl top po -A
  NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
  kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi
  kube-system   calico-node-6gqpb                          21m          85Mi
  kube-system   calico-node-bmvjt                          29m          76Mi
  kube-system   calico-node-hdp9c                          15m          82Mi
  kube-system   calico-node-wwrfv                          23m          86Mi
  kube-system   calico-node-zzv88                          22m          84Mi
  kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi
  kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi
  kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi
  kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi
  kube-system   etcd-k8s-master01                          24m          96Mi
  kube-system   etcd-k8s-master02                          20m          91Mi
  kube-system   etcd-k8s-master03                          21m          92Mi
  kube-system   kube-apiserver-k8s-master01                41m          502Mi
  kube-system   kube-apiserver-k8s-master02                35m          476Mi
  kube-system   kube-apiserver-k8s-master03                71m          480Mi
  kube-system   kube-controller-manager-k8s-master01       15m          65Mi
  kube-system   kube-controller-manager-k8s-master02       1m           26Mi
  kube-system   kube-controller-manager-k8s-master03       2m           27Mi
  kube-system   kube-proxy-8lt45                           1m           18Mi
  kube-system   kube-proxy-d6jfh                           1m           18Mi
  kube-system   kube-proxy-hfnvz                           1m           19Mi
  kube-system   kube-proxy-nsms8                           1m           18Mi
  kube-system   kube-proxy-xmlhq                           3m           21Mi
  kube-system   kube-scheduler-k8s-master01                2m           26Mi
  kube-system   kube-scheduler-k8s-master02                2m           24Mi
  kube-system   kube-scheduler-k8s-master03                2m           24Mi
  kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi

2.2 Deploying the Dashboard

The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and execute commands inside containers.

2.2.1 Installing the specified dashboard version

  cd /root/k8s-ha-install/dashboard/

  [root@k8s-master01 dashboard]# kubectl create -f .
  serviceaccount/admin-user created
  clusterrolebinding.rbac.authorization.k8s.io/admin-user created
  namespace/kubernetes-dashboard created
  serviceaccount/kubernetes-dashboard created
  service/kubernetes-dashboard created
  secret/kubernetes-dashboard-certs created
  secret/kubernetes-dashboard-csrf created
  secret/kubernetes-dashboard-key-holder created
  configmap/kubernetes-dashboard-settings created
  role.rbac.authorization.k8s.io/kubernetes-dashboard created
  clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
  rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  deployment.apps/kubernetes-dashboard created
  service/dashboard-metrics-scraper created
  deployment.apps/dashboard-metrics-scraper created

2.2.2 Installing the latest version

Official GitHub address:

https://github.com/kubernetes/dashboard

The latest dashboard release can be found on the official GitHub page

  kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

Replace 2.0.3 with whatever the current version number is

  vim admin.yaml

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: admin-user
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: admin-user
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system

  kubectl apply -f admin.yaml -n kube-system

2.2.3 Logging in to the dashboard

Add the following launch parameters to the Google Chrome startup configuration to work around being unable to access the Dashboard (see Figure 1-1):

  --test-type --ignore-certificate-errors

Change the dashboard Service to NodePort:

  kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change the type from ClusterIP to NodePort (skip this step if it is already NodePort); an equivalent non-interactive command is sketched below:
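Equivalently, the change can be made without an interactive editor (a sketch):

  kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'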

Check the port number:

  kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Using that port, the dashboard can be reached via the IP of any host running kube-proxy plus the port:

Access the Dashboard:

https://10.3.50.11:18282 (replace 18282 with your own port), then choose Token as the login method (see Figure 1-2)

View the token value:

  [root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
  Name:         admin-user-token-r4vcp
  Namespace:    kube-system
  Labels:       <none>
  Annotations:  kubernetes.io/service-account.name: admin-user
                kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

  Type:  kubernetes.io/service-account-token

  Data
  ====
  ca.crt:     1025 bytes
  namespace:  11 bytes
  token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

After entering the token, click Sign in to access the Dashboard (see Figure 1-3):

2.2.4 [Must Read] Required Configuration Changes

Change kube-proxy to ipvs mode. The ipvs configuration was commented out during cluster initialization, so it must be changed manually:

Execute on the master01 node

  kubectl edit cm kube-proxy -n kube-system
  mode: ipvs

Roll the kube-proxy Pods to pick up the change:

  kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

Verify the kube-proxy mode

  [root@k8s-master01 1.1.1]# curl 127.0.0.1:10249/proxyMode
  ipvs
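The IPVS rules that kube-proxy programs can also be inspected directly (a sketch; one virtual server per Service should be listed):

  ipvsadm -ln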

2.3 [Must Read] Notes

Note: in a kubeadm-installed cluster, certificates are valid for one year by default. On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers; they can be seen with kubectl get po -n kube-system.

Unlike a binary installation, the kubelet configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; after modifying them, the kubelet process must be restarted.
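For example, after editing either file:

  systemctl daemon-reload && systemctl restart kubelet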

The configuration files of the other components are in the /etc/kubernetes/manifests directory, e.g. kube-apiserver.yaml. When such a yaml file is changed, kubelet automatically reloads the configuration, i.e. restarts the Pod (do not recreate the file manually). kube-proxy's configuration lives in a configmap in the kube-system namespace and can be edited with

  kubectl edit cm kube-proxy -n kube-system

After the change, kube-proxy can be restarted with a patch

  kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

After a kubeadm installation, the master nodes do not allow Pods to be scheduled by default; this can be opened up as follows:

View the Taints:

  [root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
  Taints: node-role.kubernetes.io/master:NoSchedule
  Taints: node-role.kubernetes.io/master:NoSchedule
  Taints: node-role.kubernetes.io/master:NoSchedule

Remove the Taint:

  [root@k8s-master01 ~]# kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
  node/k8s-master01 untainted
  node/k8s-master02 untainted
  node/k8s-master03 untainted
  [root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
  Taints: <none>
  Taints: <none>
  Taints: <none>
