Binary installation of Kubernetes v1.25.4 with IPv4/IPv6 dual stack

https://github.com/cby-chen/Kubernetes — maintaining this open-source project takes effort; a star on the repository is appreciated.

Introduction

Highly available binary installation and deployment of Kubernetes (k8s), with IPv4+IPv6 dual-stack support.

I use IPv6 so the cluster can be reached over the public Internet, which is why I configure static IPv6 addresses.

If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.

Skipping the IPv6 addresses does not affect the rest of the procedure; the cluster will still be IPv6-capable, leaving room for later expansion.

If you do not want IPv6, just leave IPv6 off the network interfaces. Do not delete or modify the IPv6-related settings elsewhere, or things will break.

It is strongly recommended to read the documentation on GitHub!

The GitHub copy is updated when problems are found, and documentation for new versions is published there first whenever possible.

Manual installation project: https://github.com/cby-chen/Kubernetes

1. Environment

Hostname   IP Address     Role         Software
Master01   192.168.8.61   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master02   192.168.8.62   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master03   192.168.8.63   master node  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Node01     192.168.8.64   worker node  kubelet, kube-proxy, nfs-client, nginx
Node02     192.168.8.65   worker node  kubelet, kube-proxy, nfs-client, nginx
           192.168.8.66   VIP
Software                                                                        Version
kernel                                                                          6.0.11
OS                                                                              CentOS 8 (v8 / v7 / Ubuntu)
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy    v1.25.4
etcd                                                                            v3.5.6
containerd                                                                      v1.6.10
docker                                                                          v20.10.21
cfssl                                                                           v1.6.3
cni                                                                             v1.1.1
crictl                                                                          v1.25.0
haproxy                                                                         v1.8.27
keepalived                                                                      v2.1.5

Network segments

Physical hosts: 192.168.8.0/24

service: 10.96.0.0/12

pod: 172.16.0.0/12

The installation packages are already bundled here: https://github.com/cby-chen/Kubernetes/releases/download/v1.25.0/kubernetes-v1.25.0.tar

1.1. Basic system environment configuration for k8s

1.2. Configure IP addresses

  1. ssh root@192.168.8.157 "nmcli con mod ens33 ipv4.addresses 192.168.8.61/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33"
  2. ssh root@192.168.8.158 "nmcli con mod ens33 ipv4.addresses 192.168.8.62/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33"
  3. ssh root@192.168.8.160 "nmcli con mod ens33 ipv4.addresses 192.168.8.63/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33"
  4. ssh root@192.168.8.161 "nmcli con mod ens33 ipv4.addresses 192.168.8.64/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33"
  5. ssh root@192.168.8.162 "nmcli con mod ens33 ipv4.addresses 192.168.8.65/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33"
  6. # If there is no IPv6, simply skip this part
  7. ssh root@192.168.8.61 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33"
  8. ssh root@192.168.8.62 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33"
  9. ssh root@192.168.8.63 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33"
  10. ssh root@192.168.8.64 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33"
  11. ssh root@192.168.8.65 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33"
  12. # Check the NIC configuration
  13. [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 
  14. TYPE=Ethernet
  15. PROXY_METHOD=none
  16. BROWSER_ONLY=no
  17. BOOTPROTO=none
  18. DEFROUTE=yes
  19. IPV4_FAILURE_FATAL=no
  20. IPV6INIT=yes
  21. IPV6_AUTOCONF=no
  22. IPV6_DEFROUTE=yes
  23. IPV6_FAILURE_FATAL=no
  24. IPV6_ADDR_GEN_MODE=stable-privacy
  25. NAME=ens33
  26. UUID=424fd260-c480-4899-97e6-6fc9722031e8
  27. DEVICE=ens33
  28. ONBOOT=yes
  29. IPADDR=192.168.8.61
  30. PREFIX=24
  31. GATEWAY=192.168.8.1
  32. DNS1=8.8.8.8
  33. IPV6ADDR=fc00:43f4:1eea:1::10/128
  34. IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
  35. DNS2=2400:3200::1
  36. [root@localhost ~]#
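
To confirm the addresses took effect, a quick check can be run on each host (a minimal sketch, assuming the interface name ens33 and the gateway addresses used above):

# Show the IPv4/IPv6 addresses assigned to ens33
ip -4 addr show ens33
ip -6 addr show ens33
# Confirm the default gateways are reachable
ping -c 2 192.168.8.1
ping6 -c 2 fc00:43f4:1eea:1::1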

1.3. Set hostnames

  1. hostnamectl set-hostname k8s-master01
  2. hostnamectl set-hostname k8s-master02
  3. hostnamectl set-hostname k8s-master03
  4. hostnamectl set-hostname k8s-node01
  5. hostnamectl set-hostname k8s-node02

1.4. Configure package repositories

  1. # For Ubuntu
  2. sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
  3. # For CentOS 7
  4. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  5.          -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
  6.          -i.bak \
  7.          /etc/yum.repos.d/CentOS-*.repo
  8. # For CentOS 8
  9. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  10.          -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
  11.          -i.bak \
  12.          /etc/yum.repos.d/CentOS-*.repo
  13. # For a private repository
  14. sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.5. Install essential tools

  1. # For Ubuntu
  2. apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl
  3. # For CentOS 7
  4. yum update -y && yum -y install  wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl
  5. # For CentOS 8
  6. yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Optionally download the required tools

  1. 1.Download the Kubernetes 1.25.x binary package
  2. GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md
  3. wget https://dl.k8s.io/v1.25.4/kubernetes-server-linux-amd64.tar.gz
  4. 2.Download the etcdctl binary package
  5. GitHub binary download page: https://github.com/etcd-io/etcd/releases
  6. wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
  7. 3.Download the docker binary package
  8. Binary download page: https://download.docker.com/linux/static/stable/x86_64/
  9. wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz
  10. 4.Download cri-dockerd
  11. Binary download page: https://github.com/Mirantis/cri-dockerd/releases/
  12. wget  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
  13. 5.For containerd, download the binary package that bundles the CNI plugins.
  14. GitHub download page: https://github.com/containerd/containerd/releases
  15. wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
  16. 6.Download the cfssl binaries
  17. GitHub binary download page: https://github.com/cloudflare/cfssl/releases
  18. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
  19. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
  20. wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64
  21. 7.Download the CNI plugins
  22. GitHub download page: https://github.com/containernetworking/plugins/releases
  23. wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  24. 8.Download the crictl client binary
  25. GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
  26. wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz

1.7. Disable the firewall

  1. # Skip on Ubuntu; run on CentOS
  2. systemctl disable --now firewalld

1.8. Disable SELinux

  1. # Skip on Ubuntu; run on CentOS
  2. setenforce 0
  3. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

  1. sed -ri 's/.*swap.*/#&/' /etc/fstab
  2. swapoff -a && sysctl -w vm.swappiness=0
  3. cat /etc/fstab
  4. # /dev/mapper/centos-swap swap                    swap    defaults        0 0

1.10. Network configuration (choose one of the two methods)

  1. # Skip on Ubuntu; run on CentOS
  2. # Method 1
  3. # systemctl disable --now NetworkManager
  4. # systemctl start network && systemctl enable network
  5. # Method 2
  6. cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
  7. [keyfile]
  8. unmanaged-devices=interface-name:cali*;interface-name:tunl*
  9. EOF
  10. systemctl restart NetworkManager

1.11. Time synchronization

  1. # Server side
  2. # apt install chrony -y
  3. yum install chrony -y
  4. cat > /etc/chrony.conf << EOF 
  5. pool ntp.aliyun.com iburst
  6. driftfile /var/lib/chrony/drift
  7. makestep 1.0 3
  8. rtcsync
  9. allow 192.168.8.0/24
  10. local stratum 10
  11. keyfile /etc/chrony.keys
  12. leapsectz right/UTC
  13. logdir /var/log/chrony
  14. EOF
  15. systemctl restart chronyd ; systemctl enable chronyd
  16. # Client side
  17. # apt install chrony -y
  18. yum install chrony -y
  19. cat > /etc/chrony.conf << EOF 
  20. pool 192.168.8.61 iburst
  21. driftfile /var/lib/chrony/drift
  22. makestep 1.0 3
  23. rtcsync
  24. keyfile /etc/chrony.keys
  25. leapsectz right/UTC
  26. logdir /var/log/chrony
  27. EOF
  28. systemctl restart chronyd ; systemctl enable chronyd
  29. # Verify from the client
  30. chronyc sources -v

1.12. Configure ulimit

  1. ulimit -SHn 65535
  2. cat >> /etc/security/limits.conf <<EOF
  3. * soft nofile 655360
  4. * hard nofile 655360
  5. * soft nproc 655350
  6. * hard nproc 655350
  7. * soft memlock unlimited
  8. * hard memlock unlimited
  9. EOF

1.13. Configure passwordless SSH login

  1. # apt install -y sshpass
  2. yum install -y sshpass
  3. ssh-keygen -f /root/.ssh/id_rsa -P ''
  4. export IP="192.168.8.61 192.168.8.62 192.168.8.63 192.168.8.64 192.168.8.65"
  5. export SSHPASS=123123
  6. for HOST in $IP;do
  7.      sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
  8. done
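
To verify that passwordless login works, a small loop can be run from the same shell (a minimal check, reusing the $IP variable defined above):

for HOST in $IP; do
     ssh -o BatchMode=yes $HOST hostname
done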

1.14. Add the ELRepo repository

  1. # Skip on Ubuntu; run on CentOS
  2. # Configure the repository for RHEL-8 or CentOS-8
  3. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
  4. sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
  5. sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo
  6. # Install ELRepo for RHEL-7, SL-7, or CentOS-7
  7. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
  8. sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
  9. sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo
  10. # List available packages
  11. yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.15. Upgrade the kernel to 4.18 or later

  1. # Skip on Ubuntu; run on CentOS
  2. # Install the latest kernel
  3. # kernel-ml (mainline) is used here; use kernel-lt instead if you want the long-term maintenance branch
  4. yum --enablerepo=elrepo-kernel install kernel-ml
  5. # Check which kernels are installed
  6. rpm -qa | grep kernel
  7. # Check the default kernel
  8. grubby --default-kernel
  9. # If it is not the newest one, set it with:
  10. grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)
  11. # Reboot for the change to take effect
  12. reboot
  13. # Combined one-liner for CentOS 8:
  14. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --default-kernel ; reboot 
  15. # Combined one-liner for CentOS 7:
  16. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot

1.16. Install ipvsadm

  1. # For Ubuntu
  2. # apt install ipvsadm ipset sysstat conntrack -y
  3. # For CentOS
  4. yum install ipvsadm ipset sysstat conntrack libseccomp -y
  5. cat >> /etc/modules-load.d/ipvs.conf <<EOF 
  6. ip_vs
  7. ip_vs_rr
  8. ip_vs_wrr
  9. ip_vs_sh
  10. nf_conntrack
  11. ip_tables
  12. ip_set
  13. xt_set
  14. ipt_set
  15. ipt_rpfilter
  16. ipt_REJECT
  17. ipip
  18. EOF
  19. systemctl restart systemd-modules-load.service
  20. lsmod | grep -e ip_vs -e nf_conntrack
  21. ip_vs_sh               16384  0
  22. ip_vs_wrr              16384  0
  23. ip_vs_rr               16384  0
  24. ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  25. nf_conntrack          176128  1 ip_vs
  26. nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
  27. nf_defrag_ipv4         16384  1 nf_conntrack
  28. libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.ipv4.ip_forward = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. fs.may_detach_mounts = 1
  5. vm.overcommit_memory=1
  6. vm.panic_on_oom=0
  7. fs.inotify.max_user_watches=89100
  8. fs.file-max=52706963
  9. fs.nr_open=52706963
  10. net.netfilter.nf_conntrack_max=2310720
  11. net.ipv4.tcp_keepalive_time = 600
  12. net.ipv4.tcp_keepalive_probes = 3
  13. net.ipv4.tcp_keepalive_intvl =15
  14. net.ipv4.tcp_max_tw_buckets = 36000
  15. net.ipv4.tcp_tw_reuse = 1
  16. net.ipv4.tcp_max_orphans = 327680
  17. net.ipv4.tcp_orphan_retries = 3
  18. net.ipv4.tcp_syncookies = 1
  19. net.ipv4.tcp_max_syn_backlog = 16384
  20. net.ipv4.ip_conntrack_max = 65536
  21. net.ipv4.tcp_max_syn_backlog = 16384
  22. net.ipv4.tcp_timestamps = 0
  23. net.core.somaxconn = 16384
  24. net.ipv6.conf.all.disable_ipv6 = 0
  25. net.ipv6.conf.default.disable_ipv6 = 0
  26. net.ipv6.conf.lo.disable_ipv6 = 0
  27. net.ipv6.conf.all.forwarding = 1
  28. EOF
  29. sysctl --system
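
After `sysctl --system`, a few of the values can be spot-checked to confirm they were applied (a minimal check; the bridge parameters only report correctly once the br_netfilter module is loaded, which happens in section 2.1.1):

sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
# Expected:
# net.ipv4.ip_forward = 1
# net.ipv6.conf.all.forwarding = 1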

1.18. Configure local hosts resolution on all nodes

  1. cat > /etc/hosts <<EOF
  2. 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.8.61 k8s-master01
  5. 192.168.8.62 k8s-master02
  6. 192.168.8.63 k8s-master03
  7. 192.168.8.64 k8s-node01
  8. 192.168.8.65 k8s-node02
  9. 192.168.8.66 lb-vip
  10. EOF
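
A quick way to confirm that /etc/hosts resolution works on a node (a simple optional check using getent):

for H in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02 lb-vip; do
    getent hosts $H
done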

2. Install the basic k8s components

Note: 2.1 and 2.2 are alternatives; choose one of them.

2.1. Install containerd as the runtime (recommended)

  1. # wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  2. cd kubernetes-v1.25.4/cby/
  3. # Create the directories required by the CNI plugins
  4. mkdir -p /etc/cni/net.d /opt/cni/bin
  5. # Unpack the CNI plugin binaries
  6. tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/
  7. # wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz
  8. # Unpack it
  9. tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /
  10. # Create the systemd service file
  11. cat > /etc/systemd/system/containerd.service <<EOF
  12. [Unit]
  13. Description=containerd container runtime
  14. Documentation=https://containerd.io
  15. After=network.target local-fs.target
  16. [Service]
  17. ExecStartPre=-/sbin/modprobe overlay
  18. ExecStart=/usr/local/bin/containerd
  19. Type=notify
  20. Delegate=yes
  21. KillMode=process
  22. Restart=always
  23. RestartSec=5
  24. LimitNPROC=infinity
  25. LimitCORE=infinity
  26. LimitNOFILE=infinity
  27. TasksMax=infinity
  28. OOMScoreAdjust=-999
  29. [Install]
  30. WantedBy=multi-user.target
  31. EOF

2.1.1 Configure the kernel modules required by containerd

  1. cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
  2. overlay
  3. br_netfilter
  4. EOF

2.1.2 Load the modules

  1. systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by containerd

  1. cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
  2. net.bridge.bridge-nf-call-iptables  = 1
  3. net.ipv4.ip_forward                 = 1
  4. net.bridge.bridge-nf-call-ip6tables = 1
  5. EOF
  6. # Apply the kernel parameters
  7. sysctl --system

2.1.4 Create the containerd configuration file

  1. # Generate the default configuration file
  2. mkdir -p /etc/containerd
  3. containerd config default | tee /etc/containerd/config.toml
  4. # Modify the containerd configuration file
  5. sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
  6. cat /etc/containerd/config.toml | grep SystemdCgroup
  7. sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
  8. cat /etc/containerd/config.toml | grep sandbox_image
  9. sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
  10. cat /etc/containerd/config.toml | grep certs.d
  11. mkdir /etc/containerd/certs.d/docker.io -pv
  12. cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
  13. server = "https://docker.io"
  14. [host."https://hub-mirror.c.163.com"]
  15.   capabilities = ["pull", "resolve"]
  16. EOF

2.1.5 Start containerd and enable it at boot

  1. systemctl daemon-reload
  2. systemctl enable --now containerd
  3. systemctl restart containerd

2.1.6 Configure the runtime endpoint for the crictl client

  1. # wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
  2. # Unpack it
  3. tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
  4. # Generate the configuration file
  5. cat > /etc/crictl.yaml <<EOF
  6. runtime-endpoint: unix:///run/containerd/containerd.sock
  7. image-endpoint: unix:///run/containerd/containerd.sock
  8. timeout: 10
  9. debug: false
  10. EOF
  11. # Test
  12. systemctl restart  containerd
  13. crictl info
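
As an optional sanity check (assuming the node has outbound access to the registry mirror configured in 2.1.4), pull a small test image through containerd:

crictl pull docker.io/library/busybox:latest
crictl images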

2.2 Install docker as the runtime (not recommended)

2.2.1 Install docker

  1. # Binary download page: https://download.docker.com/linux/static/stable/x86_64/
  2. # wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz
  3. # Unpack it
  4. tar xf docker-*.tgz
  5. # Copy the binaries
  6. cp docker/* /usr/bin/
  7. # Create the containerd service file and start it
  8. cat >/etc/systemd/system/containerd.service <<EOF
  9. [Unit]
  10. Description=containerd container runtime
  11. Documentation=https://containerd.io
  12. After=network.target local-fs.target
  13. [Service]
  14. ExecStartPre=-/sbin/modprobe overlay
  15. ExecStart=/usr/bin/containerd
  16. Type=notify
  17. Delegate=yes
  18. KillMode=process
  19. Restart=always
  20. RestartSec=5
  21. LimitNPROC=infinity
  22. LimitCORE=infinity
  23. LimitNOFILE=1048576
  24. TasksMax=infinity
  25. OOMScoreAdjust=-999
  26. [Install]
  27. WantedBy=multi-user.target
  28. EOF
  29. systemctl enable --now containerd.service
  30. # Prepare the docker service file
  31. cat > /etc/systemd/system/docker.service <<EOF
  32. [Unit]
  33. Description=Docker Application Container Engine
  34. Documentation=https://docs.docker.com
  35. After=network-online.target firewalld.service containerd.service
  36. Wants=network-online.target
  37. Requires=docker.socket containerd.service
  38. [Service]
  39. Type=notify
  40. ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  41. ExecReload=/bin/kill -s HUP $MAINPID
  42. TimeoutSec=0
  43. RestartSec=2
  44. Restart=always
  45. StartLimitBurst=3
  46. StartLimitInterval=60s
  47. LimitNOFILE=infinity
  48. LimitNPROC=infinity
  49. LimitCORE=infinity
  50. TasksMax=infinity
  51. Delegate=yes
  52. KillMode=process
  53. OOMScoreAdjust=-500
  54. [Install]
  55. WantedBy=multi-user.target
  56. EOF
  57. # Prepare the docker socket file
  58. cat > /etc/systemd/system/docker.socket <<EOF
  59. [Unit]
  60. Description=Docker Socket for the API
  61. [Socket]
  62. ListenStream=/var/run/docker.sock
  63. SocketMode=0660
  64. SocketUser=root
  65. SocketGroup=docker
  66. [Install]
  67. WantedBy=sockets.target
  68. EOF
  69. # Create the docker group
  70. groupadd docker
  71. # Start docker
  72. systemctl enable --now docker.socket  && systemctl enable --now docker.service
  73. # Verify
  74. docker info
  75. cat >/etc/docker/daemon.json <<EOF
  76. {
  77.   "exec-opts": ["native.cgroupdriver=systemd"],
  78.   "registry-mirrors": [
  79.     "https://docker.mirrors.ustc.edu.cn",
  80.     "http://hub-mirror.c.163.com"
  81.   ],
  82.   "max-concurrent-downloads": 10,
  83.   "log-driver": "json-file",
  84.   "log-level": "warn",
  85.   "log-opts": {
  86.     "max-size": "10m",
  87.     "max-file": "3"
  88.     },
  89.   "data-root": "/var/lib/docker"
  90. }
  91. EOF
  92. systemctl restart docker

2.2.2 Install cri-dockerd

  1. # Kubernetes 1.24 and later no longer support docker directly, so cri-dockerd is installed
  2. # Download cri-dockerd
  3. # wget  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz
  4. # Unpack cri-dockerd
  5. tar xvf cri-dockerd-*.amd64.tgz
  6. cp cri-dockerd/cri-dockerd  /usr/bin/
  7. # Write the service unit file
  8. cat >  /usr/lib/systemd/system/cri-docker.service <<EOF
  9. [Unit]
  10. Description=CRI Interface for Docker Application Container Engine
  11. Documentation=https://docs.mirantis.com
  12. After=network-online.target firewalld.service docker.service
  13. Wants=network-online.target
  14. Requires=cri-docker.socket
  15. [Service]
  16. Type=notify
  17. ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
  18. ExecReload=/bin/kill -s HUP $MAINPID
  19. TimeoutSec=0
  20. RestartSec=2
  21. Restart=always
  22. StartLimitBurst=3
  23. StartLimitInterval=60s
  24. LimitNOFILE=infinity
  25. LimitNPROC=infinity
  26. LimitCORE=infinity
  27. TasksMax=infinity
  28. Delegate=yes
  29. KillMode=process
  30. [Install]
  31. WantedBy=multi-user.target
  32. EOF
  33. # Write the socket unit file
  34. cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
  35. [Unit]
  36. Description=CRI Docker Socket for the API
  37. PartOf=cri-docker.service
  38. [Socket]
  39. ListenStream=%t/cri-dockerd.sock
  40. SocketMode=0660
  41. SocketUser=root
  42. SocketGroup=docker
  43. [Install]
  44. WantedBy=sockets.target
  45. EOF
  46. # Start cri-dockerd
  47. systemctl daemon-reload ; systemctl enable cri-docker --now
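
A quick check that the CRI socket is up (a minimal sketch; the socket path comes from the unit file above):

systemctl is-active cri-docker.socket cri-docker.service
ls -l /run/cri-dockerd.sock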

2.3. Download and install k8s and etcd (master01 only)

2.3.1 Unpack the k8s installation package

  1. # Download the installation packages
  2. # wget https://dl.k8s.io/v1.25.4/kubernetes-server-linux-amd64.tar.gz
  3. # wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
  4. # Unpack the k8s files
  5. cd cby
  6. tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
  7. # Unpack the etcd files
  8. tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/
  9. # Check the contents of /usr/local/bin
  10. ls /usr/local/bin/
  11. containerd  containerd-shim-runc-v1  containerd-stress  critest  ctr   etcdctl  kube-controller-manager  kubelet  kube-scheduler  containerd-shim  containerd-shim-runc-v2  crictl  ctd-decoder  etcd  kube-apiserver  kubectl  kube-proxy

2.3.2 Check versions

  1. [root@k8s-master01 ~]#  kubelet --version
  2. Kubernetes v1.25.4
  3. [root@k8s-master01 ~]# etcdctl version
  4. etcdctl version: 3.5.6
  5. API version: 3.5
  6. [root@k8s-master01 ~]#

2.3.3 Copy the components to the other k8s nodes

  1. Master='k8s-master02 k8s-master03'
  2. Work='k8s-node01 k8s-node02'
  3. for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
  4. for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
  5. mkdir -p /opt/cni/bin

2.4. Create the certificate-related files

  1. mkdir pki
  2. cd pki
  3. cat > admin-csr.json << EOF 
  4. {
  5.   "CN": "admin",
  6.   "key": {
  7.     "algo": "rsa",
  8.     "size": 2048
  9.   },
  10.   "names": [
  11.     {
  12.       "C": "CN",
  13.       "ST": "Beijing",
  14.       "L": "Beijing",
  15.       "O": "system:masters",
  16.       "OU": "Kubernetes-manual"
  17.     }
  18.   ]
  19. }
  20. EOF
  21. cat > ca-config.json << EOF 
  22. {
  23.   "signing": {
  24.     "default": {
  25.       "expiry": "876000h"
  26.     },
  27.     "profiles": {
  28.       "kubernetes": {
  29.         "usages": [
  30.             "signing",
  31.             "key encipherment",
  32.             "server auth",
  33.             "client auth"
  34.         ],
  35.         "expiry": "876000h"
  36.       }
  37.     }
  38.   }
  39. }
  40. EOF
  41. cat > etcd-ca-csr.json  << EOF 
  42. {
  43.   "CN": "etcd",
  44.   "key": {
  45.     "algo": "rsa",
  46.     "size": 2048
  47.   },
  48.   "names": [
  49.     {
  50.       "C": "CN",
  51.       "ST": "Beijing",
  52.       "L": "Beijing",
  53.       "O": "etcd",
  54.       "OU": "Etcd Security"
  55.     }
  56.   ],
  57.   "ca": {
  58.     "expiry": "876000h"
  59.   }
  60. }
  61. EOF
  62. cat > front-proxy-ca-csr.json  << EOF 
  63. {
  64.   "CN": "kubernetes",
  65.   "key": {
  66.      "algo": "rsa",
  67.      "size": 2048
  68.   },
  69.   "ca": {
  70.     "expiry": "876000h"
  71.   }
  72. }
  73. EOF
  74. cat > kubelet-csr.json  << EOF 
  75. {
  76.   "CN": "system:node:\$NODE",
  77.   "key": {
  78.     "algo": "rsa",
  79.     "size": 2048
  80.   },
  81.   "names": [
  82.     {
  83.       "C": "CN",
  84.       "L": "Beijing",
  85.       "ST": "Beijing",
  86.       "O": "system:nodes",
  87.       "OU": "Kubernetes-manual"
  88.     }
  89.   ]
  90. }
  91. EOF
  92. cat > manager-csr.json << EOF 
  93. {
  94.   "CN": "system:kube-controller-manager",
  95.   "key": {
  96.     "algo": "rsa",
  97.     "size": 2048
  98.   },
  99.   "names": [
  100.     {
  101.       "C": "CN",
  102.       "ST": "Beijing",
  103.       "L": "Beijing",
  104.       "O": "system:kube-controller-manager",
  105.       "OU": "Kubernetes-manual"
  106.     }
  107.   ]
  108. }
  109. EOF
  110. cat > apiserver-csr.json << EOF 
  111. {
  112.   "CN": "kube-apiserver",
  113.   "key": {
  114.     "algo": "rsa",
  115.     "size": 2048
  116.   },
  117.   "names": [
  118.     {
  119.       "C": "CN",
  120.       "ST": "Beijing",
  121.       "L": "Beijing",
  122.       "O": "Kubernetes",
  123.       "OU": "Kubernetes-manual"
  124.     }
  125.   ]
  126. }
  127. EOF
  128. cat > ca-csr.json   << EOF 
  129. {
  130.   "CN": "kubernetes",
  131.   "key": {
  132.     "algo": "rsa",
  133.     "size": 2048
  134.   },
  135.   "names": [
  136.     {
  137.       "C": "CN",
  138.       "ST": "Beijing",
  139.       "L": "Beijing",
  140.       "O": "Kubernetes",
  141.       "OU": "Kubernetes-manual"
  142.     }
  143.   ],
  144.   "ca": {
  145.     "expiry": "876000h"
  146.   }
  147. }
  148. EOF
  149. cat > etcd-csr.json << EOF 
  150. {
  151.   "CN": "etcd",
  152.   "key": {
  153.     "algo": "rsa",
  154.     "size": 2048
  155.   },
  156.   "names": [
  157.     {
  158.       "C": "CN",
  159.       "ST": "Beijing",
  160.       "L": "Beijing",
  161.       "O": "etcd",
  162.       "OU": "Etcd Security"
  163.     }
  164.   ]
  165. }
  166. EOF
  167. cat > front-proxy-client-csr.json  << EOF 
  168. {
  169.   "CN": "front-proxy-client",
  170.   "key": {
  171.      "algo": "rsa",
  172.      "size": 2048
  173.   }
  174. }
  175. EOF
  176. cat > kube-proxy-csr.json  << EOF 
  177. {
  178.   "CN": "system:kube-proxy",
  179.   "key": {
  180.     "algo": "rsa",
  181.     "size": 2048
  182.   },
  183.   "names": [
  184.     {
  185.       "C": "CN",
  186.       "ST": "Beijing",
  187.       "L": "Beijing",
  188.       "O": "system:kube-proxy",
  189.       "OU": "Kubernetes-manual"
  190.     }
  191.   ]
  192. }
  193. EOF
  194. cat > scheduler-csr.json << EOF 
  195. {
  196.   "CN": "system:kube-scheduler",
  197.   "key": {
  198.     "algo": "rsa",
  199.     "size": 2048
  200.   },
  201.   "names": [
  202.     {
  203.       "C": "CN",
  204.       "ST": "Beijing",
  205.       "L": "Beijing",
  206.       "O": "system:kube-scheduler",
  207.       "OU": "Kubernetes-manual"
  208.     }
  209.   ]
  210. }
  211. EOF
  212. cd ..
  213. mkdir bootstrap
  214. cd bootstrap
  215. cat > bootstrap.secret.yaml << EOF 
  216. apiVersion: v1
  217. kind: Secret
  218. metadata:
  219.   name: bootstrap-token-c8ad9c
  220.   namespace: kube-system
  221. type: bootstrap.kubernetes.io/token
  222. stringData:
  223.   description: "The default bootstrap token generated by 'kubelet '."
  224.   token-id: c8ad9c
  225.   token-secret: 2e4d610cf3e7426e
  226.   usage-bootstrap-authentication: "true"
  227.   usage-bootstrap-signing: "true"
  228.   auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
  229. ---
  230. apiVersion: rbac.authorization.k8s.io/v1
  231. kind: ClusterRoleBinding
  232. metadata:
  233.   name: kubelet-bootstrap
  234. roleRef:
  235.   apiGroup: rbac.authorization.k8s.io
  236.   kind: ClusterRole
  237.   name: system:node-bootstrapper
  238. subjects:
  239. - apiGroup: rbac.authorization.k8s.io
  240.   kind: Group
  241.   name: system:bootstrappers:default-node-token
  242. ---
  243. apiVersion: rbac.authorization.k8s.io/v1
  244. kind: ClusterRoleBinding
  245. metadata:
  246.   name: node-autoapprove-bootstrap
  247. roleRef:
  248.   apiGroup: rbac.authorization.k8s.io
  249.   kind: ClusterRole
  250.   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  251. subjects:
  252. - apiGroup: rbac.authorization.k8s.io
  253.   kind: Group
  254.   name: system:bootstrappers:default-node-token
  255. ---
  256. apiVersion: rbac.authorization.k8s.io/v1
  257. kind: ClusterRoleBinding
  258. metadata:
  259.   name: node-autoapprove-certificate-rotation
  260. roleRef:
  261.   apiGroup: rbac.authorization.k8s.io
  262.   kind: ClusterRole
  263.   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  264. subjects:
  265. - apiGroup: rbac.authorization.k8s.io
  266.   kind: Group
  267.   name: system:nodes
  268. ---
  269. apiVersion: rbac.authorization.k8s.io/v1
  270. kind: ClusterRole
  271. metadata:
  272.   annotations:
  273.     rbac.authorization.kubernetes.io/autoupdate: "true"
  274.   labels:
  275.     kubernetes.io/bootstrapping: rbac-defaults
  276.   name: system:kube-apiserver-to-kubelet
  277. rules:
  278.   - apiGroups:
  279.       - ""
  280.     resources:
  281.       - nodes/proxy
  282.       - nodes/stats
  283.       - nodes/log
  284.       - nodes/spec
  285.       - nodes/metrics
  286.     verbs:
  287.       - "*"
  288. ---
  289. apiVersion: rbac.authorization.k8s.io/v1
  290. kind: ClusterRoleBinding
  291. metadata:
  292.   name: system:kube-apiserver
  293.   namespace: ""
  294. roleRef:
  295.   apiGroup: rbac.authorization.k8s.io
  296.   kind: ClusterRole
  297.   name: system:kube-apiserver-to-kubelet
  298. subjects:
  299.   - apiGroup: rbac.authorization.k8s.io
  300.     kind: User
  301.     name: kube-apiserver
  302. EOF
  303. cd ..
  304. mkdir coredns
  305. cd coredns
  306. cat > coredns.yaml << EOF 
  307. apiVersion: v1
  308. kind: ServiceAccount
  309. metadata:
  310.   name: coredns
  311.   namespace: kube-system
  312. ---
  313. apiVersion: rbac.authorization.k8s.io/v1
  314. kind: ClusterRole
  315. metadata:
  316.   labels:
  317.     kubernetes.io/bootstrapping: rbac-defaults
  318.   name: system:coredns
  319. rules:
  320.   - apiGroups:
  321.     - ""
  322.     resources:
  323.     - endpoints
  324.     - services
  325.     - pods
  326.     - namespaces
  327.     verbs:
  328.     - list
  329.     - watch
  330.   - apiGroups:
  331.     - discovery.k8s.io
  332.     resources:
  333.     - endpointslices
  334.     verbs:
  335.     - list
  336.     - watch
  337. ---
  338. apiVersion: rbac.authorization.k8s.io/v1
  339. kind: ClusterRoleBinding
  340. metadata:
  341.   annotations:
  342.     rbac.authorization.kubernetes.io/autoupdate: "true"
  343.   labels:
  344.     kubernetes.io/bootstrapping: rbac-defaults
  345.   name: system:coredns
  346. roleRef:
  347.   apiGroup: rbac.authorization.k8s.io
  348.   kind: ClusterRole
  349.   name: system:coredns
  350. subjects:
  351. - kind: ServiceAccount
  352.   name: coredns
  353.   namespace: kube-system
  354. ---
  355. apiVersion: v1
  356. kind: ConfigMap
  357. metadata:
  358.   name: coredns
  359.   namespace: kube-system
  360. data:
  361.   Corefile: |
  362.     .:53 {
  363.         errors
  364.         health {
  365.           lameduck 5s
  366.         }
  367.         ready
  368.         kubernetes cluster.local in-addr.arpa ip6.arpa {
  369.           fallthrough in-addr.arpa ip6.arpa
  370.         }
  371.         prometheus :9153
  372.         forward . /etc/resolv.conf {
  373.           max_concurrent 1000
  374.         }
  375.         cache 30
  376.         loop
  377.         reload
  378.         loadbalance
  379.     }
  380. ---
  381. apiVersion: apps/v1
  382. kind: Deployment
  383. metadata:
  384.   name: coredns
  385.   namespace: kube-system
  386.   labels:
  387.     k8s-app: kube-dns
  388.     kubernetes.io/name: "CoreDNS"
  389. spec:
  390.   # replicas: not specified here:
  391.   # 1. Default is 1.
  392.   # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  393.   strategy:
  394.     type: RollingUpdate
  395.     rollingUpdate:
  396.       maxUnavailable: 1
  397.   selector:
  398.     matchLabels:
  399.       k8s-app: kube-dns
  400.   template:
  401.     metadata:
  402.       labels:
  403.         k8s-app: kube-dns
  404.     spec:
  405.       priorityClassName: system-cluster-critical
  406.       serviceAccountName: coredns
  407.       tolerations:
  408.         - key: "CriticalAddonsOnly"
  409.           operator: "Exists"
  410.       nodeSelector:
  411.         kubernetes.io/os: linux
  412.       affinity:
  413.          podAntiAffinity:
  414.            preferredDuringSchedulingIgnoredDuringExecution:
  415.            - weight: 100
  416.              podAffinityTerm:
  417.                labelSelector:
  418.                  matchExpressions:
  419.                    - key: k8s-app
  420.                      operator: In
  421.                      values: ["kube-dns"]
  422.                topologyKey: kubernetes.io/hostname
  423.       containers:
  424.       - name: coredns
  425.         image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 
  426.         imagePullPolicy: IfNotPresent
  427.         resources:
  428.           limits:
  429.             memory: 170Mi
  430.           requests:
  431.             cpu: 100m
  432.             memory: 70Mi
  433.         args: [ "-conf", "/etc/coredns/Corefile" ]
  434.         volumeMounts:
  435.         - name: config-volume
  436.           mountPath: /etc/coredns
  437.           readOnly: true
  438.         ports:
  439.         - containerPort: 53
  440.           name: dns
  441.           protocol: UDP
  442.         - containerPort: 53
  443.           name: dns-tcp
  444.           protocol: TCP
  445.         - containerPort: 9153
  446.           name: metrics
  447.           protocol: TCP
  448.         securityContext:
  449.           allowPrivilegeEscalation: false
  450.           capabilities:
  451.             add:
  452.             - NET_BIND_SERVICE
  453.             drop:
  454.             - all
  455.           readOnlyRootFilesystem: true
  456.         livenessProbe:
  457.           httpGet:
  458.             path: /health
  459.             port: 8080
  460.             scheme: HTTP
  461.           initialDelaySeconds: 60
  462.           timeoutSeconds: 5
  463.           successThreshold: 1
  464.           failureThreshold: 5
  465.         readinessProbe:
  466.           httpGet:
  467.             path: /ready
  468.             port: 8181
  469.             scheme: HTTP
  470.       dnsPolicy: Default
  471.       volumes:
  472.         - name: config-volume
  473.           configMap:
  474.             name: coredns
  475.             items:
  476.             - key: Corefile
  477.               path: Corefile
  478. ---
  479. apiVersion: v1
  480. kind: Service
  481. metadata:
  482.   name: kube-dns
  483.   namespace: kube-system
  484.   annotations:
  485.     prometheus.io/port: "9153"
  486.     prometheus.io/scrape: "true"
  487.   labels:
  488.     k8s-app: kube-dns
  489.     kubernetes.io/cluster-service: "true"
  490.     kubernetes.io/name: "CoreDNS"
  491. spec:
  492.   selector:
  493.     k8s-app: kube-dns
  494.   clusterIP: 10.96.0.10 
  495.   ports:
  496.   - name: dns
  497.     port: 53
  498.     protocol: UDP
  499.   - name: dns-tcp
  500.     port: 53
  501.     protocol: TCP
  502.   - name: metrics
  503.     port: 9153
  504.     protocol: TCP
  505. EOF
  506. cd ..
  507. mkdir metrics-server
  508. cd metrics-server
  509. cat > metrics-server.yaml << EOF 
  510. apiVersion: v1
  511. kind: ServiceAccount
  512. metadata:
  513.   labels:
  514.     k8s-app: metrics-server
  515.   name: metrics-server
  516.   namespace: kube-system
  517. ---
  518. apiVersion: rbac.authorization.k8s.io/v1
  519. kind: ClusterRole
  520. metadata:
  521.   labels:
  522.     k8s-app: metrics-server
  523.     rbac.authorization.k8s.io/aggregate-to-admin: "true"
  524.     rbac.authorization.k8s.io/aggregate-to-edit: "true"
  525.     rbac.authorization.k8s.io/aggregate-to-view: "true"
  526.   name: system:aggregated-metrics-reader
  527. rules:
  528. - apiGroups:
  529.   - metrics.k8s.io
  530.   resources:
  531.   - pods
  532.   - nodes
  533.   verbs:
  534.   - get
  535.   - list
  536.   - watch
  537. ---
  538. apiVersion: rbac.authorization.k8s.io/v1
  539. kind: ClusterRole
  540. metadata:
  541.   labels:
  542.     k8s-app: metrics-server
  543.   name: system:metrics-server
  544. rules:
  545. - apiGroups:
  546.   - ""
  547.   resources:
  548.   - pods
  549.   - nodes
  550.   - nodes/stats
  551.   - namespaces
  552.   - configmaps
  553.   verbs:
  554.   - get
  555.   - list
  556.   - watch
  557. ---
  558. apiVersion: rbac.authorization.k8s.io/v1
  559. kind: RoleBinding
  560. metadata:
  561.   labels:
  562.     k8s-app: metrics-server
  563.   name: metrics-server-auth-reader
  564.   namespace: kube-system
  565. roleRef:
  566.   apiGroup: rbac.authorization.k8s.io
  567.   kind: Role
  568.   name: extension-apiserver-authentication-reader
  569. subjects:
  570. - kind: ServiceAccount
  571.   name: metrics-server
  572.   namespace: kube-system
  573. ---
  574. apiVersion: rbac.authorization.k8s.io/v1
  575. kind: ClusterRoleBinding
  576. metadata:
  577.   labels:
  578.     k8s-app: metrics-server
  579.   name: metrics-server:system:auth-delegator
  580. roleRef:
  581.   apiGroup: rbac.authorization.k8s.io
  582.   kind: ClusterRole
  583.   name: system:auth-delegator
  584. subjects:
  585. - kind: ServiceAccount
  586.   name: metrics-server
  587.   namespace: kube-system
  588. ---
  589. apiVersion: rbac.authorization.k8s.io/v1
  590. kind: ClusterRoleBinding
  591. metadata:
  592.   labels:
  593.     k8s-app: metrics-server
  594.   name: system:metrics-server
  595. roleRef:
  596.   apiGroup: rbac.authorization.k8s.io
  597.   kind: ClusterRole
  598.   name: system:metrics-server
  599. subjects:
  600. - kind: ServiceAccount
  601.   name: metrics-server
  602.   namespace: kube-system
  603. ---
  604. apiVersion: v1
  605. kind: Service
  606. metadata:
  607.   labels:
  608.     k8s-app: metrics-server
  609.   name: metrics-server
  610.   namespace: kube-system
  611. spec:
  612.   ports:
  613.   - name: https
  614.     port: 443
  615.     protocol: TCP
  616.     targetPort: https
  617.   selector:
  618.     k8s-app: metrics-server
  619. ---
  620. apiVersion: apps/v1
  621. kind: Deployment
  622. metadata:
  623.   labels:
  624.     k8s-app: metrics-server
  625.   name: metrics-server
  626.   namespace: kube-system
  627. spec:
  628.   selector:
  629.     matchLabels:
  630.       k8s-app: metrics-server
  631.   strategy:
  632.     rollingUpdate:
  633.       maxUnavailable: 0
  634.   template:
  635.     metadata:
  636.       labels:
  637.         k8s-app: metrics-server
  638.     spec:
  639.       containers:
  640.       - args:
  641.         - --cert-dir=/tmp
  642.         - --secure-port=4443
  643.         - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  644.         - --kubelet-use-node-status-port
  645.         - --metric-resolution=15s
  646.         - --kubelet-insecure-tls
  647.         - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
  648.         - --requestheader-username-headers=X-Remote-User
  649.         - --requestheader-group-headers=X-Remote-Group
  650.         - --requestheader-extra-headers-prefix=X-Remote-Extra-
  651.         image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
  652.         imagePullPolicy: IfNotPresent
  653.         livenessProbe:
  654.           failureThreshold: 3
  655.           httpGet:
  656.             path: /livez
  657.             port: https
  658.             scheme: HTTPS
  659.           periodSeconds: 10
  660.         name: metrics-server
  661.         ports:
  662.         - containerPort: 4443
  663.           name: https
  664.           protocol: TCP
  665.         readinessProbe:
  666.           failureThreshold: 3
  667.           httpGet:
  668.             path: /readyz
  669.             port: https
  670.             scheme: HTTPS
  671.           initialDelaySeconds: 20
  672.           periodSeconds: 10
  673.         resources:
  674.           requests:
  675.             cpu: 100m
  676.             memory: 200Mi
  677.         securityContext:
  678.           readOnlyRootFilesystem: true
  679.           runAsNonRoot: true
  680.           runAsUser: 1000
  681.         volumeMounts:
  682.         - mountPath: /tmp
  683.           name: tmp-dir
  684.         - name: ca-ssl
  685.           mountPath: /etc/kubernetes/pki
  686.       nodeSelector:
  687.         kubernetes.io/os: linux
  688.       priorityClassName: system-cluster-critical
  689.       serviceAccountName: metrics-server
  690.       volumes:
  691.       - emptyDir: {}
  692.         name: tmp-dir
  693.       - name: ca-ssl
  694.         hostPath:
  695.           path: /etc/kubernetes/pki
  696. ---
  697. apiVersion: apiregistration.k8s.io/v1
  698. kind: APIService
  699. metadata:
  700.   labels:
  701.     k8s-app: metrics-server
  702.   name: v1beta1.metrics.k8s.io
  703. spec:
  704.   group: metrics.k8s.io
  705.   groupPriorityMinimum: 100
  706.   insecureSkipTLSVerify: true
  707.   service:
  708.     name: metrics-server
  709.     namespace: kube-system
  710.   version: v1beta1
  711.   versionPriority: 100
  712. EOF

3. Generate the certificates

# Download the certificate generation tools on master01
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson # already included in the bundled package
cp cfssl_*_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

cd pki
# Generate the etcd certificate and key (if you expect to scale out later, add a few spare IPs to the hostname list)
# If there is no IPv6, the IPv6 entries can be either removed or kept
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.8.61,192.168.8.62,192.168.8.63,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
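
If you want to confirm which SANs ended up in the etcd server certificate, openssl (assumed to be installed) provides a quick check:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"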

3.1.3 Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# The root CA is generated above; the apiserver certificate below includes extra IPs as reserved addresses for adding nodes later
# 10.96.0.1 is the first address of the service CIDR and has to be computed from it; 192.168.8.66 is the high-availability VIP
# If there is no IPv6, the IPv6 entries can be either removed or kept
cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,192.168.8.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.8.61,192.168.8.62,192.168.8.63,192.168.8.64,192.168.8.65,192.168.8.66,192.168.8.67,192.168.8.68,192.168.8.69,192.168.8.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100   \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
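
If you want to confirm the SANs (including 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR) that ended up in the apiserver certificate, the same openssl check as for etcd can be used:

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"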

3.2.3 Generate the apiserver aggregation (front-proxy) certificate

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# This prints a warning, which can be ignored

cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificate

Choose the high-availability scheme in section 5 (High-availability configuration):
If using haproxy + keepalived, use --server=https://192.168.8.66:8443
If using the nginx scheme, use --server=https://127.0.0.1:8443

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
# Choose the high-availability scheme in section 5 (High-availability configuration)
# If using haproxy + keepalived, use `--server=https://192.168.8.66:8443`
# If using the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Choose the high-availability scheme in section 5 (High-availability configuration)
# If using haproxy + keepalived, use `--server=https://192.168.8.66:8443`
# If using the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Choose the high-availability scheme in section 5 (High-availability configuration)
# If using haproxy + keepalived, use `--server=https://192.168.8.66:8443`
# If using the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the kube-proxy certificate

Choose the high-availability scheme in section 5 (High-availability configuration):
If using haproxy + keepalived, use --server=https://192.168.8.66:8443
If using the nginx scheme, use --server=https://127.0.0.1:8443

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

# Choose the high-availability scheme in section 5 (High-availability configuration)
# If using haproxy + keepalived, use `--server=https://192.168.8.66:8443`
# If using the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy  \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.6 Create the ServiceAccount key pair (secret)

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
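
These files are an ordinary RSA key pair used to sign and verify ServiceAccount tokens. If you want to double-check them, openssl can print their key sizes (a simple, optional check):

openssl rsa -in /etc/kubernetes/pki/sa.key -noout -text | head -n 1
openssl rsa -pubin -in /etc/kubernetes/pki/sa.pub -noout -text | head -n 1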

3.2.7 Copy the certificates to the other master nodes

# Create the directory on the other nodes
# mkdir /etc/kubernetes/pki/ -p
for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.8 Check the certificates

ls /etc/kubernetes/pki/
admin.csr          controller-manager.csr      kube-proxy.csr
admin-key.pem      controller-manager-key.pem  kube-proxy-key.pem
admin.pem          controller-manager.pem      kube-proxy.pem
apiserver.csr      front-proxy-ca.csr          sa.key
apiserver-key.pem  front-proxy-ca-key.pem      sa.pub
apiserver.pem      front-proxy-ca.pem          scheduler.csr
ca.csr             front-proxy-client.csr      scheduler-key.pem
ca-key.pem         front-proxy-client-key.pem  scheduler.pem
ca.pem             front-proxy-client.pem

# A total of 26 files is correct
ls /etc/kubernetes/pki/ |wc -l
26

4. Configure the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

# To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.8.61:2380'
listen-client-urls: 'https://192.168.8.61:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.8.61:2380'
advertise-client-urls: 'https://192.168.8.61:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 configuration

# To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.8.62:2380'
listen-client-urls: 'https://192.168.8.62:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.8.62:2380'
advertise-client-urls: 'https://192.168.8.62:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 configuration

# To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.8.63:2380'
listen-client-urls: 'https://192.168.8.63:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.8.63:2380'
advertise-client-urls: 'https://192.168.8.63:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the service units (on all master nodes)

4.2.1 Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

4.2.2 Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3查看etcd状态

# 如果要用IPv6那么把IPv4地址修改为IPv6即可
export ETCDCTL_API=3
etcdctl --endpoints="192.168.8.63:2379,192.168.8.62:2379,192.168.8.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.8.63:2379 | c0c8142615b9523f |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.8.62:2379 | de8396604d2c160d |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.8.61:2379 | 33c9d6df0037ab97 |   3.5.6 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#
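
In addition to endpoint status, a quick health probe confirms that every member actually answers. A minimal sketch, reusing the endpoints and certificate paths from the command above:

# Every member should report "is healthy"
etcdctl --endpoints="192.168.8.63:2379,192.168.8.62:2379,192.168.8.61:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table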

5.高可用配置(在Master服务器上操作)

Note: sections 5.1.1 and 5.1.2 are alternatives; do one of the two.

Whichever high-availability scheme you choose must match the --server address used when the kubeconfig files were generated in section 3.2 (生成k8s相关证书):

If you use the nginx scheme, the address is --server=https://127.0.0.1:8443
If you use the haproxy + keepalived scheme, the address is --server=https://192.168.8.66:8443

5.1 NGINX高可用方案 (推荐)

5.1.1自己手动编译

在所有节点执行

# 安装编译环境
yum install gcc -y

# 下载解压nginx二进制文件
wget http://nginx.org/download/nginx-1.22.1.tar.gz
tar xvf nginx-*.tar.gz
cd nginx-*

# 进行编译
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install

5.1.2使用我编译好的

# 使用我编译好的

cd kubernetes-v1.25.4/cby
# 拷贝我编译好的nginx
node='k8s-master02 k8s-master03 k8s-node01 k8s-node02'
for NODE in $node; do scp nginx.tar $NODE:/usr/local/; done

# 其他节点上执行
cd /usr/local/
tar xvf nginx.tar

5.1.3写入启动配置

在所有主机上执行

# 写入nginx配置文件
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        least_conn;
        hash \$remote_addr consistent;
        server 192.168.8.61:6443        max_fails=3 fail_timeout=30s;
        server 192.168.8.62:6443        max_fails=3 fail_timeout=30s;
        server 192.168.8.63:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

# 写入启动配置文件
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# 设置开机自启
systemctl enable --now  kube-nginx 
systemctl restart kube-nginx
systemctl status kube-nginx
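
As a quick sanity check of the local proxy, verify that it is listening on 127.0.0.1:8443. This is a hedged sketch; at this stage the kube-apiservers are not deployed yet, so the backend itself cannot answer until section 6 is done:

# The stream proxy should be listening on the loopback port
ss -lntp | grep 8443

# Once the kube-apiservers from section 6 are up, this should return data over TLS
# curl -k https://127.0.0.1:8443/healthz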

5.2 keepalived和haproxy 高可用方案 (不推荐)

5.2.1安装keepalived和haproxy服务

systemctl disable --now firewalld

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

yum -y install keepalived haproxy

5.2.2修改haproxy配置文件(两台配置文件一样)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.8.61:6443 check
 server  k8s-master02  192.168.8.62:6443 check
 server  k8s-master03  192.168.8.63:6443 check
EOF

5.2.3Master01配置keepalived master节点

#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    # 注意网卡名
    interface ens33 
    mcast_src_ip 192.168.8.61
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.8.66
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.4Master02配置keepalived backup节点

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # 注意网卡名
    interface ens33
    mcast_src_ip 192.168.8.62
    virtual_router_id 51
    priority 80
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.8.66
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.5Master03配置keepalived backup节点

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # 注意网卡名
    interface ens33
    mcast_src_ip 192.168.8.63
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.8.66
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.6健康检查脚本配置(两台lb主机)

cat >  /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# 给脚本授权
chmod +x /etc/keepalived/check_apiserver.sh

5.2.7启动服务

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.2.8测试高可用

# 能ping通

[root@k8s-node02 ~]# ping 192.168.8.66

# 能telnet访问

[root@k8s-node02 ~]# telnet 192.168.8.66 8443

# 关闭主节点,看vip是否漂移到备节点
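
A hedged way to exercise the failover, assuming the interface name ens33 and the VIP configured above:

# On the current MASTER (master01): simulate a failure
systemctl stop keepalived

# On the BACKUP nodes: the VIP should appear on the highest-priority survivor
ip addr show ens33 | grep 192.168.8.66

# Bring master01 back afterwards; with nopreempt set the VIP may stay where it moved
systemctl start keepalived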

6.k8s组件配置(区别于第4点)

所有k8s节点创建以下目录

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1.创建apiserver(所有master节点)

6.1.1master01节点配置

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.8.61 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.2master02节点配置

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.8.62 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3master03节点配置

cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --logtostderr=true  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.8.63 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.4启动apiserver(所有master节点)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# 注意查看状态是否启动正常
# systemctl status kube-apiserver
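
A hedged verification that each apiserver serves TLS on 6443 and that the front end from section 5 reaches it (the default RBAC bootstrap normally allows unauthenticated access to /healthz; a 401/403 still proves the port is up):

# Directly against the local apiserver
curl -k https://127.0.0.1:6443/healthz

# Through the nginx or haproxy/keepalived front end chosen in section 5
curl -k https://127.0.0.1:8443/healthz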

6.2.配置kube-controller-manager service

# 所有master节点配置,且配置相同
# 172.16.0.0/12为pod网段,按需求设置你自己的网段

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target [Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --logtostderr=true \\
      --bind-address=127.0.0.1 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --pod-eviction-timeout=2m0s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
      --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\
      --node-cidr-mask-size-ipv4=24 \\
      --node-cidr-mask-size-ipv6=120 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
      # --feature-gates=IPv6DualStack=true

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1启动kube-controller-manager,并查看状态

systemctl daemon-reload
systemctl enable --now kube-controller-manager
# systemctl  status kube-controller-manager

6.3.配置kube-scheduler service

6.3.1所有master节点配置,且配置相同

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --logtostderr=true \\
      --bind-address=127.0.0.1 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2启动并查看服务状态

systemctl daemon-reload
systemctl enable --now kube-scheduler
# systemctl status kube-scheduler
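
Both kube-controller-manager and kube-scheduler also expose a health endpoint on their secure ports (10257 and 10259 respectively); a hedged check on any master:

# Each should print "ok"
curl -sk https://127.0.0.1:10257/healthz && echo
curl -sk https://127.0.0.1:10259/healthz && echo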

7.TLS Bootstrapping配置

7.1在master01上配置

# 在《5.高可用配置》选择使用那种高可用方案
# 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443`
# 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443`

cd bootstrap

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://127.0.0.1:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# token的位置在bootstrap.secret.yaml,如果修改的话到这个文件修改

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
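
The --token above must match the token-id and token-secret defined in bootstrap.secret.yaml; a hedged check before applying the secret, assuming the file layout of this repository:

# Expect token-id c8ad9c and token-secret 2e4d610cf3e7426e, i.e. "<token-id>.<token-secret>" as used above
grep -E "token-id|token-secret" bootstrap.secret.yaml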

7.2查看集群状态,没问题的话继续后续操作

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}

# 切记执行,别忘记!!!
kubectl create -f bootstrap.secret.yaml

8.node节点配置

8.1.在master01上将证书复制到node节点

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2.kubelet配置

注意 : 8.2.1 和 8.2.2 需要和 上方 2.1 和 2.2 对应起来

8.2.1当使用docker作为Runtime(不推荐)

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/cri-dockerd.sock  \\
    --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF

8.2.2当使用Containerd作为Runtime (推荐)

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# 所有k8s节点配置kubelet service
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --node-labels=node.kubernetes.io/node=
    # --feature-gates=IPv6DualStack=true
    # --container-runtime=remote
    # --runtime-request-timeout=15m
    # --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target
EOF

8.2.3所有k8s节点创建kubelet的配置文件

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.4启动kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet
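
If TLS bootstrapping worked, the kubelet CSRs are approved automatically and the nodes register themselves; a quick check from master01:

# Bootstrap CSRs should show Approved,Issued
kubectl get csr

# Nodes should appear here shortly after the kubelets start
kubectl get node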

8.2.5查看集群

[root@k8s-master01 ~]# kubectl  get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready    <none>   18s   v1.25.4
k8s-master02   Ready    <none>   16s   v1.25.4
k8s-master03   Ready    <none>   16s   v1.25.4
k8s-node01     Ready    <none>   14s   v1.25.4
k8s-node02     Ready    <none>   14s   v1.25.4
[root@k8s-master01 ~]#

8.3.kube-proxy配置

8.3.1将kubeconfig发送至其他节点

for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig;  done

8.3.2所有k8s节点添加kube-proxy的service文件

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

8.3.3所有k8s节点添加kube-proxy的配置

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00:2222::/112
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4启动kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
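
kube-proxy was configured with mode "ipvs" above; a hedged verification (ipvsadm must be installed for the first command):

# IPVS virtual servers are created for each Service
ipvsadm -Ln | head

# kube-proxy reports the mode it actually runs in on its metrics port (127.0.0.1:10249 per the config above)
curl -s 127.0.0.1:10249/proxyMode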

9.安装网络插件

注意 9.1 和 9.2 二选其一即可,建议在此处创建好快照后在进行操作,后续出问题可以回滚

** centos7 要升级libseccomp 不然 无法安装网络插件**

# https://github.com/opencontainers/runc/releases
# 升级runc
wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
cp -p /usr/local/sbin/runc  /usr/local/bin/runc
cp -p /usr/local/sbin/runc  /usr/bin/runc

#下载高于2.4以上的包
yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

#查看当前版本
[root@k8s-master-1 ~]# rpm -qa | grep libseccomp
libseccomp-2.5.1-1.el8.x86_64

9.1安装Calico

9.1.1更改calico网段

# 本地没有公网 IPv6 使用 calico.yaml
kubectl apply -f calico.yaml

# 本地有公网 IPv6 使用 calico-ipv6.yaml
# kubectl apply -f calico-ipv6.yaml
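
The heading above mentions adjusting the calico network segment; a hedged sketch of what that usually means. The default value inside the manifest can differ between calico versions, so check before replacing; 172.16.0.0/12 is the pod CIDR used by kube-controller-manager in this guide:

# Locate the pod-CIDR setting in the manifest
grep -n "CALICO_IPV4POOL_CIDR" -A 1 calico.yaml

# Example replacement, assuming the manifest still carries a default such as 192.168.0.0/16
# sed -i "s#192.168.0.0/16#172.16.0.0/12#g" calico.yaml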

9.1.2查看容器状态

# calico 初始化会很慢 需要耐心等待一下,大约十分钟左右
[root@k8s-master01 ~]# kubectl  get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6747f75cdc-fbvvc   1/1     Running   0          61s
kube-system   calico-node-fs7hl                          1/1     Running   0          61s
kube-system   calico-node-jqz58                          1/1     Running   0          61s
kube-system   calico-node-khjlg                          1/1     Running   0          61s
kube-system   calico-node-wmf8q                          1/1     Running   0          61s
kube-system   calico-node-xc6gn                          1/1     Running   0          61s
kube-system   calico-typha-6cdc4b4fbc-57snb              1/1     Running   0          61s

9.2 安装cilium

9.2.1 安装helm

# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
# [root@k8s-master01 ~]# chmod 700 get_helm.sh
# [root@k8s-master01 ~]# ./get_helm.sh

wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
tar xvf helm-canary-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/

9.2.2 安装cilium

# 添加源
helm repo add cilium https://helm.cilium.io

# 默认参数安装
helm install cilium cilium/cilium --namespace kube-system

# 启用ipv6
# helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true

# 启用路由信息和监控插件
# helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 查看

[root@k8s-master01 ~]# kubectl  get pod -A | grep cil
kube-system   cilium-gmr6c                       1/1     Running       0             5m3s
kube-system   cilium-kzgdj                       1/1     Running       0             5m3s
kube-system   cilium-operator-69b677f97c-6pw4k   1/1     Running       0             5m3s
kube-system   cilium-operator-69b677f97c-xzzdk   1/1     Running       0             5m3s
kube-system   cilium-q2rnr                       1/1     Running       0             5m3s
kube-system   cilium-smx5v                       1/1     Running       0             5m3s
kube-system   cilium-tdjq4                       1/1     Running       0             5m3s
[root@k8s-master01 ~]#

9.2.4 下载专属监控面板

[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl  apply -f monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created
[root@k8s-master01 yaml]#

9.2.5 下载部署测试用例

[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml

[root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml

[root@k8s-master01 yaml]# kubectl  apply -f connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
[root@k8s-master01 yaml]#

9.2.6 查看pod

[root@k8s-master01 yaml]# kubectl  get pod -A
NAMESPACE           NAME                                                     READY   STATUS    RESTARTS      AGE
cilium-monitoring   grafana-59957b9549-6zzqh                                 1/1     Running   0             10m
cilium-monitoring   prometheus-7c8c9684bb-4v9cl                              1/1     Running   0             10m
default             chenby-75b5d7fbfb-7zjsr                                  1/1     Running   0             27h
default             chenby-75b5d7fbfb-hbvr8                                  1/1     Running   0             27h
default             chenby-75b5d7fbfb-ppbzg                                  1/1     Running   0             27h
default             echo-a-6799dff547-pnx6w                                  1/1     Running   0             10m
default             echo-b-fc47b659c-4bdg9                                   1/1     Running   0             10m
default             echo-b-host-67fcfd59b7-28r9s                             1/1     Running   0             10m
default             host-to-b-multi-node-clusterip-69c57975d6-z4j2z          1/1     Running   0             10m
default             host-to-b-multi-node-headless-865899f7bb-frrmc           1/1     Running   0             10m
default             pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x                    1/1     Running   0             10m
default             pod-to-a-denied-cnp-65cc5ff97b-2rzb8                     1/1     Running   0             10m
default             pod-to-a-dfc64f564-p7xcn                                 1/1     Running   0             10m
default             pod-to-b-intra-node-nodeport-677868746b-trk2l            1/1     Running   0             10m
default             pod-to-b-multi-node-clusterip-76bbbc677b-knfq2           1/1     Running   0             10m
default             pod-to-b-multi-node-headless-698c6579fd-mmvd7            1/1     Running   0             10m
default             pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz            1/1     Running   0             10m
default             pod-to-external-1111-8459965778-pjt9b                    1/1     Running   0             10m
default             pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q   1/1     Running   0             10m
kube-system         cilium-7rfj6                                             1/1     Running   0             56s
kube-system         cilium-d4cch                                             1/1     Running   0             56s
kube-system         cilium-h5x8r                                             1/1     Running   0             56s
kube-system         cilium-operator-5dbddb6dbf-flpl5                         1/1     Running   0             56s
kube-system         cilium-operator-5dbddb6dbf-gcznc                         1/1     Running   0             56s
kube-system         cilium-t2xlz                                             1/1     Running   0             56s
kube-system         cilium-z65z7                                             1/1     Running   0             56s
kube-system         coredns-665475b9f8-jkqn8                                 1/1     Running   1 (36h ago)   36h
kube-system         hubble-relay-59d8575-9pl9z                               1/1     Running   0             56s
kube-system         hubble-ui-64d4995d57-nsv9j                               2/2     Running   0             56s
kube-system         metrics-server-776f58c94b-c6zgs                          1/1     Running   1 (36h ago)   37h
[root@k8s-master01 yaml]#

9.2.7 修改为NodePort

[root@k8s-master01 yaml]# kubectl  edit svc  -n kube-system hubble-ui
service/hubble-ui edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring grafana
service/grafana edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring prometheus
service/prometheus edited
[root@k8s-master01 yaml]#
  type: NodePort

9.2.8 查看端口

[root@k8s-master01 yaml]# kubectl get svc -A | grep monit
cilium-monitoring   grafana                NodePort    10.100.250.17    <none>        3000:30707/TCP           15m
cilium-monitoring   prometheus             NodePort    10.100.131.243   <none>        9090:31155/TCP           15m
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl get svc -A | grep hubble
kube-system         hubble-metrics         ClusterIP   None             <none>        9965/TCP                 5m12s
kube-system         hubble-peer            ClusterIP   10.100.150.29    <none>        443/TCP                  5m12s
kube-system         hubble-relay           ClusterIP   10.109.251.34    <none>        80/TCP                   5m12s
kube-system         hubble-ui              NodePort    10.102.253.59    <none>        80:31219/TCP             5m12s
[root@k8s-master01 yaml]#

9.2.9 访问

http://192.168.8.61:30707
http://192.168.8.61:31155
http://192.168.8.61:31219

10.安装CoreDNS

10.1以下步骤只在master01操作

10.1.1修改文件

cd coredns/
cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10
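
The clusterIP here must equal the clusterDNS address configured for kubelet in section 8.2.3 (10.96.0.10). If your service CIDR differs, a hedged edit such as the following is needed before installing; "<your-cluster-dns-ip>" is a placeholder:

# Only needed when the service network differs from 10.96.0.0/12
sed -i "s#10.96.0.10#<your-cluster-dns-ip>#g" coredns.yaml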

10.1.2安装

kubectl  create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11.安装Metrics Server

11.1以下步骤只在master01操作

11.1.1安装Metrics-server

在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率

# 安装metrics server
cd metrics-server/
kubectl apply -f metrics-server.yaml

11.1.2稍等片刻查看状态

kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   154m         1%     1715Mi          21%       
k8s-master02   151m         1%     1274Mi          16%       
k8s-master03   523m         6%     1345Mi          17%       
k8s-node01     84m          1%     671Mi           8%        
k8s-node02     73m          0%     727Mi           9%        
k8s-node03     96m          1%     769Mi           9%        
k8s-node04     68m          0%     673Mi           8%        
k8s-node05     82m          1%     679Mi           8%
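
Pod-level metrics should work as well once the metrics APIService is available; for example:

# Per-pod CPU/memory across all namespaces
kubectl top pod -A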

12.集群验证

12.1部署pod资源

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# 查看
kubectl  get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2用pod解析默认命名空间中的kubernetes

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3测试跨命名空间是否可以解析

kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4每个节点都必须要能访问Kubernetes的kubernetes svc 443和kube-dns的service 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5Pod和Pod之前要能通

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.8.61     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.8.64     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.8.65     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.8.63     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.8.62     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.8.65     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.8.61     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.8.64     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# 进入busybox ping其他节点上的pod
kubectl exec -ti busybox -- sh
/ # ping 192.168.8.64
PING 192.168.8.64 (192.168.8.64): 56 data bytes
64 bytes from 192.168.8.64: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.8.64: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.8.64: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.8.64: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.8.64: seq=4 ttl=63 time=0.907 ms

# 可以连通证明这个pod是可以跨命名空间和跨主机通信的

12.6创建三个副本,可以看到3个副本分布在不同的节点上(用完可以删了)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# 删除nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13.安装dashboard

wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml

sed -i "s#kubernetesui/dashboard#registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard#g" dashboard.yaml
sed -i "s#kubernetesui/metrics-scraper#registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper#g" dashboard.yaml

cat dashboard.yaml | grep image
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.6.1
          imagePullPolicy: Always
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.8

kubectl  apply -f dashboard.yaml
kubectl  apply -f dashboard-user.yaml

13.1更改dashboard的svc为NodePort,如果已是请忽略

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

13.2查看端口号

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3创建token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IllnWjFheFpNeDgxZ2pxdTlTYzBEWFJvdVoyWFZBTFZWME44dTgwam1DY2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcwMzE0Mzk5LCJpYXQiOjE2NzAzMTA3OTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZjcyMTQ5NzctZDBlNi00NjExLWFlYzctNDgzMWE5MzVjN2M4In19LCJuYmYiOjE2NzAzMTA3OTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.JU28wtYdQ2TkAUHJx0tz5pBH5Z3bHPoSNWC_z8bKjmU5IztvckUPiv7_VaNwJC3da39rSOfvIoMN7cvq0MNi4qLKm5k8S2szODh9m2FPWeN81aQpneVB8CcwL0PZVL3hvUy7VqnM_Q3L7PhDfsrS3EK3bo1blHJRmSLuQcAIEICU8WNX7R2zxvOlNyXorxkwk68jDUvuAO1-AXfTOTpXWS1NDmm_zceKAIscTeT_nH1qlEXsPLfofKqDnA8XmtQIGr89VfIBBDhh1eox_hC7qNkLvPKY2oIuSBXG5mttcziqZBijtbU7rwirtgiIVVWSTdLOZmeXaDWpyZAnNzBAVg

13.4登录dashboard

https://192.168.8.61:30034/

14.ingress安装

14.1执行部署

cd ingress/

kubectl  apply -f deploy.yaml 
kubectl  apply -f backend.yaml

# 等创建完成后再执行:
kubectl  apply -f ingress-demo-app.yaml

kubectl  get ingress
NAME               CLASS   HOSTS                            ADDRESS        PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.8.62   80      7s

14.2过滤查看ingress端口

[root@hello ~/yaml]# kubectl  get svc -A | grep ingress
ingress-nginx          ingress-nginx-controller             NodePort    10.104.231.36    <none>        80:32636/TCP,443:30579/TCP   104s
ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.101.85.88     <none>        443/TCP                      105s
[root@hello ~/yaml]#
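
A hedged end-to-end test of the ingress: send a request to any node on the controller NodePort with one of the configured hosts. The port 32636 comes from the output above and will differ on your cluster:

# Should return the demo application's response for hello.chenby.cn
curl -H "Host: hello.chenby.cn" http://192.168.8.61:32636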

15.IPv6测试

#部署应用

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: docker.io/library/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80
EOF

#查看端口
[root@k8s-master01 ~]# kubectl  get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
chenby         NodePort    fd00::a29c       <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

#使用内网访问
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://192.168.8.61:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]#

#使用公网访问
[root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes
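
To confirm the Service and Pods really received both address families, a hedged check:

# The service should list both an IPv6 and an IPv4 ClusterIP
kubectl get svc chenby -o yaml | grep -A 3 clusterIPs

# Each pod should carry addresses from both families (see the IPs: block)
kubectl describe pod -l app=chenby | grep -A 3 "IPs:"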

16.安装命令行自动补全功能

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

关于

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN、GitHub、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客

全网可搜《小陈运维》

文章主要发布于微信公众号:《Linux运维交流社区》
