Binary installation of Kubernetes (k8s) v1.24.1 with IPv4/IPv6 dual stack --- Ubuntu edition

Keeping this Kubernetes project open source takes effort; please give it a star on GitHub, thank you.

Introduction

Binary installation of Kubernetes.

Documentation for new releases will be updated as soon as possible after they ship; updated content is published on GitHub.

This guide uses Ubuntu as the base system; see GitHub for the other variants.

Documentation and installation packages have been generated for 1.21.13, 1.22.10, 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0 and 1.24.1.

I use IPv6 so the cluster can be reached over the public Internet, so I configure static IPv6 addresses (a dual-stack netplan example is given in section 1.2).

If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.

Not configuring IPv6 does not affect the remaining steps; the cluster still supports IPv6, which leaves room for later expansion.

https://github.com/cby-chen/Kubernetes/

Manual installation project: https://github.com/cby-chen/Kubernetes

Script-based installation project: https://github.com/cby-chen/Binary_installation_of_Kubernetes

Kubernetes 1.24 introduces substantial changes; for details see: https://kubernetes.io/zh/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/

1. Environment

Hostname   IP address     Role          Software
Master01   192.168.1.11   master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived
Master02   192.168.1.12   master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived
Master03   192.168.1.13   master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived
Node01     192.168.1.14   node          kubelet, kube-proxy, nfs-client
Node02     192.168.1.15   node          kubelet, kube-proxy, nfs-client
VIP        192.168.1.19

Software                                                                        Version
kernel                                                                          5.4.0-86
Ubuntu                                                                          20.04 or later
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy    v1.24.1
etcd                                                                            v3.5.4
containerd                                                                      v1.6.4
cfssl                                                                           v1.6.1
cni                                                                             v1.1.1
crictl                                                                          v1.24.2
haproxy                                                                         v1.8.27
keepalived                                                                      v2.1.5

Network CIDRs

Physical hosts: 192.168.1.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12

It is recommended to deploy the k8s cluster and the etcd cluster on separate machines.

The installation packages have been bundled here: https://github.com/cby-chen/Kubernetes/releases/download/v1.24.1/kubernetes-v1.24.1.tar

1.1. Basic k8s system environment configuration

1.2. Configure IP addresses

  1. root@hello:~# vim /etc/netplan/00-installer-config.yaml 
  2. root@hello:~# 
  3. root@hello:~# cat /etc/netplan/00-installer-config.yaml
  4. # This is the network config written by 'subiquity'
  5. network:
  6.   ethernets:
  7.     ens18:
  8.        addresses:
  9.          - 192.168.1.11/24
  10.        gateway4: 192.168.1.1
  11.        nameservers:
  12.            addresses: [8.8.8.8]
  13.   version: 2
  14. root@hello:~# 
  15. root@hello:~# netplan apply 
  16. root@hello:~#
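
If you want the dual-stack setup mentioned in the introduction, a minimal netplan sketch for master01 is shown below. The IPv6 address reuses the 2408:8207:78ca:9fa1::/64 prefix that appears in the certificate SANs later in this guide; the IPv6 gateway and DNS server here are placeholders, so substitute your own values:

  1. root@hello:~# cat /etc/netplan/00-installer-config.yaml
  2. # Dual-stack example -- adjust addresses, gateways and DNS to your environment
  3. network:
  4.   ethernets:
  5.     ens18:
  6.       addresses:
  7.         - 192.168.1.11/24
  8.         - 2408:8207:78ca:9fa1::10/64
  9.       gateway4: 192.168.1.1
  10.       gateway6: 2408:8207:78ca:9fa1::1
  11.       nameservers:
  12.         addresses: [8.8.8.8, 2001:4860:4860::8888]
  13.   version: 2
  14. root@hello:~# netplan apply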

1.3. Set the hostnames

  1. hostnamectl set-hostname k8s-master01
  2. hostnamectl set-hostname k8s-master02
  3. hostnamectl set-hostname k8s-master03
  4. hostnamectl set-hostname k8s-node01
  5. hostnamectl set-hostname k8s-node02

1.4. Configure the apt sources

  1. sudo sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
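
After switching mirrors, refresh the package index so the new source takes effect (a routine follow-up, not part of the original listing):

  1. sudo apt update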

1.5. Install some required tools

  1. apt install  wget jq psmisc vim net-tools nfs-kernel-server  telnet lvm2 git tar curl -y

1.6. Download the required tools (optional)

  1. 1.Download the Kubernetes 1.24.x binary package
  2. GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
  3. wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz
  4. 2.Download the etcd binary package
  5. GitHub release page: https://github.com/etcd-io/etcd/releases
  6. wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
  7. 3.docker-ce binary package
  8. Binary download page: https://download.docker.com/linux/static/stable/x86_64/
  9. Download a 20.10.x version here
  10. wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz
  11. 4.Download the containerd binary package
  12. GitHub release page: https://github.com/containerd/containerd/releases
  13. For containerd, download the bundle that includes the CNI plugins.
  14. wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
  15. 5.Download the cfssl binaries
  16. GitHub release page: https://github.com/cloudflare/cfssl/releases
  17. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
  18. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
  19. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
  20. 6.Download the CNI plugins
  21. GitHub release page: https://github.com/containernetworking/plugins/releases
  22. wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  23. 7.Download the crictl client binary
  24. GitHub release page: https://github.com/kubernetes-sigs/cri-tools/releases
  25. wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

1.7. Disable the firewall

  1. systemctl disable --now ufw

1.8. Disable the swap partition

  1. sed -ri 's/.*swap.*/#&/' /etc/fstab
  2. swapoff -a && sysctl -w vm.swappiness=0
  3. cat /etc/fstab
  4. # /dev/mapper/centos-swap swap                    swap    defaults        0 0
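
To confirm that swap is really disabled, a quick optional check:

  1. swapon --show
  2. free -h | grep -i swap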

1.9. Time synchronization (lb nodes excluded)

  1. # Server side
  2. apt install chrony -y
  3. cat > /etc/chrony/chrony.conf << EOF 
  4. pool ntp.aliyun.com iburst
  5. driftfile /var/lib/chrony/drift
  6. makestep 1.0 3
  7. rtcsync
  8. allow 192.168.1.0/24
  9. local stratum 10
  10. keyfile /etc/chrony.keys
  11. leapsectz right/UTC
  12. logdir /var/log/chrony
  13. EOF
  14. systemctl restart chronyd
  15. systemctl enable chronyd
  16. # Client side
  17. apt install chrony -y
  18. vim /etc/chrony/chrony.conf
  19. cat /etc/chrony/chrony.conf | grep -v "^#" | grep -v "^$"
  20. pool 192.168.1.11 iburst
  21. driftfile /var/lib/chrony/drift
  22. makestep 1.0 3
  23. rtcsync
  24. keyfile /etc/chrony.keys
  25. leapsectz right/UTC
  26. logdir /var/log/chrony
  27. systemctl restart chronyd ; systemctl enable chronyd
  28. # One-line client setup
  29. apt install chrony -y ; sed -i "s#^pool .*#pool 192.168.1.11 iburst#g" /etc/chrony/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd
  30. # Verify from the client
  31. chronyc sources -v

1.10. Configure ulimit

  1. ulimit -SHn 65535
  2. cat >> /etc/security/limits.conf <<EOF
  3. * soft nofile 655360
  4. * hard nofile 655360
  5. * soft nproc 655350
  6. * hard nproc 655350
  7. * soft memlock unlimited
  8. * hard memlock unlimited
  9. EOF

1.11. Configure password-less SSH login

  1. apt install -y sshpass
  2. ssh-keygen -f /root/.ssh/id_rsa -P ''
  3. export IP="192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14 192.168.1.15"
  4. export SSHPASS=123123
  5. for HOST in $IP;do
  6.      sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
  7. done

1.12. Install ipvsadm (lb nodes excluded)

  1. apt install ipvsadm ipset sysstat conntrack -y
  2. cat >> /etc/modules-load.d/ipvs.conf <<EOF 
  3. ip_vs
  4. ip_vs_rr
  5. ip_vs_wrr
  6. ip_vs_sh
  7. nf_conntrack
  8. ip_tables
  9. ip_set
  10. xt_set
  11. ipt_set
  12. ipt_rpfilter
  13. ipt_REJECT
  14. ipip
  15. EOF
  16. systemctl restart systemd-modules-load.service
  17. lsmod | grep -e ip_vs -e nf_conntrack
  18. ip_vs_sh               16384  0
  19. ip_vs_wrr              16384  0
  20. ip_vs_rr               16384  0
  21. ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  22. nf_conntrack          139264  1 ip_vs
  23. nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
  24. nf_defrag_ipv4         16384  1 nf_conntrack
  25. libcrc32c              16384  4 nf_conntrack,btrfs,raid456,ip_vs

1.13. Tune kernel parameters (lb nodes excluded)

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.ipv4.ip_forward = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. fs.may_detach_mounts = 1
  5. vm.overcommit_memory=1
  6. vm.panic_on_oom=0
  7. fs.inotify.max_user_watches=89100
  8. fs.file-max=52706963
  9. fs.nr_open=52706963
  10. net.netfilter.nf_conntrack_max=2310720
  11. net.ipv4.tcp_keepalive_time = 600
  12. net.ipv4.tcp_keepalive_probes = 3
  13. net.ipv4.tcp_keepalive_intvl =15
  14. net.ipv4.tcp_max_tw_buckets = 36000
  15. net.ipv4.tcp_tw_reuse = 1
  16. net.ipv4.tcp_max_orphans = 327680
  17. net.ipv4.tcp_orphan_retries = 3
  18. net.ipv4.tcp_syncookies = 1
  19. net.ipv4.tcp_max_syn_backlog = 16384
  20. net.ipv4.ip_conntrack_max = 65536
  21. net.ipv4.tcp_max_syn_backlog = 16384
  22. net.ipv4.tcp_timestamps = 0
  23. net.core.somaxconn = 16384
  24. net.ipv6.conf.all.disable_ipv6 = 0
  25. net.ipv6.conf.default.disable_ipv6 = 0
  26. net.ipv6.conf.lo.disable_ipv6 = 0
  27. net.ipv6.conf.all.forwarding = 0
  28. EOF
  29. sysctl --system

1.14. Configure local hosts resolution on all nodes

  1. cat > /etc/hosts <<EOF
  2. 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.1.11 k8s-master01
  5. 192.168.1.12 k8s-master02
  6. 192.168.1.13 k8s-master03
  7. 192.168.1.14 k8s-node01
  8. 192.168.1.15 k8s-node02
  9. 192.168.1.19 lb-vip
  10. EOF
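
An optional sanity check that every node resolves and reaches the others by name:

  1. for HOST in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ping -c 1 -W 1 $HOST; done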

2. Installing the basic k8s components

2.1. Install Containerd as the runtime on all k8s nodes

  1. wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  2. # Create the directories needed by the CNI plugins
  3. mkdir -p /etc/cni/net.d /opt/cni/bin
  4. # Extract the CNI binaries
  5. tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
  6. wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
  7. # Extract it
  8. tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz
  9. # Create the systemd service file
  10. cat > /etc/systemd/system/containerd.service <<EOF
  11. [Unit]
  12. Description=containerd container runtime
  13. Documentation=https://containerd.io
  14. After=network.target local-fs.target
  15. [Service]
  16. ExecStartPre=-/sbin/modprobe overlay
  17. ExecStart=/usr/local/bin/containerd
  18. Type=notify
  19. Delegate=yes
  20. KillMode=process
  21. Restart=always
  22. RestartSec=5
  23. LimitNPROC=infinity
  24. LimitCORE=infinity
  25. LimitNOFILE=infinity
  26. TasksMax=infinity
  27. OOMScoreAdjust=-999
  28. [Install]
  29. WantedBy=multi-user.target
  30. EOF

2.1.1 Configure the kernel modules required by Containerd

  1. cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
  2. overlay
  3. br_netfilter
  4. EOF

2.1.2 Load the modules

  1. systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by Containerd

  1. cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
  2. net.bridge.bridge-nf-call-iptables  = 1
  3. net.ipv4.ip_forward                 = 1
  4. net.bridge.bridge-nf-call-ip6tables = 1
  5. EOF
  6. # Apply the kernel parameters
  7. sysctl --system

2.1.4 Create the Containerd configuration file

  1. mkdir -p /etc/containerd
  2. containerd config default | tee /etc/containerd/config.toml
  3. # Modify the Containerd configuration file
  4. sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
  5. cat /etc/containerd/config.toml | grep SystemdCgroup
  6. sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
  7. cat /etc/containerd/config.toml | grep sandbox_image
  8. # Find containerd.runtimes.runc.options and make sure SystemdCgroup = true is set under it
  9. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  10.               SystemdCgroup = true
  11.     [plugins."io.containerd.grpc.v1.cri".cni]
  12. # Change the default sandbox_image to an address that matches your registry and version
  13.     sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5 Start containerd and enable it at boot

  1. systemctl daemon-reload
  2. systemctl enable --now containerd

2.1.6 Point the crictl client at the container runtime

  1. wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
  2. # Extract it
  3. tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/
  4. # Generate the configuration file
  5. cat > /etc/crictl.yaml <<EOF
  6. runtime-endpoint: unix:///run/containerd/containerd.sock
  7. image-endpoint: unix:///run/containerd/containerd.sock
  8. timeout: 10
  9. debug: false
  10. EOF
  11. # Test
  12. systemctl restart  containerd
  13. crictl info

2.2. Download and install k8s and etcd (master01 only)

2.2.1 Extract the k8s installation package

  1. # Download the packages
  2. wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz
  3. wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
  4. # Extract the k8s files
  5. cd cby
  6. tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
  7. # Extract the etcd files
  8. tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}
  9. # List /usr/local/bin
  10. ls /usr/local/bin/
  11. etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

2.2.2 Check the versions

  1. [root@k8s-master01 ~]# kubelet --version
  2. Kubernetes v1.24.1
  3. [root@k8s-master01 ~]# etcdctl version
  4. etcdctl version: 3.5.4
  5. API version: 3.5
  6. [root@k8s-master01 ~]#

2.2.3 Copy the components to the other k8s nodes

  1. Master='k8s-master02 k8s-master03'
  2. Work='k8s-node01 k8s-node02'
  3. for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
  4. for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
  5. mkdir -p /opt/cni/bin

2.3 Create the certificate-related files

  1. mkdir pki
  2. cd pki
  3. cat > admin-csr.json << EOF 
  4. {
  5.   "CN": "admin",
  6.   "key": {
  7.     "algo": "rsa",
  8.     "size": 2048
  9.   },
  10.   "names": [
  11.     {
  12.       "C": "CN",
  13.       "ST": "Beijing",
  14.       "L": "Beijing",
  15.       "O": "system:masters",
  16.       "OU": "Kubernetes-manual"
  17.     }
  18.   ]
  19. }
  20. EOF
  21. cat > ca-config.json << EOF 
  22. {
  23.   "signing": {
  24.     "default": {
  25.       "expiry": "876000h"
  26.     },
  27.     "profiles": {
  28.       "kubernetes": {
  29.         "usages": [
  30.             "signing",
  31.             "key encipherment",
  32.             "server auth",
  33.             "client auth"
  34.         ],
  35.         "expiry": "876000h"
  36.       }
  37.     }
  38.   }
  39. }
  40. EOF
  41. cat > etcd-ca-csr.json  << EOF 
  42. {
  43.   "CN": "etcd",
  44.   "key": {
  45.     "algo": "rsa",
  46.     "size": 2048
  47.   },
  48.   "names": [
  49.     {
  50.       "C": "CN",
  51.       "ST": "Beijing",
  52.       "L": "Beijing",
  53.       "O": "etcd",
  54.       "OU": "Etcd Security"
  55.     }
  56.   ],
  57.   "ca": {
  58.     "expiry": "876000h"
  59.   }
  60. }
  61. EOF
  62. cat > front-proxy-ca-csr.json  << EOF 
  63. {
  64.   "CN": "kubernetes",
  65.   "key": {
  66.      "algo": "rsa",
  67.      "size": 2048
  68.   },
  69.   "ca": {
  70.     "expiry": "876000h"
  71.   }
  72. }
  73. EOF
  74. cat > kubelet-csr.json  << EOF 
  75. {
  76.   "CN": "system:node:\$NODE",
  77.   "key": {
  78.     "algo": "rsa",
  79.     "size": 2048
  80.   },
  81.   "names": [
  82.     {
  83.       "C": "CN",
  84.       "L": "Beijing",
  85.       "ST": "Beijing",
  86.       "O": "system:nodes",
  87.       "OU": "Kubernetes-manual"
  88.     }
  89.   ]
  90. }
  91. EOF
  92. cat > manager-csr.json << EOF 
  93. {
  94.   "CN": "system:kube-controller-manager",
  95.   "key": {
  96.     "algo": "rsa",
  97.     "size": 2048
  98.   },
  99.   "names": [
  100.     {
  101.       "C": "CN",
  102.       "ST": "Beijing",
  103.       "L": "Beijing",
  104.       "O": "system:kube-controller-manager",
  105.       "OU": "Kubernetes-manual"
  106.     }
  107.   ]
  108. }
  109. EOF
  110. cat > apiserver-csr.json << EOF 
  111. {
  112.   "CN": "kube-apiserver",
  113.   "key": {
  114.     "algo": "rsa",
  115.     "size": 2048
  116.   },
  117.   "names": [
  118.     {
  119.       "C": "CN",
  120.       "ST": "Beijing",
  121.       "L": "Beijing",
  122.       "O": "Kubernetes",
  123.       "OU": "Kubernetes-manual"
  124.     }
  125.   ]
  126. }
  127. EOF
  128. cat > ca-csr.json   << EOF 
  129. {
  130.   "CN": "kubernetes",
  131.   "key": {
  132.     "algo": "rsa",
  133.     "size": 2048
  134.   },
  135.   "names": [
  136.     {
  137.       "C": "CN",
  138.       "ST": "Beijing",
  139.       "L": "Beijing",
  140.       "O": "Kubernetes",
  141.       "OU": "Kubernetes-manual"
  142.     }
  143.   ],
  144.   "ca": {
  145.     "expiry": "876000h"
  146.   }
  147. }
  148. EOF
  149. cat > etcd-csr.json << EOF 
  150. {
  151.   "CN": "etcd",
  152.   "key": {
  153.     "algo": "rsa",
  154.     "size": 2048
  155.   },
  156.   "names": [
  157.     {
  158.       "C": "CN",
  159.       "ST": "Beijing",
  160.       "L": "Beijing",
  161.       "O": "etcd",
  162.       "OU": "Etcd Security"
  163.     }
  164.   ]
  165. }
  166. EOF
  167. cat > front-proxy-client-csr.json  << EOF 
  168. {
  169.   "CN": "front-proxy-client",
  170.   "key": {
  171.      "algo": "rsa",
  172.      "size": 2048
  173.   }
  174. }
  175. EOF
  176. cat > kube-proxy-csr.json  << EOF 
  177. {
  178.   "CN": "system:kube-proxy",
  179.   "key": {
  180.     "algo": "rsa",
  181.     "size": 2048
  182.   },
  183.   "names": [
  184.     {
  185.       "C": "CN",
  186.       "ST": "Beijing",
  187.       "L": "Beijing",
  188.       "O": "system:kube-proxy",
  189.       "OU": "Kubernetes-manual"
  190.     }
  191.   ]
  192. }
  193. EOF
  194. cat > scheduler-csr.json << EOF 
  195. {
  196.   "CN": "system:kube-scheduler",
  197.   "key": {
  198.     "algo": "rsa",
  199.     "size": 2048
  200.   },
  201.   "names": [
  202.     {
  203.       "C": "CN",
  204.       "ST": "Beijing",
  205.       "L": "Beijing",
  206.       "O": "system:kube-scheduler",
  207.       "OU": "Kubernetes-manual"
  208.     }
  209.   ]
  210. }
  211. EOF
  212. cd ..
  213. mkdir bootstrap
  214. cd bootstrap
  215. cat > bootstrap.secret.yaml << EOF 
  216. apiVersion: v1
  217. kind: Secret
  218. metadata:
  219.   name: bootstrap-token-c8ad9c
  220.   namespace: kube-system
  221. type: bootstrap.kubernetes.io/token
  222. stringData:
  223.   description: "The default bootstrap token generated by 'kubelet '."
  224.   token-id: c8ad9c
  225.   token-secret: 2e4d610cf3e7426e
  226.   usage-bootstrap-authentication: "true"
  227.   usage-bootstrap-signing: "true"
  228.   auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
  229. ---
  230. apiVersion: rbac.authorization.k8s.io/v1
  231. kind: ClusterRoleBinding
  232. metadata:
  233.   name: kubelet-bootstrap
  234. roleRef:
  235.   apiGroup: rbac.authorization.k8s.io
  236.   kind: ClusterRole
  237.   name: system:node-bootstrapper
  238. subjects:
  239. - apiGroup: rbac.authorization.k8s.io
  240.   kind: Group
  241.   name: system:bootstrappers:default-node-token
  242. ---
  243. apiVersion: rbac.authorization.k8s.io/v1
  244. kind: ClusterRoleBinding
  245. metadata:
  246.   name: node-autoapprove-bootstrap
  247. roleRef:
  248.   apiGroup: rbac.authorization.k8s.io
  249.   kind: ClusterRole
  250.   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  251. subjects:
  252. - apiGroup: rbac.authorization.k8s.io
  253.   kind: Group
  254.   name: system:bootstrappers:default-node-token
  255. ---
  256. apiVersion: rbac.authorization.k8s.io/v1
  257. kind: ClusterRoleBinding
  258. metadata:
  259.   name: node-autoapprove-certificate-rotation
  260. roleRef:
  261.   apiGroup: rbac.authorization.k8s.io
  262.   kind: ClusterRole
  263.   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  264. subjects:
  265. - apiGroup: rbac.authorization.k8s.io
  266.   kind: Group
  267.   name: system:nodes
  268. ---
  269. apiVersion: rbac.authorization.k8s.io/v1
  270. kind: ClusterRole
  271. metadata:
  272.   annotations:
  273.     rbac.authorization.kubernetes.io/autoupdate: "true"
  274.   labels:
  275.     kubernetes.io/bootstrapping: rbac-defaults
  276.   name: system:kube-apiserver-to-kubelet
  277. rules:
  278.   - apiGroups:
  279.       - ""
  280.     resources:
  281.       - nodes/proxy
  282.       - nodes/stats
  283.       - nodes/log
  284.       - nodes/spec
  285.       - nodes/metrics
  286.     verbs:
  287.       - "*"
  288. ---
  289. apiVersion: rbac.authorization.k8s.io/v1
  290. kind: ClusterRoleBinding
  291. metadata:
  292.   name: system:kube-apiserver
  293.   namespace: ""
  294. roleRef:
  295.   apiGroup: rbac.authorization.k8s.io
  296.   kind: ClusterRole
  297.   name: system:kube-apiserver-to-kubelet
  298. subjects:
  299.   - apiGroup: rbac.authorization.k8s.io
  300.     kind: User
  301.     name: kube-apiserver
  302. EOF
  303. cd ..
  304. mkdir coredns
  305. cd coredns
  306. cat > coredns.yaml << EOF 
  307. apiVersion: v1
  308. kind: ServiceAccount
  309. metadata:
  310.   name: coredns
  311.   namespace: kube-system
  312. ---
  313. apiVersion: rbac.authorization.k8s.io/v1
  314. kind: ClusterRole
  315. metadata:
  316.   labels:
  317.     kubernetes.io/bootstrapping: rbac-defaults
  318.   name: system:coredns
  319. rules:
  320.   - apiGroups:
  321.     - ""
  322.     resources:
  323.     - endpoints
  324.     - services
  325.     - pods
  326.     - namespaces
  327.     verbs:
  328.     - list
  329.     - watch
  330.   - apiGroups:
  331.     - discovery.k8s.io
  332.     resources:
  333.     - endpointslices
  334.     verbs:
  335.     - list
  336.     - watch
  337. ---
  338. apiVersion: rbac.authorization.k8s.io/v1
  339. kind: ClusterRoleBinding
  340. metadata:
  341.   annotations:
  342.     rbac.authorization.kubernetes.io/autoupdate: "true"
  343.   labels:
  344.     kubernetes.io/bootstrapping: rbac-defaults
  345.   name: system:coredns
  346. roleRef:
  347.   apiGroup: rbac.authorization.k8s.io
  348.   kind: ClusterRole
  349.   name: system:coredns
  350. subjects:
  351. - kind: ServiceAccount
  352.   name: coredns
  353.   namespace: kube-system
  354. ---
  355. apiVersion: v1
  356. kind: ConfigMap
  357. metadata:
  358.   name: coredns
  359.   namespace: kube-system
  360. data:
  361.   Corefile: |
  362.     .:53 {
  363.         errors
  364.         health {
  365.           lameduck 5s
  366.         }
  367.         ready
  368.         kubernetes cluster.local in-addr.arpa ip6.arpa {
  369.           fallthrough in-addr.arpa ip6.arpa
  370.         }
  371.         prometheus :9153
  372.         forward . /etc/resolv.conf {
  373.           max_concurrent 1000
  374.         }
  375.         cache 30
  376.         loop
  377.         reload
  378.         loadbalance
  379.     }
  380. ---
  381. apiVersion: apps/v1
  382. kind: Deployment
  383. metadata:
  384.   name: coredns
  385.   namespace: kube-system
  386.   labels:
  387.     k8s-app: kube-dns
  388.     kubernetes.io/name: "CoreDNS"
  389. spec:
  390.   # replicas: not specified here:
  391.   # 1. Default is 1.
  392.   # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  393.   strategy:
  394.     type: RollingUpdate
  395.     rollingUpdate:
  396.       maxUnavailable: 1
  397.   selector:
  398.     matchLabels:
  399.       k8s-app: kube-dns
  400.   template:
  401.     metadata:
  402.       labels:
  403.         k8s-app: kube-dns
  404.     spec:
  405.       priorityClassName: system-cluster-critical
  406.       serviceAccountName: coredns
  407.       tolerations:
  408.         - key: "CriticalAddonsOnly"
  409.           operator: "Exists"
  410.       nodeSelector:
  411.         kubernetes.io/os: linux
  412.       affinity:
  413.          podAntiAffinity:
  414.            preferredDuringSchedulingIgnoredDuringExecution:
  415.            - weight: 100
  416.              podAffinityTerm:
  417.                labelSelector:
  418.                  matchExpressions:
  419.                    - key: k8s-app
  420.                      operator: In
  421.                      values: ["kube-dns"]
  422.                topologyKey: kubernetes.io/hostname
  423.       containers:
  424.       - name: coredns
  425.         image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 
  426.         imagePullPolicy: IfNotPresent
  427.         resources:
  428.           limits:
  429.             memory: 170Mi
  430.           requests:
  431.             cpu: 100m
  432.             memory: 70Mi
  433.         args: [ "-conf", "/etc/coredns/Corefile" ]
  434.         volumeMounts:
  435.         - name: config-volume
  436.           mountPath: /etc/coredns
  437.           readOnly: true
  438.         ports:
  439.         - containerPort: 53
  440.           name: dns
  441.           protocol: UDP
  442.         - containerPort: 53
  443.           name: dns-tcp
  444.           protocol: TCP
  445.         - containerPort: 9153
  446.           name: metrics
  447.           protocol: TCP
  448.         securityContext:
  449.           allowPrivilegeEscalation: false
  450.           capabilities:
  451.             add:
  452.             - NET_BIND_SERVICE
  453.             drop:
  454.             - all
  455.           readOnlyRootFilesystem: true
  456.         livenessProbe:
  457.           httpGet:
  458.             path: /health
  459.             port: 8080
  460.             scheme: HTTP
  461.           initialDelaySeconds: 60
  462.           timeoutSeconds: 5
  463.           successThreshold: 1
  464.           failureThreshold: 5
  465.         readinessProbe:
  466.           httpGet:
  467.             path: /ready
  468.             port: 8181
  469.             scheme: HTTP
  470.       dnsPolicy: Default
  471.       volumes:
  472.         - name: config-volume
  473.           configMap:
  474.             name: coredns
  475.             items:
  476.             - key: Corefile
  477.               path: Corefile
  478. ---
  479. apiVersion: v1
  480. kind: Service
  481. metadata:
  482.   name: kube-dns
  483.   namespace: kube-system
  484.   annotations:
  485.     prometheus.io/port: "9153"
  486.     prometheus.io/scrape: "true"
  487.   labels:
  488.     k8s-app: kube-dns
  489.     kubernetes.io/cluster-service: "true"
  490.     kubernetes.io/name: "CoreDNS"
  491. spec:
  492.   selector:
  493.     k8s-app: kube-dns
  494.   clusterIP: 10.96.0.10 
  495.   ports:
  496.   - name: dns
  497.     port: 53
  498.     protocol: UDP
  499.   - name: dns-tcp
  500.     port: 53
  501.     protocol: TCP
  502.   - name: metrics
  503.     port: 9153
  504.     protocol: TCP
  505. EOF
  506. cd ..
  507. mkdir metrics-server
  508. cd metrics-server
  509. cat > metrics-server.yaml << EOF 
  510. apiVersion: v1
  511. kind: ServiceAccount
  512. metadata:
  513.   labels:
  514.     k8s-app: metrics-server
  515.   name: metrics-server
  516.   namespace: kube-system
  517. ---
  518. apiVersion: rbac.authorization.k8s.io/v1
  519. kind: ClusterRole
  520. metadata:
  521.   labels:
  522.     k8s-app: metrics-server
  523.     rbac.authorization.k8s.io/aggregate-to-admin: "true"
  524.     rbac.authorization.k8s.io/aggregate-to-edit: "true"
  525.     rbac.authorization.k8s.io/aggregate-to-view: "true"
  526.   name: system:aggregated-metrics-reader
  527. rules:
  528. - apiGroups:
  529.   - metrics.k8s.io
  530.   resources:
  531.   - pods
  532.   - nodes
  533.   verbs:
  534.   - get
  535.   - list
  536.   - watch
  537. ---
  538. apiVersion: rbac.authorization.k8s.io/v1
  539. kind: ClusterRole
  540. metadata:
  541.   labels:
  542.     k8s-app: metrics-server
  543.   name: system:metrics-server
  544. rules:
  545. - apiGroups:
  546.   - ""
  547.   resources:
  548.   - pods
  549.   - nodes
  550.   - nodes/stats
  551.   - namespaces
  552.   - configmaps
  553.   verbs:
  554.   - get
  555.   - list
  556.   - watch
  557. ---
  558. apiVersion: rbac.authorization.k8s.io/v1
  559. kind: RoleBinding
  560. metadata:
  561.   labels:
  562.     k8s-app: metrics-server
  563.   name: metrics-server-auth-reader
  564.   namespace: kube-system
  565. roleRef:
  566.   apiGroup: rbac.authorization.k8s.io
  567.   kind: Role
  568.   name: extension-apiserver-authentication-reader
  569. subjects:
  570. - kind: ServiceAccount
  571.   name: metrics-server
  572.   namespace: kube-system
  573. ---
  574. apiVersion: rbac.authorization.k8s.io/v1
  575. kind: ClusterRoleBinding
  576. metadata:
  577.   labels:
  578.     k8s-app: metrics-server
  579.   name: metrics-server:system:auth-delegator
  580. roleRef:
  581.   apiGroup: rbac.authorization.k8s.io
  582.   kind: ClusterRole
  583.   name: system:auth-delegator
  584. subjects:
  585. - kind: ServiceAccount
  586.   name: metrics-server
  587.   namespace: kube-system
  588. ---
  589. apiVersion: rbac.authorization.k8s.io/v1
  590. kind: ClusterRoleBinding
  591. metadata:
  592.   labels:
  593.     k8s-app: metrics-server
  594.   name: system:metrics-server
  595. roleRef:
  596.   apiGroup: rbac.authorization.k8s.io
  597.   kind: ClusterRole
  598.   name: system:metrics-server
  599. subjects:
  600. - kind: ServiceAccount
  601.   name: metrics-server
  602.   namespace: kube-system
  603. ---
  604. apiVersion: v1
  605. kind: Service
  606. metadata:
  607.   labels:
  608.     k8s-app: metrics-server
  609.   name: metrics-server
  610.   namespace: kube-system
  611. spec:
  612.   ports:
  613.   - name: https
  614.     port: 443
  615.     protocol: TCP
  616.     targetPort: https
  617.   selector:
  618.     k8s-app: metrics-server
  619. ---
  620. apiVersion: apps/v1
  621. kind: Deployment
  622. metadata:
  623.   labels:
  624.     k8s-app: metrics-server
  625.   name: metrics-server
  626.   namespace: kube-system
  627. spec:
  628.   selector:
  629.     matchLabels:
  630.       k8s-app: metrics-server
  631.   strategy:
  632.     rollingUpdate:
  633.       maxUnavailable: 0
  634.   template:
  635.     metadata:
  636.       labels:
  637.         k8s-app: metrics-server
  638.     spec:
  639.       containers:
  640.       - args:
  641.         - --cert-dir=/tmp
  642.         - --secure-port=4443
  643.         - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  644.         - --kubelet-use-node-status-port
  645.         - --metric-resolution=15s
  646.         - --kubelet-insecure-tls
  647.         - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
  648.         - --requestheader-username-headers=X-Remote-User
  649.         - --requestheader-group-headers=X-Remote-Group
  650.         - --requestheader-extra-headers-prefix=X-Remote-Extra-
  651.         image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
  652.         imagePullPolicy: IfNotPresent
  653.         livenessProbe:
  654.           failureThreshold: 3
  655.           httpGet:
  656.             path: /livez
  657.             port: https
  658.             scheme: HTTPS
  659.           periodSeconds: 10
  660.         name: metrics-server
  661.         ports:
  662.         - containerPort: 4443
  663.           name: https
  664.           protocol: TCP
  665.         readinessProbe:
  666.           failureThreshold: 3
  667.           httpGet:
  668.             path: /readyz
  669.             port: https
  670.             scheme: HTTPS
  671.           initialDelaySeconds: 20
  672.           periodSeconds: 10
  673.         resources:
  674.           requests:
  675.             cpu: 100m
  676.             memory: 200Mi
  677.         securityContext:
  678.           readOnlyRootFilesystem: true
  679.           runAsNonRoot: true
  680.           runAsUser: 1000
  681.         volumeMounts:
  682.         - mountPath: /tmp
  683.           name: tmp-dir
  684.         - name: ca-ssl
  685.           mountPath: /etc/kubernetes/pki
  686.       nodeSelector:
  687.         kubernetes.io/os: linux
  688.       priorityClassName: system-cluster-critical
  689.       serviceAccountName: metrics-server
  690.       volumes:
  691.       - emptyDir: {}
  692.         name: tmp-dir
  693.       - name: ca-ssl
  694.         hostPath:
  695.           path: /etc/kubernetes/pki
  696. ---
  697. apiVersion: apiregistration.k8s.io/v1
  698. kind: APIService
  699. metadata:
  700.   labels:
  701.     k8s-app: metrics-server
  702.   name: v1beta1.metrics.k8s.io
  703. spec:
  704.   group: metrics.k8s.io
  705.   groupPriorityMinimum: 100
  706.   insecureSkipTLSVerify: true
  707.   service:
  708.     name: metrics-server
  709.     namespace: kube-system
  710.   version: v1beta1
  711.   versionPriority: 100
  712. EOF
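
The bootstrap token defined in bootstrap.secret.yaml above is the pair c8ad9c (token-id) and 2e4d610cf3e7426e (token-secret), which a kubelet bootstrap kubeconfig uses in the combined form token-id.token-secret. If you prefer not to reuse the sample values, a sketch for generating your own pair and substituting it (assuming the directory layout created above) is:

  1. # token-id must be 6 lowercase alphanumeric characters, token-secret 16
  2. TOKEN_ID=$(head -c 30 /dev/urandom | md5sum | head -c 6)
  3. TOKEN_SECRET=$(head -c 30 /dev/urandom | md5sum | head -c 16)
  4. echo "${TOKEN_ID}.${TOKEN_SECRET}"
  5. # Substitute the sample values; the Secret name must stay bootstrap-token-<token-id>
  6. sed -i "s/c8ad9c/${TOKEN_ID}/g; s/2e4d610cf3e7426e/${TOKEN_SECRET}/g" ../bootstrap/bootstrap.secret.yaml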

3. Generate the certificates

  1. # Download the certificate generation tools on the master01 node
  2. wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
  3. wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
  4. chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.1.1 Create the certificate directory on all master nodes

  1. mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on the master01 node

  1. cd pki
  2. # Generate the etcd certificate and key (if you expect to scale up later, you can list a few extra reserved IPs here)
  3. cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
  4. cfssl gencert \
  5.    -ca=/etc/etcd/ssl/etcd-ca.pem \
  6.    -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  7.    -config=ca-config.json \
  8.    -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.11,192.168.1.12,192.168.1.13,2408:8207:78ca:9fa1::10,2408:8207:78ca:9fa1::20,2408:8207:78ca:9fa1::30 \
  9.    -profile=kubernetes \
  10.    etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
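
An optional check that the reserved IPs really ended up in the certificate's SAN list:

  1. openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A 1 "Subject Alternative Name"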

3.1.3 Copy the certificates to the other nodes

  1. Master='k8s-master02 k8s-master03'
  2. for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

  1. mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on the master01 node

  1. # Generate a root certificate; some extra IPs are included as reserved addresses for adding nodes later
  2. cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
  3. # 10.96.0.1 is the first address of the service CIDR and needs to be worked out (see the sketch after this listing); 192.168.1.19 is the high-availability VIP
  4. cfssl gencert   \
  5. -ca=/etc/kubernetes/pki/ca.pem   \
  6. -ca-key=/etc/kubernetes/pki/ca-key.pem   \
  7. -config=ca-config.json   \
  8. -hostname=10.96.0.1,192.168.1.19,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.11,192.168.1.12,192.168.1.13,192.168.1.14,192.168.1.15,192.168.1.16,192.168.1.17,192.168.1.18,2408:8207:78ca:9fa1::10,2408:8207:78ca:9fa1::20,2408:8207:78ca:9fa1::30,2408:8207:78ca:9fa1::40,2408:8207:78ca:9fa1::50,2408:8207:78ca:9fa1::60,2408:8207:78ca:9fa1::70,2408:8207:78ca:9fa1::80,2408:8207:78ca:9fa1::90,2408:8207:78ca:9fa1::100   \
  9. -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
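
The comment above says that 10.96.0.1 "needs to be worked out": it is simply the first usable host address of the service CIDR. If you want to double-check a CIDR's boundaries, a small sketch using the Ubuntu ipcalc package (an extra tool, not part of the original steps) is:

  1. apt install -y ipcalc
  2. ipcalc 10.96.0.0/12 | grep -E "Network|HostMin"
  3. # HostMin is 10.96.0.1 -- the ClusterIP that the built-in kubernetes service will receive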

3.2.3 Generate the apiserver aggregation certificates

  1. cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
  2. # A warning is printed; it can be ignored
  3. cfssl gencert  \
  4. -ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
  5. -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
  6. -config=ca-config.json   \
  7. -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificates

  1. cfssl gencert \
  2.    -ca=/etc/kubernetes/pki/ca.pem \
  3.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  4.    -config=ca-config.json \
  5.    -profile=kubernetes \
  6.    manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
  7. # Set the cluster entry
  8. kubectl config set-cluster kubernetes \
  9.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  10.      --embed-certs=true \
  11.      --server=https://192.168.1.19:8443 \
  12.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  13. # Set a context entry
  14. kubectl config set-context system:kube-controller-manager@kubernetes \
  15.     --cluster=kubernetes \
  16.     --user=system:kube-controller-manager \
  17.     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  18. # Set the user entry
  19. kubectl config set-credentials system:kube-controller-manager \
  20.      --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  21.      --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  22.      --embed-certs=true \
  23.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  24. # Set the default context
  25. kubectl config use-context system:kube-controller-manager@kubernetes \
  26.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  27. cfssl gencert \
  28.    -ca=/etc/kubernetes/pki/ca.pem \
  29.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  30.    -config=ca-config.json \
  31.    -profile=kubernetes \
  32.    scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
  33. kubectl config set-cluster kubernetes \
  34.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  35.      --embed-certs=true \
  36.      --server=https://192.168.1.19:8443 \
  37.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  38. kubectl config set-credentials system:kube-scheduler \
  39.      --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  40.      --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  41.      --embed-certs=true \
  42.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  43. kubectl config set-context system:kube-scheduler@kubernetes \
  44.      --cluster=kubernetes \
  45.      --user=system:kube-scheduler \
  46.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  47. kubectl config use-context system:kube-scheduler@kubernetes \
  48.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  49. cfssl gencert \
  50.    -ca=/etc/kubernetes/pki/ca.pem \
  51.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  52.    -config=ca-config.json \
  53.    -profile=kubernetes \
  54.    admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
  55. kubectl config set-cluster kubernetes     \
  56.   --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  57.   --embed-certs=true     \
  58.   --server=https://192.168.1.19:8443     \
  59.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  60. kubectl config set-credentials kubernetes-admin  \
  61.   --client-certificate=/etc/kubernetes/pki/admin.pem     \
  62.   --client-key=/etc/kubernetes/pki/admin-key.pem     \
  63.   --embed-certs=true     \
  64.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  65. kubectl config set-context kubernetes-admin@kubernetes    \
  66.   --cluster=kubernetes     \
  67.   --user=kubernetes-admin     \
  68.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  69. kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the kube-proxy certificates

  1. cfssl gencert \
  2.    -ca=/etc/kubernetes/pki/ca.pem \
  3.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  4.    -config=ca-config.json \
  5.    -profile=kubernetes \
  6.    kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
  7. kubectl config set-cluster kubernetes     \
  8.   --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  9.   --embed-certs=true     \
  10.   --server=https://192.168.1.19:8443     \
  11.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  12. kubectl config set-credentials kube-proxy  \
  13.   --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  14.   --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  15.   --embed-certs=true     \
  16.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  17. kubectl config set-context kube-proxy@kubernetes    \
  18.   --cluster=kubernetes     \
  19.   --user=kube-proxy     \
  20.   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  21. kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.6 Create the ServiceAccount key (secret)

  1. openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
  2. openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.7 Send the certificates to the other master nodes

  1. # Create the directory on the other master nodes first
  2. # mkdir  /etc/kubernetes/pki/ -p
  3. for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.8 Inspect the certificates

  1. ls /etc/kubernetes/pki/
  2. admin.csr          ca.csr                      front-proxy-ca.csr          kube-proxy.csr      scheduler-key.pem
  3. admin-key.pem      ca-key.pem                  front-proxy-ca-key.pem      kube-proxy-key.pem  scheduler.pem
  4. admin.pem          ca.pem                      front-proxy-ca.pem          kube-proxy.pem
  5. apiserver.csr      controller-manager.csr      front-proxy-client.csr      sa.key
  6. apiserver-key.pem  controller-manager-key.pem  front-proxy-client-key.pem  sa.pub
  7. apiserver.pem      controller-manager.pem      front-proxy-client.pem      scheduler.csr
  8. # A total of 26 files is correct
  9. ls /etc/kubernetes/pki/ |wc -l
  10. 26

4. k8s system component configuration

4.1. etcd configuration

4.1.1 master01 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master01'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.11:2380'
  11. listen-client-urls: 'https://192.168.1.11:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.11:2380'
  16. advertise-client-urls: 'https://192.168.1.11:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.1.2 master02 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master02'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.12:2380'
  11. listen-client-urls: 'https://192.168.1.12:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.12:2380'
  16. advertise-client-urls: 'https://192.168.1.12:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.1.3 master03 configuration

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. cat > /etc/etcd/etcd.config.yml << EOF 
  3. name: 'k8s-master03'
  4. data-dir: /var/lib/etcd
  5. wal-dir: /var/lib/etcd/wal
  6. snapshot-count: 5000
  7. heartbeat-interval: 100
  8. election-timeout: 1000
  9. quota-backend-bytes: 0
  10. listen-peer-urls: 'https://192.168.1.13:2380'
  11. listen-client-urls: 'https://192.168.1.13:2379,http://127.0.0.1:2379'
  12. max-snapshots: 3
  13. max-wals: 5
  14. cors:
  15. initial-advertise-peer-urls: 'https://192.168.1.13:2380'
  16. advertise-client-urls: 'https://192.168.1.13:2379'
  17. discovery:
  18. discovery-fallback: 'proxy'
  19. discovery-proxy:
  20. discovery-srv:
  21. initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'
  22. initial-cluster-token: 'etcd-k8s-cluster'
  23. initial-cluster-state: 'new'
  24. strict-reconfig-check: false
  25. enable-v2: true
  26. enable-pprof: true
  27. proxy: 'off'
  28. proxy-failure-wait: 5000
  29. proxy-refresh-interval: 30000
  30. proxy-dial-timeout: 1000
  31. proxy-write-timeout: 5000
  32. proxy-read-timeout: 0
  33. client-transport-security:
  34.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  35.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  36.   client-cert-auth: true
  37.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  38.   auto-tls: true
  39. peer-transport-security:
  40.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  41.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  42.   peer-client-cert-auth: true
  43.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  44.   auto-tls: true
  45. debug: false
  46. log-package-levels:
  47. log-outputs: [default]
  48. force-new-cluster: false
  49. EOF

4.2. Create the service units (run on all master nodes)

4.2.1 Create etcd.service and start it

  1. cat > /usr/lib/systemd/system/etcd.service << EOF
  2. [Unit]
  3. Description=Etcd Service
  4. Documentation=https://coreos.com/etcd/docs/latest/
  5. After=network.target
  6. [Service]
  7. Type=notify
  8. ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
  9. Restart=on-failure
  10. RestartSec=10
  11. LimitNOFILE=65536
  12. [Install]
  13. WantedBy=multi-user.target
  14. Alias=etcd3.service
  15. EOF

4.2.2 Create the etcd certificate directory

  1. mkdir /etc/kubernetes/pki/etcd
  2. ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
  3. systemctl daemon-reload
  4. systemctl enable --now etcd

4.2.3 Check the etcd status

  1. # To use IPv6, simply replace the IPv4 addresses with IPv6 addresses
  2. export ETCDCTL_API=3
  3. etcdctl --endpoints="192.168.1.13:2379,192.168.1.12:2379,192.168.1.11:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
  4. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  5. |    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
  6. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  7. | 192.168.1.13:2379 | c0c8142615b9523f |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  8. | 192.168.1.12:2379 | de8396604d2c160d |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  9. | 192.168.1.11:2379 | 33c9d6df0037ab97 |   3.5.4 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
  10. +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  11. [root@k8s-master01 pki]#
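
Besides endpoint status, the same flags can be reused for an overall health check:

  1. etcdctl --endpoints="192.168.1.13:2379,192.168.1.12:2379,192.168.1.11:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table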

5. High-availability configuration

5.1 Perform the following on the three master servers

5.1.1 Install the keepalived and haproxy services

  1. apt -y install keepalived haproxy

5.1.2 Modify the haproxy configuration file (identical on all nodes)

  1. # cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
  2. cat >/etc/haproxy/haproxy.cfg<<"EOF"
  3. global
  4.  maxconn 2000
  5.  ulimit-n 16384
  6.  log 127.0.0.1 local0 err
  7.  stats timeout 30s
  8. defaults
  9.  log global
  10.  mode http
  11.  option httplog
  12.  timeout connect 5000
  13.  timeout client 50000
  14.  timeout server 50000
  15.  timeout http-request 15s
  16.  timeout http-keep-alive 15s
  17. frontend monitor-in
  18.  bind *:33305
  19.  mode http
  20.  option httplog
  21.  monitor-uri /monitor
  22. frontend k8s-master
  23.  bind 0.0.0.0:8443
  24.  bind 127.0.0.1:8443
  25.  mode tcp
  26.  option tcplog
  27.  tcp-request inspect-delay 5s
  28.  default_backend k8s-master
  29. backend k8s-master
  30.  mode tcp
  31.  option tcplog
  32.  option tcp-check
  33.  balance roundrobin
  34.  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  35.  server  k8s-master01  192.168.1.11:6443 check
  36.  server  k8s-master02  192.168.1.12:6443 check
  37.  server  k8s-master03  192.168.1.13:6443 check
  38. EOF
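
Before starting haproxy, the configuration file can be validated in check mode (an optional step):

  1. haproxy -c -f /etc/haproxy/haproxy.cfg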

5.1.3 Configure keepalived on M1 (MASTER node)

  1. #cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state MASTER
  16.     interface ens18
  17.     mcast_src_ip 192.168.1.11
  18.     virtual_router_id 51
  19.     priority 100
  20.     nopreempt
  21.     advert_int 2
  22.     authentication {
  23.         auth_type PASS
  24.         auth_pass K8SHA_KA_AUTH
  25.     }
  26.     virtual_ipaddress {
  27.         192.168.1.19
  28.     }
  29.     track_script {
  30.       chk_apiserver 
  31. } }
  32. EOF

5.1.4 Configure keepalived on M2 (BACKUP node)

  1. # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state BACKUP
  16.     interface ens18
  17.     mcast_src_ip 192.168.1.12
  18.     virtual_router_id 51
  19.     priority 50
  20.     nopreempt
  21.     advert_int 2
  22.     authentication {
  23.         auth_type PASS
  24.         auth_pass K8SHA_KA_AUTH
  25.     }
  26.     virtual_ipaddress {
  27.         192.168.1.19
  28.     }
  29.     track_script {
  30.       chk_apiserver 
  31. } }
  32. EOF

5.1.5 Configure keepalived on M3 (BACKUP node)

  1. # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state BACKUP
  16.     interface ens18
  17.     mcast_src_ip 192.168.1.13
  18.     virtual_router_id 51
  19.     priority 50
  20.     nopreempt
  21.     advert_int 2
  22.     authentication {
  23.         auth_type PASS
  24.         auth_pass K8SHA_KA_AUTH
  25.     }
  26.     virtual_ipaddress {
  27.         192.168.1.19
  28.     }
  29.     track_script {
  30.       chk_apiserver 
  31. } }
  32. EOF

5.1.6 Health-check script configuration (all master hosts)

  1. cat >  /etc/keepalived/check_apiserver.sh << EOF
  2. #!/bin/bash
  3. err=0
  4. for k in \$(seq 1 3)
  5. do
  6.     check_code=\$(pgrep haproxy)
  7.     if [[ \$check_code == "" ]]; then
  8.         err=\$(expr \$err + 1)
  9.         sleep 1
  10.         continue
  11.     else
  12.         err=0
  13.         break
  14.     fi
  15. done
  16. if [[ \$err != "0" ]]; then
  17.     echo "systemctl stop keepalived"
  18.     /usr/bin/systemctl stop keepalived
  19.     exit 1
  20. else
  21.     exit 0
  22. fi
  23. EOF
  24. # Make the script executable
  25. chmod +x /etc/keepalived/check_apiserver.sh

5.1.7 Start the services

  1. systemctl daemon-reload
  2. systemctl enable --now haproxy
  3. systemctl enable --now keepalived

5.1.8 Test the high availability

  1. # The VIP should answer ping
  2. [root@k8s-node02 ~]# ping 192.168.1.19
  3. # The VIP should be reachable with telnet
  4. [root@k8s-node02 ~]# telnet 192.168.1.19 8443
  5. # Shut down the active node and check that the VIP fails over to a backup node, as sketched below
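
A minimal failover drill, assuming master01 currently holds the VIP:

  1. # On master01: confirm the VIP is present, then stop keepalived to force a failover
  2. ip a | grep 192.168.1.19
  3. systemctl stop keepalived
  4. # On master02/master03: the VIP should show up within a few seconds
  5. ip a | grep 192.168.1.19
  6. # Bring master01 back afterwards
  7. systemctl start keepalived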

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes

  1. mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --advertise-address=192.168.1.11 \
  14.       --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
  15.       --feature-gates=IPv6DualStack=true  \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41.       # --token-auth-file=/etc/kubernetes/token.csv
  42. Restart=on-failure
  43. RestartSec=10s
  44. LimitNOFILE=65535
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF

6.1.2 master02 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --advertise-address=192.168.1.12 \
  14.       --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
  15.             --feature-gates=IPv6DualStack=true \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41.       # --token-auth-file=/etc/kubernetes/token.csv
  42. Restart=on-failure
  43. RestartSec=10s
  44. LimitNOFILE=65535
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF

6.1.3 master03 node configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --advertise-address=192.168.1.13 \
  14.       --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
  15.       --feature-gates=IPv6DualStack=true  \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41. Restart=on-failure
  42. RestartSec=10s
  43. LimitNOFILE=65535
  44. [Install]
  45. WantedBy=multi-user.target
  46. EOF

6.1.4 Start kube-apiserver (all master nodes)

  1. systemctl daemon-reload && systemctl enable --now kube-apiserver
  2. # Check that the service started up correctly
  3. systemctl status kube-apiserver
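
If kube-apiserver fails to come up, the quickest checks are whether the unit is active, whether anything is listening on the secure port configured above, and what the last log lines say. A minimal troubleshooting sketch (run on the affected master; 6443 matches --secure-port):

  1. # confirm the unit is active
  2. systemctl is-active kube-apiserver
  3. # confirm something is listening on the secure port
  4. ss -lntp | grep 6443
  5. # show the most recent log lines
  6. journalctl -u kube-apiserver -n 30 --no-pager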

6.2. Configure the kube-controller-manager service

  1. # Configure on all master nodes; the configuration is identical
  2. # 172.16.0.0/12 is the pod CIDR; change it to your own range if needed
  3. cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
  4. [Unit]
  5. Description=Kubernetes Controller Manager
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=network.target
  8. [Service]
  9. ExecStart=/usr/local/bin/kube-controller-manager \
  10.       --v=2 \
  11.       --logtostderr=true \
  12.       --bind-address=127.0.0.1 \
  13.       --root-ca-file=/etc/kubernetes/pki/ca.pem \
  14.       --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  15.       --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  16.       --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  17.       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  18.       --leader-elect=true \
  19.       --use-service-account-credentials=true \
  20.       --node-monitor-grace-period=40s \
  21.       --node-monitor-period=5s \
  22.       --pod-eviction-timeout=2m0s \
  23.       --controllers=*,bootstrapsigner,tokencleaner \
  24.       --allocate-node-cidrs=true \
  25.       --feature-gates=IPv6DualStack=true \
  26.       --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
  27.       --cluster-cidr=172.16.0.0/12,fc00::/48 \
  28.       --node-cidr-mask-size-ipv4=24 \
  29.       --node-cidr-mask-size-ipv6=64 \
  30.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 
  31. Restart=always
  32. RestartSec=10s
  33. [Install]
  34. WantedBy=multi-user.target
  35. EOF

6.2.1 Start kube-controller-manager and check its status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-controller-manager
  3. systemctl  status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 Configure all master nodes (identical configuration)

  1. cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
  2. [Unit]
  3. Description=Kubernetes Scheduler
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-scheduler \
  8.       --v=2 \
  9.       --logtostderr=true \
  10.       --bind-address=127.0.0.1 \
  11.       --leader-elect=true \
  12.       --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  13. Restart=always
  14. RestartSec=10s
  15. [Install]
  16. WantedBy=multi-user.target
  17. EOF

6.3.2 Start the service and check its status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-scheduler
  3. systemctl status kube-scheduler
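
Optionally, both control-plane components expose a /healthz endpoint on their secure ports (10257 for kube-controller-manager, 10259 for kube-scheduler by default), and these paths are normally reachable without credentials, so a quick local check looks like this:

  1. curl -sk https://127.0.0.1:10257/healthz
  2. curl -sk https://127.0.0.1:10259/healthz
  3. # both commands should print: ok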

7. TLS Bootstrapping configuration

7.1 Configure on master01

  1. cd bootstrap
  2. kubectl config set-cluster kubernetes     \
  3. --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  4. --embed-certs=true     --server=https://192.168.1.19:8443     \
  5. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  6. kubectl config set-credentials tls-bootstrap-token-user     \
  7. --token=c8ad9c.2e4d610cf3e7426e \
  8. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  9. kubectl config set-context tls-bootstrap-token-user@kubernetes     \
  10. --cluster=kubernetes     \
  11. --user=tls-bootstrap-token-user     \
  12. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  13. kubectl config use-context tls-bootstrap-token-user@kubernetes     \
  14. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  15. # The token is defined in bootstrap.secret.yaml; edit that file if you need to change it
  16. mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; if everything looks good, continue with the remaining steps

  1. # Restart haproxy on the three HA nodes
  2. systemctl  stop haproxy
  3. systemctl  start haproxy
  4. kubectl get cs
  5. Warning: v1 ComponentStatus is deprecated in v1.19+
  6. NAME                 STATUS    MESSAGE                         ERROR
  7. scheduler            Healthy   ok                              
  8. controller-manager   Healthy   ok                              
  9. etcd-0               Healthy   {"health":"true","reason":""}   
  10. etcd-2               Healthy   {"health":"true","reason":""}   
  11. etcd-1               Healthy   {"health":"true","reason":""} 
  12. # Be sure to run this step, do not forget it!!!
  13. kubectl create -f bootstrap.secret.yaml
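
Optionally verify that the bootstrap token really exists. A bootstrap token such as c8ad9c.2e4d610cf3e7426e is stored as a Secret named bootstrap-token-<token-id> in kube-system, so the commands below assume the default token shipped in bootstrap.secret.yaml:

  1. kubectl get secret -n kube-system bootstrap-token-c8ad9c
  2. # the token-id/token-secret pair must match the --token used for bootstrap-kubelet.kubeconfig
  3. kubectl get secret -n kube-system bootstrap-token-c8ad9c -o jsonpath='{.data.token-id}' | base64 -d; echo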

8. Node configuration

8.1. Copy the certificates from master01 to the other nodes

  1. cd /etc/kubernetes/
  2. for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

  1. mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
  2. # Configure the kubelet service on all k8s nodes
  3. cat > /usr/lib/systemd/system/kubelet.service << EOF
  4. [Unit]
  5. Description=Kubernetes Kubelet
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=containerd.service
  8. Requires=containerd.service
  9. [Service]
  10. ExecStart=/usr/local/bin/kubelet \
  11.     --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \
  12.     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  13.     --config=/etc/kubernetes/kubelet-conf.yml \
  14.     --container-runtime=remote  \
  15.     --runtime-request-timeout=15m  \
  16.     --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
  17.     --cgroup-driver=systemd \
  18.     --node-labels=node.kubernetes.io/node='' \
  19.     --feature-gates=IPv6DualStack=true
  20. [Install]
  21. WantedBy=multi-user.target
  22. EOF

8.2.2 Create the kubelet configuration file on all k8s nodes

  1. cat > /etc/kubernetes/kubelet-conf.yml <<EOF
  2. apiVersion: kubelet.config.k8s.io/v1beta1
  3. kind: KubeletConfiguration
  4. address: 0.0.0.0
  5. port: 10250
  6. readOnlyPort: 10255
  7. authentication:
  8.   anonymous:
  9.     enabled: false
  10.   webhook:
  11.     cacheTTL: 2m0s
  12.     enabled: true
  13.   x509:
  14.     clientCAFile: /etc/kubernetes/pki/ca.pem
  15. authorization:
  16.   mode: Webhook
  17.   webhook:
  18.     cacheAuthorizedTTL: 5m0s
  19.     cacheUnauthorizedTTL: 30s
  20. cgroupDriver: systemd
  21. cgroupsPerQOS: true
  22. clusterDNS:
  23. - 10.96.0.10
  24. clusterDomain: cluster.local
  25. containerLogMaxFiles: 5
  26. containerLogMaxSize: 10Mi
  27. contentType: application/vnd.kubernetes.protobuf
  28. cpuCFSQuota: true
  29. cpuManagerPolicy: none
  30. cpuManagerReconcilePeriod: 10s
  31. enableControllerAttachDetach: true
  32. enableDebuggingHandlers: true
  33. enforceNodeAllocatable:
  34. - pods
  35. eventBurst: 10
  36. eventRecordQPS: 5
  37. evictionHard:
  38.   imagefs.available: 15%
  39.   memory.available: 100Mi
  40.   nodefs.available: 10%
  41.   nodefs.inodesFree: 5%
  42. evictionPressureTransitionPeriod: 5m0s
  43. failSwapOn: true
  44. fileCheckFrequency: 20s
  45. hairpinMode: promiscuous-bridge
  46. healthzBindAddress: 127.0.0.1
  47. healthzPort: 10248
  48. httpCheckFrequency: 20s
  49. imageGCHighThresholdPercent: 85
  50. imageGCLowThresholdPercent: 80
  51. imageMinimumGCAge: 2m0s
  52. iptablesDropBit: 15
  53. iptablesMasqueradeBit: 14
  54. kubeAPIBurst: 10
  55. kubeAPIQPS: 5
  56. makeIPTablesUtilChains: true
  57. maxOpenFiles: 1000000
  58. maxPods: 110
  59. nodeStatusUpdateFrequency: 10s
  60. oomScoreAdj: -999
  61. podPidsLimit: -1
  62. registryBurst: 10
  63. registryPullQPS: 5
  64. resolvConf: /etc/resolv.conf   # on Ubuntu with systemd-resolved, /run/systemd/resolve/resolv.conf avoids the 127.0.0.53 DNS loop
  65. rotateCertificates: true
  66. runtimeRequestTimeout: 2m0s
  67. serializeImagePulls: true
  68. staticPodPath: /etc/kubernetes/manifests
  69. streamingConnectionIdleTimeout: 4h0m0s
  70. syncFrequency: 1m0s
  71. volumeStatsAggPeriod: 1m0s
  72. EOF

8.2.3 Start kubelet

  1. systemctl daemon-reload
  2. systemctl restart kubelet
  3. systemctl enable --now kubelet
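
Because the kubelets join through TLS bootstrapping, each node should submit a CertificateSigningRequest that is approved automatically by the RBAC rules in bootstrap.secret.yaml. A quick sanity check from master01 (assuming those auto-approval rules were applied in step 7.2):

  1. kubectl get csr
  2. # every node should show a CSR in the Approved,Issued state
  3. # if a CSR is stuck in Pending, it can be approved manually:
  4. # kubectl certificate approve <csr-name>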

8.2.4 Check the cluster

  1. [root@k8s-master01 ~]# kubectl  get node
  2. NAME           STATUS     ROLES    AGE   VERSION
  3. k8s-master01   Ready   <none>   12s   v1.24.1
  4. k8s-master02   Ready   <none>   12s   v1.24.1
  5. k8s-master03   Ready   <none>   12s   v1.24.1
  6. k8s-node01     Ready   <none>   12s   v1.24.1
  7. k8s-node02     Ready   <none>   12s   v1.24.1
  8. [root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

  1. for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
  2. for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig;  done

8.3.2 Add the kube-proxy service file on all k8s nodes

  1. cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
  2. [Unit]
  3. Description=Kubernetes Kube Proxy
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-proxy \
  8.   --config=/etc/kubernetes/kube-proxy.yaml \
  9.   --v=2
  10. Restart=always
  11. RestartSec=10s
  12. [Install]
  13. WantedBy=multi-user.target
  14. EOF

8.3.3 Add the kube-proxy configuration on all k8s nodes

  1. cat > /etc/kubernetes/kube-proxy.yaml << EOF
  2. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  3. bindAddress: 0.0.0.0
  4. clientConnection:
  5.   acceptContentTypes: ""
  6.   burst: 10
  7.   contentType: application/vnd.kubernetes.protobuf
  8.   kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  9.   qps: 5
  10. clusterCIDR: 172.16.0.0/12,fc00::/48 
  11. configSyncPeriod: 15m0s
  12. conntrack:
  13.   max: null
  14.   maxPerCore: 32768
  15.   min: 131072
  16.   tcpCloseWaitTimeout: 1h0m0s
  17.   tcpEstablishedTimeout: 24h0m0s
  18. enableProfiling: false
  19. healthzBindAddress: 0.0.0.0:10256
  20. hostnameOverride: ""
  21. iptables:
  22.   masqueradeAll: false
  23.   masqueradeBit: 14
  24.   minSyncPeriod: 0s
  25.   syncPeriod: 30s
  26. ipvs:
  27.   masqueradeAll: true
  28.   minSyncPeriod: 5s
  29.   scheduler: "rr"
  30.   syncPeriod: 30s
  31. kind: KubeProxyConfiguration
  32. metricsBindAddress: 127.0.0.1:10249
  33. mode: "ipvs"
  34. nodePortAddresses: null
  35. oomScoreAdj: -999
  36. portRange: ""
  37. udpIdleTimeout: 250ms
  38. EOF

8.3.4 Start kube-proxy

  1. systemctl daemon-reload
  2.  systemctl restart kube-proxy
  3.  systemctl enable --now kube-proxy
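
Since kube-proxy runs in ipvs mode here, the virtual servers it programs can be inspected with ipvsadm. This is an optional check and assumes ipvsadm is installed first:

  1. apt install ipvsadm -y
  2. # list the IPVS virtual servers programmed by kube-proxy
  3. ipvsadm -Ln
  4. # the kubernetes service VIP 10.96.0.1:443 should appear, backed by the three apiserver addresses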

9. Install Calico

9.1 Perform the following steps only on master01

9.1.1 Change the calico CIDRs

  1. # vim calico.yaml
  2. vim calico-ipv6.yaml
  3. # In the calico-config ConfigMap
  4.     "ipam": {
  5.         "type": "calico-ipam",
  6.         "assign_ipv4": "true",
  7.         "assign_ipv6": "true"
  8.     },
  9.     - name: IP
  10.       value: "autodetect"
  11.     - name: IP6
  12.       value: "autodetect"
  13.     - name: CALICO_IPV4POOL_CIDR
  14.       value: "172.16.0.0/16"
  15.     - name: CALICO_IPV6POOL_CIDR
  16.       value: "fc00::/48"
  17.     - name: FELIX_IPV6SUPPORT
  18.       value: "true"
  19. # kubectl apply -f calico.yaml
  20. kubectl apply -f calico-ipv6.yaml

9.1.2 Check the container status

  1. [root@k8s-master01 ~]# kubectl  get pod -A
  2. NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  3. kube-system   calico-kube-controllers-7fb57bc4b5-dwwg8   1/1     Running   0          23s
  4. kube-system   calico-node-b8p4z                          1/1     Running   0          23s
  5. kube-system   calico-node-c4lzj                          1/1     Running   0          23s
  6. kube-system   calico-node-dfh2m                          1/1     Running   0          23s
  7. kube-system   calico-node-gbhgn                          1/1     Running   0          23s
  8. kube-system   calico-node-ht6nl                          1/1     Running   0          23s
  9. kube-system   calico-typha-dd885f47-jvgsj                1/1     Running   0          23s
  10. [root@k8s-master01 ~]#
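
To confirm that dual-stack CIDR allocation is working, check that each node received both an IPv4 and an IPv6 pod CIDR (these come from the --cluster-cidr and --node-cidr-mask-size-* flags on kube-controller-manager). A small check, assuming the node names used in this document:

  1. kubectl get node k8s-master01 -o jsonpath='{.spec.podCIDRs}{"\n"}'
  2. # expected output similar to: ["172.16.0.0/24","fc00::/64"]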

10. Install CoreDNS

10.1 Perform the following steps only on master01

10.1.1 Modify the file

  1. cd coredns/
  2. sed -i "s#10.96.0.10#10.96.0.10#g" coredns.yaml    # the clusterIP already matches the default here; put your own kube-dns IP on the right-hand side if your service CIDR differs
  3. cat coredns.yaml | grep clusterIP:
  4.   clusterIP: 10.96.0.10

10.1.2 Install

  1. kubectl  create -f coredns.yaml 
  2. serviceaccount/coredns created
  3. clusterrole.rbac.authorization.k8s.io/system:coredns created
  4. clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
  5. configmap/coredns created
  6. deployment.apps/coredns created
  7. service/kube-dns created
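
Before moving on, it is worth confirming that the kube-dns service picked up the clusterIP the kubelets were told to use (clusterDNS: 10.96.0.10 in kubelet-conf.yml):

  1. kubectl get svc -n kube-system kube-dns
  2. # CLUSTER-IP must be 10.96.0.10, otherwise pods will not resolve cluster DNS names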

11. Install Metrics Server

11.1 Perform the following steps only on master01

11.1.1 Install Metrics-server

In newer Kubernetes versions, system resource metrics are collected by Metrics-server, which can report node and Pod CPU, memory, disk, and network usage.

  1. # Install metrics server
  2. cd metrics-server/
  3. kubectl  apply -f metrics-server.yaml

11.1.2 Wait a moment, then check the status

  1. kubectl  top node
  2. NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
  3. k8s-master01   154m         1%     1715Mi          21%       
  4. k8s-master02   151m         1%     1274Mi          16%       
  5. k8s-master03   523m         6%     1345Mi          17%       
  6. k8s-node01     84m          1%     671Mi           8%        
  7. k8s-node02     73m          0%     727Mi           9%        
  8. k8s-node03     96m          1%     769Mi           9%        
  9. k8s-node04     68m          0%     673Mi           8%        
  10. k8s-node05     82m          1%     679Mi           8%
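
If kubectl top keeps failing, the usual cause is that the metrics APIService is not yet Available (this relies on the aggregation-layer flags configured on kube-apiserver earlier). A quick check, assuming the labels used by the upstream metrics-server manifest:

  1. kubectl get apiservice v1beta1.metrics.k8s.io
  2. # the AVAILABLE column should read True; if not, inspect the metrics-server logs:
  3. kubectl logs -n kube-system -l k8s-app=metrics-server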

12. Cluster verification

12.1 Deploy a pod resource

  1. cat<<EOF | kubectl apply -f -
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: busybox
  6.   namespace: default
  7. spec:
  8.   containers:
  9.   - name: busybox
  10.     image: busybox:1.28
  11.     command:
  12.       - sleep
  13.       - "3600"
  14.     imagePullPolicy: IfNotPresent
  15.   restartPolicy: Always
  16. EOF
  17. # Check
  18. kubectl  get pod
  19. NAME      READY   STATUS    RESTARTS   AGE
  20. busybox   1/1     Running   0          17s

12.2 Use the pod to resolve the kubernetes service in the default namespace

  1. kubectl get svc
  2. NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  3. kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h
  4. kubectl exec  busybox -n default -- nslookup kubernetes
  5. Server:    10.96.0.10
  6. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  7. Name:      kubernetes
  8. Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

  1. kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
  2. Server:    10.96.0.10
  3. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  4. Name:      kube-dns.kube-system
  5. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53

  1. telnet 10.96.0.1 443
  2. Trying 10.96.0.1...
  3. Connected to 10.96.0.1.
  4. Escape character is '^]'.
  5.  telnet 10.96.0.10 53
  6. Trying 10.96.0.10...
  7. Connected to 10.96.0.10.
  8. Escape character is '^]'.
  9. curl 10.96.0.10:53
  10. curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

  1. kubectl get po -owide
  2. NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
  3. busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>
  4.  kubectl get po -n kube-system -owide
  5. NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
  6. calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
  7. calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.11     k8s-master01   <none>           <none>
  8. calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.14     k8s-node01     <none>           <none>
  9. calico-node-mdps8                          1/1     Running   0             77m   192.168.1.15     k8s-node02     <none>           <none>
  10. calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.13     k8s-master03   <none>           <none>
  11. calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.12     k8s-master02   <none>           <none>
  12. calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.15     k8s-node02     <none>           <none>
  13. calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.11     k8s-master01   <none>           <none>
  14. calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.14     k8s-node01     <none>           <none>
  15. coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
  16. metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>
  17. # Exec into busybox and ping an address on another node
  18. kubectl exec -ti busybox -- sh
  19. / # ping 192.168.1.14
  20. PING 192.168.1.14 (192.168.1.14): 56 data bytes
  21. 64 bytes from 192.168.1.14: seq=0 ttl=63 time=0.358 ms
  22. 64 bytes from 192.168.1.14: seq=1 ttl=63 time=0.668 ms
  23. 64 bytes from 192.168.1.14: seq=2 ttl=63 time=0.637 ms
  24. 64 bytes from 192.168.1.14: seq=3 ttl=63 time=0.624 ms
  25. 64 bytes from 192.168.1.14: seq=4 ttl=63 time=0.907 ms
  26. # Successful replies show that this pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (delete them when done)

  1. cat > deployments.yaml << EOF
  2. apiVersion: apps/v1
  3. kind: Deployment
  4. metadata:
  5.   name: nginx-deployment
  6.   labels:
  7.     app: nginx
  8. spec:
  9.   replicas: 3
  10.   selector:
  11.     matchLabels:
  12.       app: nginx
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: nginx
  17.     spec:
  18.       containers:
  19.       - name: nginx
  20.         image: nginx:1.14.2
  21.         ports:
  22.         - containerPort: 80
  23. EOF
  24. kubectl  apply -f deployments.yaml 
  25. deployment.apps/nginx-deployment created
  26. kubectl  get pod 
  27. NAME                               READY   STATUS    RESTARTS   AGE
  28. busybox                            1/1     Running   0          6m25s
  29. nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
  30. nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
  31. nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s
  32. # Delete the nginx deployment
  33. [root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

  1. wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
  2. wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml
  3. kubectl  apply -f dashboard.yaml
  4. kubectl  apply -f dashboard-user.yaml

13.1 Change the dashboard svc to NodePort (skip if it already is)

  1. kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  2.   type: NodePort

13.2 Check the port number

  1. kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
  2. NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
  3. kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3 Create a token

  1. kubectl -n kubernetes-dashboard create token admin-user
  2. eyJhbGciOiJSUzI1NiIsImtpZCI6ImxkV1hHaHViN2d3STVLTkxtbFkyaUZPdnhWa0s2NjUzRGVrNmJhMjVpRmsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjUzODMwMTUwLCJpYXQiOjE2NTM4MjY1NTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDZlOTI2YWUtNDExYS00YTU3LTk3NWUtOWI4ZTEyMzYyZjg1In19LCJuYmYiOjE2NTM4MjY1NTAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.ZSGJmGQc0F1jeJp8SwgZQ0a9ynTYi-y1JNUBJBhjRVStS9KphVK5MLpRxV4KqzhzGt8pR20nNZGop3na6EgIXVJ8XNrlQQO8kZV_I11ylw_mqL7sjCK_UsxJODOOvoRzOJMN3Qd9ONLB3cPjge9zIGeRvaEwpQulOWALScyQvO__1LkSjqz2DPQM7aDh0Gt6VZ2-JoVgTlEBy--nF-Okb0qyHMI8KEcqv7BnI1rJw5rETL7JrYBM3YIWY8_Ft71w6dKn7UhEbB9tPVMi0ymGTpUVja2M2ypsDymrMlcd4doRUn98F_i0iGW4ZN3CweRDFnkwwIUODjTn1fdp1uPXnQ
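
Note that kubectl create token issues a short-lived token (roughly an hour by default; a longer lifetime can be requested with --duration). Since Kubernetes 1.24 no longer auto-creates ServiceAccount token Secrets, a long-lived dashboard token has to be created explicitly if you want one. A sketch, assuming the admin-user ServiceAccount from dashboard-user.yaml:

  1. cat << EOF | kubectl apply -f -
  2. apiVersion: v1
  3. kind: Secret
  4. metadata:
  5.   name: admin-user-token
  6.   namespace: kubernetes-dashboard
  7.   annotations:
  8.     kubernetes.io/service-account.name: admin-user
  9. type: kubernetes.io/service-account-token
  10. EOF
  11. # read the long-lived token back out
  12. kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d; echo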

13.4 Log in to the dashboard

https://192.168.1.11:30034/

14. Install ingress

14.1 Write the configuration file and apply it

  1. [root@hello ~/yaml]# vim deploy.yaml
  2. [root@hello ~/yaml]# cat deploy.yaml
  3. apiVersion: v1
  4. kind: Namespace
  5. metadata:
  6.   name: ingress-nginx
  7.   labels:
  8.     app.kubernetes.io/name: ingress-nginx
  9.     app.kubernetes.io/instance: ingress-nginx
  10. ---
  11. # Source: ingress-nginx/templates/controller-serviceaccount.yaml
  12. apiVersion: v1
  13. kind: ServiceAccount
  14. metadata:
  15.   labels:
  16.     helm.sh/chart: ingress-nginx-4.0.10
  17.     app.kubernetes.io/name: ingress-nginx
  18.     app.kubernetes.io/instance: ingress-nginx
  19.     app.kubernetes.io/version: 1.1.0
  20.     app.kubernetes.io/managed-by: Helm
  21.     app.kubernetes.io/component: controller
  22.   name: ingress-nginx
  23.   namespace: ingress-nginx
  24. automountServiceAccountToken: true
  25. ---
  26. # Source: ingress-nginx/templates/controller-configmap.yaml
  27. apiVersion: v1
  28. kind: ConfigMap
  29. metadata:
  30.   labels:
  31.     helm.sh/chart: ingress-nginx-4.0.10
  32.     app.kubernetes.io/name: ingress-nginx
  33.     app.kubernetes.io/instance: ingress-nginx
  34.     app.kubernetes.io/version: 1.1.0
  35.     app.kubernetes.io/managed-by: Helm
  36.     app.kubernetes.io/component: controller
  37.   name: ingress-nginx-controller
  38.   namespace: ingress-nginx
  39. data:
  40.   allow-snippet-annotations: 'true'
  41. ---
  42. # Source: ingress-nginx/templates/clusterrole.yaml
  43. apiVersion: rbac.authorization.k8s.io/v1
  44. kind: ClusterRole
  45. metadata:
  46.   labels:
  47.     helm.sh/chart: ingress-nginx-4.0.10
  48.     app.kubernetes.io/name: ingress-nginx
  49.     app.kubernetes.io/instance: ingress-nginx
  50.     app.kubernetes.io/version: 1.1.0
  51.     app.kubernetes.io/managed-by: Helm
  52.   name: ingress-nginx
  53. rules:
  54.   - apiGroups:
  55.       - ''
  56.     resources:
  57.       - configmaps
  58.       - endpoints
  59.       - nodes
  60.       - pods
  61.       - secrets
  62.       - namespaces
  63.     verbs:
  64.       - list
  65.       - watch
  66.   - apiGroups:
  67.       - ''
  68.     resources:
  69.       - nodes
  70.     verbs:
  71.       - get
  72.   - apiGroups:
  73.       - ''
  74.     resources:
  75.       - services
  76.     verbs:
  77.       - get
  78.       - list
  79.       - watch
  80.   - apiGroups:
  81.       - networking.k8s.io
  82.     resources:
  83.       - ingresses
  84.     verbs:
  85.       - get
  86.       - list
  87.       - watch
  88.   - apiGroups:
  89.       - ''
  90.     resources:
  91.       - events
  92.     verbs:
  93.       - create
  94.       - patch
  95.   - apiGroups:
  96.       - networking.k8s.io
  97.     resources:
  98.       - ingresses/status
  99.     verbs:
  100.       - update
  101.   - apiGroups:
  102.       - networking.k8s.io
  103.     resources:
  104.       - ingressclasses
  105.     verbs:
  106.       - get
  107.       - list
  108.       - watch
  109. ---
  110. # Source: ingress-nginx/templates/clusterrolebinding.yaml
  111. apiVersion: rbac.authorization.k8s.io/v1
  112. kind: ClusterRoleBinding
  113. metadata:
  114.   labels:
  115.     helm.sh/chart: ingress-nginx-4.0.10
  116.     app.kubernetes.io/name: ingress-nginx
  117.     app.kubernetes.io/instance: ingress-nginx
  118.     app.kubernetes.io/version: 1.1.0
  119.     app.kubernetes.io/managed-by: Helm
  120.   name: ingress-nginx
  121. roleRef:
  122.   apiGroup: rbac.authorization.k8s.io
  123.   kind: ClusterRole
  124.   name: ingress-nginx
  125. subjects:
  126.   - kind: ServiceAccount
  127.     name: ingress-nginx
  128.     namespace: ingress-nginx
  129. ---
  130. # Source: ingress-nginx/templates/controller-role.yaml
  131. apiVersion: rbac.authorization.k8s.io/v1
  132. kind: Role
  133. metadata:
  134.   labels:
  135.     helm.sh/chart: ingress-nginx-4.0.10
  136.     app.kubernetes.io/name: ingress-nginx
  137.     app.kubernetes.io/instance: ingress-nginx
  138.     app.kubernetes.io/version: 1.1.0
  139.     app.kubernetes.io/managed-by: Helm
  140.     app.kubernetes.io/component: controller
  141.   name: ingress-nginx
  142.   namespace: ingress-nginx
  143. rules:
  144.   - apiGroups:
  145.       - ''
  146.     resources:
  147.       - namespaces
  148.     verbs:
  149.       - get
  150.   - apiGroups:
  151.       - ''
  152.     resources:
  153.       - configmaps
  154.       - pods
  155.       - secrets
  156.       - endpoints
  157.     verbs:
  158.       - get
  159.       - list
  160.       - watch
  161.   - apiGroups:
  162.       - ''
  163.     resources:
  164.       - services
  165.     verbs:
  166.       - get
  167.       - list
  168.       - watch
  169.   - apiGroups:
  170.       - networking.k8s.io
  171.     resources:
  172.       - ingresses
  173.     verbs:
  174.       - get
  175.       - list
  176.       - watch
  177.   - apiGroups:
  178.       - networking.k8s.io
  179.     resources:
  180.       - ingresses/status
  181.     verbs:
  182.       - update
  183.   - apiGroups:
  184.       - networking.k8s.io
  185.     resources:
  186.       - ingressclasses
  187.     verbs:
  188.       - get
  189.       - list
  190.       - watch
  191.   - apiGroups:
  192.       - ''
  193.     resources:
  194.       - configmaps
  195.     resourceNames:
  196.       - ingress-controller-leader
  197.     verbs:
  198.       - get
  199.       - update
  200.   - apiGroups:
  201.       - ''
  202.     resources:
  203.       - configmaps
  204.     verbs:
  205.       - create
  206.   - apiGroups:
  207.       - ''
  208.     resources:
  209.       - events
  210.     verbs:
  211.       - create
  212.       - patch
  213. ---
  214. # Source: ingress-nginx/templates/controller-rolebinding.yaml
  215. apiVersion: rbac.authorization.k8s.io/v1
  216. kind: RoleBinding
  217. metadata:
  218.   labels:
  219.     helm.sh/chart: ingress-nginx-4.0.10
  220.     app.kubernetes.io/name: ingress-nginx
  221.     app.kubernetes.io/instance: ingress-nginx
  222.     app.kubernetes.io/version: 1.1.0
  223.     app.kubernetes.io/managed-by: Helm
  224.     app.kubernetes.io/component: controller
  225.   name: ingress-nginx
  226.   namespace: ingress-nginx
  227. roleRef:
  228.   apiGroup: rbac.authorization.k8s.io
  229.   kind: Role
  230.   name: ingress-nginx
  231. subjects:
  232.   - kind: ServiceAccount
  233.     name: ingress-nginx
  234.     namespace: ingress-nginx
  235. ---
  236. # Source: ingress-nginx/templates/controller-service-webhook.yaml
  237. apiVersion: v1
  238. kind: Service
  239. metadata:
  240.   labels:
  241.     helm.sh/chart: ingress-nginx-4.0.10
  242.     app.kubernetes.io/name: ingress-nginx
  243.     app.kubernetes.io/instance: ingress-nginx
  244.     app.kubernetes.io/version: 1.1.0
  245.     app.kubernetes.io/managed-by: Helm
  246.     app.kubernetes.io/component: controller
  247.   name: ingress-nginx-controller-admission
  248.   namespace: ingress-nginx
  249. spec:
  250.   type: ClusterIP
  251.   ports:
  252.     - name: https-webhook
  253.       port: 443
  254.       targetPort: webhook
  255.       appProtocol: https
  256.   selector:
  257.     app.kubernetes.io/name: ingress-nginx
  258.     app.kubernetes.io/instance: ingress-nginx
  259.     app.kubernetes.io/component: controller
  260. ---
  261. # Source: ingress-nginx/templates/controller-service.yaml
  262. apiVersion: v1
  263. kind: Service
  264. metadata:
  265.   annotations:
  266.   labels:
  267.     helm.sh/chart: ingress-nginx-4.0.10
  268.     app.kubernetes.io/name: ingress-nginx
  269.     app.kubernetes.io/instance: ingress-nginx
  270.     app.kubernetes.io/version: 1.1.0
  271.     app.kubernetes.io/managed-by: Helm
  272.     app.kubernetes.io/component: controller
  273.   name: ingress-nginx-controller
  274.   namespace: ingress-nginx
  275. spec:
  276.   type: NodePort
  277.   externalTrafficPolicy: Local
  278.   ipFamilyPolicy: SingleStack
  279.   ipFamilies:
  280.     - IPv4
  281.   ports:
  282.     - name: http
  283.       port: 80
  284.       protocol: TCP
  285.       targetPort: http
  286.       appProtocol: http
  287.     - name: https
  288.       port: 443
  289.       protocol: TCP
  290.       targetPort: https
  291.       appProtocol: https
  292.   selector:
  293.     app.kubernetes.io/name: ingress-nginx
  294.     app.kubernetes.io/instance: ingress-nginx
  295.     app.kubernetes.io/component: controller
  296. ---
  297. # Source: ingress-nginx/templates/controller-deployment.yaml
  298. apiVersion: apps/v1
  299. kind: Deployment
  300. metadata:
  301.   labels:
  302.     helm.sh/chart: ingress-nginx-4.0.10
  303.     app.kubernetes.io/name: ingress-nginx
  304.     app.kubernetes.io/instance: ingress-nginx
  305.     app.kubernetes.io/version: 1.1.0
  306.     app.kubernetes.io/managed-by: Helm
  307.     app.kubernetes.io/component: controller
  308.   name: ingress-nginx-controller
  309.   namespace: ingress-nginx
  310. spec:
  311.   selector:
  312.     matchLabels:
  313.       app.kubernetes.io/name: ingress-nginx
  314.       app.kubernetes.io/instance: ingress-nginx
  315.       app.kubernetes.io/component: controller
  316.   revisionHistoryLimit: 10
  317.   minReadySeconds: 0
  318.   template:
  319.     metadata:
  320.       labels:
  321.         app.kubernetes.io/name: ingress-nginx
  322.         app.kubernetes.io/instance: ingress-nginx
  323.         app.kubernetes.io/component: controller
  324.     spec:
  325.       dnsPolicy: ClusterFirst
  326.       containers:
  327.         - name: controller
  328.           image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.2.0 
  329.           imagePullPolicy: IfNotPresent
  330.           lifecycle:
  331.             preStop:
  332.               exec:
  333.                 command:
  334.                   - /wait-shutdown
  335.           args:
  336.             - /nginx-ingress-controller
  337.             - --election-id=ingress-controller-leader
  338.             - --controller-class=k8s.io/ingress-nginx
  339.             - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  340.             - --validating-webhook=:8443
  341.             - --validating-webhook-certificate=/usr/local/certificates/cert
  342.             - --validating-webhook-key=/usr/local/certificates/key
  343.           securityContext:
  344.             capabilities:
  345.               drop:
  346.                 - ALL
  347.               add:
  348.                 - NET_BIND_SERVICE
  349.             runAsUser: 101
  350.             allowPrivilegeEscalation: true
  351.           env:
  352.             - name: POD_NAME
  353.               valueFrom:
  354.                 fieldRef:
  355.                   fieldPath: metadata.name
  356.             - name: POD_NAMESPACE
  357.               valueFrom:
  358.                 fieldRef:
  359.                   fieldPath: metadata.namespace
  360.             - name: LD_PRELOAD
  361.               value: /usr/local/lib/libmimalloc.so
  362.           livenessProbe:
  363.             failureThreshold: 5
  364.             httpGet:
  365.               path: /healthz
  366.               port: 10254
  367.               scheme: HTTP
  368.             initialDelaySeconds: 10
  369.             periodSeconds: 10
  370.             successThreshold: 1
  371.             timeoutSeconds: 1
  372.           readinessProbe:
  373.             failureThreshold: 3
  374.             httpGet:
  375.               path: /healthz
  376.               port: 10254
  377.               scheme: HTTP
  378.             initialDelaySeconds: 10
  379.             periodSeconds: 10
  380.             successThreshold: 1
  381.             timeoutSeconds: 1
  382.           ports:
  383.             - name: http
  384.               containerPort: 80
  385.               protocol: TCP
  386.             - name: https
  387.               containerPort: 443
  388.               protocol: TCP
  389.             - name: webhook
  390.               containerPort: 8443
  391.               protocol: TCP
  392.           volumeMounts:
  393.             - name: webhook-cert
  394.               mountPath: /usr/local/certificates/
  395.               readOnly: true
  396.           resources:
  397.             requests:
  398.               cpu: 100m
  399.               memory: 90Mi
  400.       nodeSelector:
  401.         kubernetes.io/os: linux
  402.       serviceAccountName: ingress-nginx
  403.       terminationGracePeriodSeconds: 300
  404.       volumes:
  405.         - name: webhook-cert
  406.           secret:
  407.             secretName: ingress-nginx-admission
  408. ---
  409. # Source: ingress-nginx/templates/controller-ingressclass.yaml
  410. # We don't support namespaced ingressClass yet
  411. # So a ClusterRole and a ClusterRoleBinding is required
  412. apiVersion: networking.k8s.io/v1
  413. kind: IngressClass
  414. metadata:
  415.   labels:
  416.     helm.sh/chart: ingress-nginx-4.0.10
  417.     app.kubernetes.io/name: ingress-nginx
  418.     app.kubernetes.io/instance: ingress-nginx
  419.     app.kubernetes.io/version: 1.1.0
  420.     app.kubernetes.io/managed-by: Helm
  421.     app.kubernetes.io/component: controller
  422.   name: nginx
  423.   namespace: ingress-nginx
  424. spec:
  425.   controller: k8s.io/ingress-nginx
  426. ---
  427. # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
  428. # before changing this value, check the required kubernetes version
  429. # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
  430. apiVersion: admissionregistration.k8s.io/v1
  431. kind: ValidatingWebhookConfiguration
  432. metadata:
  433.   labels:
  434.     helm.sh/chart: ingress-nginx-4.0.10
  435.     app.kubernetes.io/name: ingress-nginx
  436.     app.kubernetes.io/instance: ingress-nginx
  437.     app.kubernetes.io/version: 1.1.0
  438.     app.kubernetes.io/managed-by: Helm
  439.     app.kubernetes.io/component: admission-webhook
  440.   name: ingress-nginx-admission
  441. webhooks:
  442.   - name: validate.nginx.ingress.kubernetes.io
  443.     matchPolicy: Equivalent
  444.     rules:
  445.       - apiGroups:
  446.           - networking.k8s.io
  447.         apiVersions:
  448.           - v1
  449.         operations:
  450.           - CREATE
  451.           - UPDATE
  452.         resources:
  453.           - ingresses
  454.     failurePolicy: Fail
  455.     sideEffects: None
  456.     admissionReviewVersions:
  457.       - v1
  458.     clientConfig:
  459.       service:
  460.         namespace: ingress-nginx
  461.         name: ingress-nginx-controller-admission
  462.         path: /networking/v1/ingresses
  463. ---
  464. # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
  465. apiVersion: v1
  466. kind: ServiceAccount
  467. metadata:
  468.   name: ingress-nginx-admission
  469.   namespace: ingress-nginx
  470.   annotations:
  471.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  472.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  473.   labels:
  474.     helm.sh/chart: ingress-nginx-4.0.10
  475.     app.kubernetes.io/name: ingress-nginx
  476.     app.kubernetes.io/instance: ingress-nginx
  477.     app.kubernetes.io/version: 1.1.0
  478.     app.kubernetes.io/managed-by: Helm
  479.     app.kubernetes.io/component: admission-webhook
  480. ---
  481. # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
  482. apiVersion: rbac.authorization.k8s.io/v1
  483. kind: ClusterRole
  484. metadata:
  485.   name: ingress-nginx-admission
  486.   annotations:
  487.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  488.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  489.   labels:
  490.     helm.sh/chart: ingress-nginx-4.0.10
  491.     app.kubernetes.io/name: ingress-nginx
  492.     app.kubernetes.io/instance: ingress-nginx
  493.     app.kubernetes.io/version: 1.1.0
  494.     app.kubernetes.io/managed-by: Helm
  495.     app.kubernetes.io/component: admission-webhook
  496. rules:
  497.   - apiGroups:
  498.       - admissionregistration.k8s.io
  499.     resources:
  500.       - validatingwebhookconfigurations
  501.     verbs:
  502.       - get
  503.       - update
  504. ---
  505. # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
  506. apiVersion: rbac.authorization.k8s.io/v1
  507. kind: ClusterRoleBinding
  508. metadata:
  509.   name: ingress-nginx-admission
  510.   annotations:
  511.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  512.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  513.   labels:
  514.     helm.sh/chart: ingress-nginx-4.0.10
  515.     app.kubernetes.io/name: ingress-nginx
  516.     app.kubernetes.io/instance: ingress-nginx
  517.     app.kubernetes.io/version: 1.1.0
  518.     app.kubernetes.io/managed-by: Helm
  519.     app.kubernetes.io/component: admission-webhook
  520. roleRef:
  521.   apiGroup: rbac.authorization.k8s.io
  522.   kind: ClusterRole
  523.   name: ingress-nginx-admission
  524. subjects:
  525.   - kind: ServiceAccount
  526.     name: ingress-nginx-admission
  527.     namespace: ingress-nginx
  528. ---
  529. # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
  530. apiVersion: rbac.authorization.k8s.io/v1
  531. kind: Role
  532. metadata:
  533.   name: ingress-nginx-admission
  534.   namespace: ingress-nginx
  535.   annotations:
  536.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  537.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  538.   labels:
  539.     helm.sh/chart: ingress-nginx-4.0.10
  540.     app.kubernetes.io/name: ingress-nginx
  541.     app.kubernetes.io/instance: ingress-nginx
  542.     app.kubernetes.io/version: 1.1.0
  543.     app.kubernetes.io/managed-by: Helm
  544.     app.kubernetes.io/component: admission-webhook
  545. rules:
  546.   - apiGroups:
  547.       - ''
  548.     resources:
  549.       - secrets
  550.     verbs:
  551.       - get
  552.       - create
  553. ---
  554. # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
  555. apiVersion: rbac.authorization.k8s.io/v1
  556. kind: RoleBinding
  557. metadata:
  558.   name: ingress-nginx-admission
  559.   namespace: ingress-nginx
  560.   annotations:
  561.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  562.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  563.   labels:
  564.     helm.sh/chart: ingress-nginx-4.0.10
  565.     app.kubernetes.io/name: ingress-nginx
  566.     app.kubernetes.io/instance: ingress-nginx
  567.     app.kubernetes.io/version: 1.1.0
  568.     app.kubernetes.io/managed-by: Helm
  569.     app.kubernetes.io/component: admission-webhook
  570. roleRef:
  571.   apiGroup: rbac.authorization.k8s.io
  572.   kind: Role
  573.   name: ingress-nginx-admission
  574. subjects:
  575.   - kind: ServiceAccount
  576.     name: ingress-nginx-admission
  577.     namespace: ingress-nginx
  578. ---
  579. # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
  580. apiVersion: batch/v1
  581. kind: Job
  582. metadata:
  583.   name: ingress-nginx-admission-create
  584.   namespace: ingress-nginx
  585.   annotations:
  586.     helm.sh/hook: pre-install,pre-upgrade
  587.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  588.   labels:
  589.     helm.sh/chart: ingress-nginx-4.0.10
  590.     app.kubernetes.io/name: ingress-nginx
  591.     app.kubernetes.io/instance: ingress-nginx
  592.     app.kubernetes.io/version: 1.1.0
  593.     app.kubernetes.io/managed-by: Helm
  594.     app.kubernetes.io/component: admission-webhook
  595. spec:
  596.   template:
  597.     metadata:
  598.       name: ingress-nginx-admission-create
  599.       labels:
  600.         helm.sh/chart: ingress-nginx-4.0.10
  601.         app.kubernetes.io/name: ingress-nginx
  602.         app.kubernetes.io/instance: ingress-nginx
  603.         app.kubernetes.io/version: 1.1.0
  604.         app.kubernetes.io/managed-by: Helm
  605.         app.kubernetes.io/component: admission-webhook
  606.     spec:
  607.       containers:
  608.         - name: create
  609.           image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.2.0 
  610.           imagePullPolicy: IfNotPresent
  611.           args:
  612.             - create
  613.             - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
  614.             - --namespace=$(POD_NAMESPACE)
  615.             - --secret-name=ingress-nginx-admission
  616.           env:
  617.             - name: POD_NAMESPACE
  618.               valueFrom:
  619.                 fieldRef:
  620.                   fieldPath: metadata.namespace
  621.           securityContext:
  622.             allowPrivilegeEscalation: false
  623.       restartPolicy: OnFailure
  624.       serviceAccountName: ingress-nginx-admission
  625.       nodeSelector:
  626.         kubernetes.io/os: linux
  627.       securityContext:
  628.         runAsNonRoot: true
  629.         runAsUser: 2000
  630. ---
  631. # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
  632. apiVersion: batch/v1
  633. kind: Job
  634. metadata:
  635.   name: ingress-nginx-admission-patch
  636.   namespace: ingress-nginx
  637.   annotations:
  638.     helm.sh/hook: post-install,post-upgrade
  639.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  640.   labels:
  641.     helm.sh/chart: ingress-nginx-4.0.10
  642.     app.kubernetes.io/name: ingress-nginx
  643.     app.kubernetes.io/instance: ingress-nginx
  644.     app.kubernetes.io/version: 1.1.0
  645.     app.kubernetes.io/managed-by: Helm
  646.     app.kubernetes.io/component: admission-webhook
  647. spec:
  648.   template:
  649.     metadata:
  650.       name: ingress-nginx-admission-patch
  651.       labels:
  652.         helm.sh/chart: ingress-nginx-4.0.10
  653.         app.kubernetes.io/name: ingress-nginx
  654.         app.kubernetes.io/instance: ingress-nginx
  655.         app.kubernetes.io/version: 1.1.0
  656.         app.kubernetes.io/managed-by: Helm
  657.         app.kubernetes.io/component: admission-webhook
  658.     spec:
  659.       containers:
  660.         - name: patch
  661.           image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1 
  662.           imagePullPolicy: IfNotPresent
  663.           args:
  664.             - patch
  665.             - --webhook-name=ingress-nginx-admission
  666.             - --namespace=$(POD_NAMESPACE)
  667.             - --patch-mutating=false
  668.             - --secret-name=ingress-nginx-admission
  669.             - --patch-failure-policy=Fail
  670.           env:
  671.             - name: POD_NAMESPACE
  672.               valueFrom:
  673.                 fieldRef:
  674.                   fieldPath: metadata.namespace
  675.           securityContext:
  676.             allowPrivilegeEscalation: false
  677.       restartPolicy: OnFailure
  678.       serviceAccountName: ingress-nginx-admission
  679.       nodeSelector:
  680.         kubernetes.io/os: linux
  681.       securityContext:
  682.         runAsNonRoot: true
  683.         runAsUser: 2000
  684. [root@hello ~/yaml]#

14.2 Enable the default backend: write the configuration file and apply it

  1. [root@hello ~/yaml]# vim backend.yaml
  2. [root@hello ~/yaml]# cat backend.yaml
  3. apiVersion: apps/v1
  4. kind: Deployment
  5. metadata:
  6.   name: default-http-backend
  7.   labels:
  8.     app.kubernetes.io/name: default-http-backend
  9.   namespace: kube-system
  10. spec:
  11.   replicas: 1
  12.   selector:
  13.     matchLabels:
  14.       app.kubernetes.io/name: default-http-backend
  15.   template:
  16.     metadata:
  17.       labels:
  18.         app.kubernetes.io/name: default-http-backend
  19.     spec:
  20.       terminationGracePeriodSeconds: 60
  21.       containers:
  22.       - name: default-http-backend
  23.         image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5 
  24.         livenessProbe:
  25.           httpGet:
  26.             path: /healthz
  27.             port: 8080
  28.             scheme: HTTP
  29.           initialDelaySeconds: 30
  30.           timeoutSeconds: 5
  31.         ports:
  32.         - containerPort: 8080
  33.         resources:
  34.           limits:
  35.             cpu: 10m
  36.             memory: 20Mi
  37.           requests:
  38.             cpu: 10m
  39.             memory: 20Mi
  40. ---
  41. apiVersion: v1
  42. kind: Service
  43. metadata:
  44.   name: default-http-backend
  45.   namespace: kube-system
  46.   labels:
  47.     app.kubernetes.io/name: default-http-backend
  48. spec:
  49.   ports:
  50.   - port: 80
  51.     targetPort: 8080
  52.   selector:
  53.     app.kubernetes.io/name: default-http-backend
  54. [root@hello ~/yaml]#

14.3 Install a test application

  1. [root@hello ~/yaml]# vim ingress-demo-app.yaml
  2. [root@hello ~/yaml]#
  3. [root@hello ~/yaml]# cat ingress-demo-app.yaml
  4. apiVersion: apps/v1
  5. kind: Deployment
  6. metadata:
  7.   name: hello-server
  8. spec:
  9.   replicas: 2
  10.   selector:
  11.     matchLabels:
  12.       app: hello-server
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: hello-server
  17.     spec:
  18.       containers:
  19.       - name: hello-server
  20.         image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
  21.         ports:
  22.         - containerPort: 9000
  23. ---
  24. apiVersion: apps/v1
  25. kind: Deployment
  26. metadata:
  27.   labels:
  28.     app: nginx-demo
  29.   name: nginx-demo
  30. spec:
  31.   replicas: 2
  32.   selector:
  33.     matchLabels:
  34.       app: nginx-demo
  35.   template:
  36.     metadata:
  37.       labels:
  38.         app: nginx-demo
  39.     spec:
  40.       containers:
  41.       - image: nginx
  42.         name: nginx
  43. ---
  44. apiVersion: v1
  45. kind: Service
  46. metadata:
  47.   labels:
  48.     app: nginx-demo
  49.   name: nginx-demo
  50. spec:
  51.   selector:
  52.     app: nginx-demo
  53.   ports:
  54.   - port: 8000
  55.     protocol: TCP
  56.     targetPort: 80
  57. ---
  58. apiVersion: v1
  59. kind: Service
  60. metadata:
  61.   labels:
  62.     app: hello-server
  63.   name: hello-server
  64. spec:
  65.   selector:
  66.     app: hello-server
  67.   ports:
  68.   - port: 8000
  69.     protocol: TCP
  70.     targetPort: 9000
  71. ---
  72. apiVersion: networking.k8s.io/v1
  73. kind: Ingress  
  74. metadata:
  75.   name: ingress-host-bar
  76. spec:
  77.   ingressClassName: nginx
  78.   rules:
  79.   - host: "hello.chenby.cn"
  80.     http:
  81.       paths:
  82.       - pathType: Prefix
  83.         path: "/"
  84.         backend:
  85.           service:
  86.             name: hello-server
  87.             port:
  88.               number: 8000
  89.   - host: "demo.chenby.cn"
  90.     http:
  91.       paths:
  92.       - pathType: Prefix
  93.         path: "/nginx"  
  94.         backend:
  95.           service:
  96.             name: nginx-demo
  97.             port:
  98.               number: 8000

14.4 Deploy

  1. kubectl  apply -f deploy.yaml 
  2. kubectl  apply -f backend.yaml 
  3. # Wait until the above resources are created, then run:
  4. kubectl  apply -f ingress-demo-app.yaml 
  5. kubectl  get ingress
  6. NAME               CLASS   HOSTS                            ADDRESS     PORTS   AGE
  7. ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.12   80      7s

14.5 Filter for the ingress ports

  1. [root@hello ~/yaml]# kubectl  get svc -A | grep ingress
  2. ingress-nginx          ingress-nginx-controller             NodePort    10.104.231.36    <none>        80:32636/TCP,443:30579/TCP   104s
  3. ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.101.85.88     <none>        443/TCP                      105s
  4. [root@hello ~/yaml]#
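
With the controller exposed on a NodePort, the two hosts defined in ingress-demo-app.yaml can be tested with curl by overriding the Host header. The node IP and HTTP NodePort below come from the example output above; substitute your own values:

  1. curl -H 'Host: hello.chenby.cn' http://192.168.1.12:32636
  2. # should return the hello-server response
  3. curl -H 'Host: demo.chenby.cn' http://192.168.1.12:32636/nginx
  4. # routed to nginx-demo; plain nginx will typically answer /nginx with a 404, which still proves the routing works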

15. IPv6 test

  1. # Deploy the application
  2. [root@k8s-master01 ~]# vim cby.yaml 
  3. [root@k8s-master01 ~]# cat cby.yaml 
  4. apiVersion: apps/v1
  5. kind: Deployment
  6. metadata:
  7.   name: chenby
  8. spec:
  9.   replicas: 3
  10.   selector:
  11.     matchLabels:
  12.       app: chenby
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: chenby
  17.     spec:
  18.       containers:
  19.       - name: chenby
  20.         image: nginx
  21.         resources:
  22.           limits:
  23.             memory: "128Mi"
  24.             cpu: "500m"
  25.         ports:
  26.         - containerPort: 80
  27. ---
  28. apiVersion: v1
  29. kind: Service
  30. metadata:
  31.   name: chenby
  32. spec:
  33.   ipFamilyPolicy: PreferDualStack
  34.   ipFamilies:
  35.   - IPv6
  36.   - IPv4
  37.   type: NodePort
  38.   selector:
  39.     app: chenby
  40.   ports:
  41.   - port: 80
  42.     targetPort: 80
  43. [root@k8s-master01 ~]# kubectl  apply -f cby.yaml
  44. # Check the port
  45. [root@k8s-master01 ~]# kubectl  get svc
  46. NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
  47. chenby         NodePort    fd00::a29c       <none>        80:30779/TCP   5s
  48. [root@k8s-master01 ~]# 
  49. # Access via the internal network
  50. [root@localhost yaml]# curl -I http://[fd00::a29c]
  51. HTTP/1.1 200 OK
  52. Server: nginx/1.21.6
  53. Date: Thu, 05 May 2022 10:20:35 GMT
  54. Content-Type: text/html
  55. Content-Length: 615
  56. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  57. Connection: keep-alive
  58. ETag: "61f01158-267"
  59. Accept-Ranges: bytes
  60. [root@localhost yaml]# curl -I http://192.168.1.11:30779
  61. HTTP/1.1 200 OK
  62. Server: nginx/1.21.6
  63. Date: Thu, 05 May 2022 10:20:59 GMT
  64. Content-Type: text/html
  65. Content-Length: 615
  66. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  67. Connection: keep-alive
  68. ETag: "61f01158-267"
  69. Accept-Ranges: bytes
  70. [root@localhost yaml]# 
  71. # Access via the public network
  72. [root@localhost yaml]# curl -I http://[2408:8207:78ca:9fa1::10]:30779
  73. HTTP/1.1 200 OK
  74. Server: nginx/1.21.6
  75. Date: Thu, 05 May 2022 10:20:54 GMT
  76. Content-Type: text/html
  77. Content-Length: 615
  78. Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
  79. Connection: keep-alive
  80. ETag: "61f01158-267"
  81. Accept-Ranges: bytes
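
To see both address families the service received (ipFamilyPolicy: PreferDualStack with IPv6 listed first), the assigned families and clusterIPs can be printed directly:

  1. kubectl get svc chenby -o jsonpath='{.spec.ipFamilies}{"\n"}{.spec.clusterIPs}{"\n"}'
  2. # expected output similar to: ["IPv6","IPv4"] followed by one IPv6 and one IPv4 clusterIP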

16. Install command-line auto-completion

  1. apt install bash-completion -y
  2. source /usr/share/bash-completion/bash_completion
  3. source <(kubectl completion bash)
  4. echo "source <(kubectl completion bash)" >> ~/.bashrc

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and the author's personal blog

Search for 《小陈运维》 on any of these platforms.

Articles are published primarily on the WeChat official account 《Linux运维交流社区》.
