Binary Installation of Kubernetes (k8s) v1.23.6

Background

Binary installation of Kubernetes.

Documentation and installation packages have been generated for 1.23.3, 1.23.4, 1.23.5, and 1.23.6.

Documentation for new releases will be published as soon as possible after they ship.

https://github.com/cby-chen/Kubernetes/releases

Scripted installation project:

https://github.com/cby-chen/Binary_installation_of_Kubernetes

Manual installation project:

https://github.com/cby-chen/Kubernetes

1. Environment

Hostname IP address Description Software
Master01 192.168.1.81 master node kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02 192.168.1.82 master node kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03 192.168.1.83 master node kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01 192.168.1.84 node kubelet, kube-proxy, nfs-client
Node02 192.168.1.85 node kubelet, kube-proxy, nfs-client
Node03 192.168.1.86 node kubelet, kube-proxy, nfs-client
Node04 192.168.1.87 node kubelet, kube-proxy, nfs-client
Node05 192.168.1.88 node kubelet, kube-proxy, nfs-client
Lb01 192.168.1.80 Lb01 node haproxy, keepalived
Lb02 192.168.1.90 Lb02 node haproxy, keepalived

192.168.1.89 VIP

Software Version
Kernel 4.18.0-373.el8.x86_64
CentOS 8 v8 or v7
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy v1.23.6
etcd v3.5.3
docker-ce v20.10.14
containerd v1.5.11
cfssl v1.6.1
cni v1.1.1
crictl v1.23.0
haproxy v1.8.27
keepalived v2.1.5

Network segments

Physical hosts: 192.168.1.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12

If resources allow, it is recommended to run the etcd cluster on machines separate from the k8s cluster.

1.1. Base system configuration for k8s

1.2. Configure IP addresses

  1. ssh root@192.168.1.161 "nmcli con mod ens18 ipv4.addresses 192.168.1.81/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  2. ssh root@192.168.1.167 "nmcli con mod ens18 ipv4.addresses 192.168.1.82/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  3. ssh root@192.168.1.137 "nmcli con mod ens18 ipv4.addresses 192.168.1.83/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  4. ssh root@192.168.1.152 "nmcli con mod ens18 ipv4.addresses 192.168.1.84/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  5. ssh root@192.168.1.198 "nmcli con mod ens18 ipv4.addresses 192.168.1.85/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  6. ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.86/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  7. ssh root@192.168.1.171 "nmcli con mod ens18 ipv4.addresses 192.168.1.87/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  8. ssh root@192.168.1.159 "nmcli con mod ens18 ipv4.addresses 192.168.1.88/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  9. ssh root@192.168.1.122 "nmcli con mod ens18 ipv4.addresses 192.168.1.80/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
  10. ssh root@192.168.1.125 "nmcli con mod ens18 ipv4.addresses 192.168.1.90/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"

1.3. Set hostnames

  1. hostnamectl set-hostname k8s-master01
  2. hostnamectl set-hostname k8s-master02
  3. hostnamectl set-hostname k8s-master03
  4. hostnamectl set-hostname k8s-node01
  5. hostnamectl set-hostname k8s-node02
  6. hostnamectl set-hostname k8s-node03
  7. hostnamectl set-hostname k8s-node04
  8. hostnamectl set-hostname k8s-node05
  9. hostnamectl set-hostname lb01
  10. hostnamectl set-hostname lb02
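
Alternatively, all hostnames can be set from a single machine over SSH. The loop below is a convenience sketch that is not part of the original steps; it assumes root SSH access and the IP-to-hostname mapping from the environment table above:

  1. # Hypothetical helper: set each hostname remotely (host list assumed from the environment table)
  2. declare -A HOSTS=([192.168.1.81]=k8s-master01 [192.168.1.82]=k8s-master02 [192.168.1.83]=k8s-master03 [192.168.1.84]=k8s-node01 [192.168.1.85]=k8s-node02 [192.168.1.86]=k8s-node03 [192.168.1.87]=k8s-node04 [192.168.1.88]=k8s-node05 [192.168.1.80]=lb01 [192.168.1.90]=lb02)
  3. for IP in "${!HOSTS[@]}"; do ssh root@$IP "hostnamectl set-hostname ${HOSTS[$IP]}"; done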

1.4. Configure yum repositories

  1. # For CentOS 7
  2. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  3.          -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
  4.          -i.bak \
  5.          /etc/yum.repos.d/CentOS-*.repo
  6. # For CentOS 8
  7. sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
  8.          -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
  9.          -i.bak \
  10.          /etc/yum.repos.d/CentOS-*.repo
  11. sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo   # or point at a local mirror (192.168.1.123 in this example)

1.5. Install required tools

  1. yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Install Docker (skip on the lb nodes)

  1. yum install -y yum-utils device-mapper-persistent-data lvm2
  2. wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
  3. sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
  4. yum makecache
  5. yum -y install docker-ce
  6. systemctl  enable --now docker

1.7. Disable the firewall

  1. systemctl disable --now firewalld

1.8. Disable SELinux

  1. setenforce 0
  2. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

  1. sed -ri 's/.*swap.*/#&/' /etc/fstab
  2. swapoff -a && sysctl -w vm.swappiness=0
  3. cat /etc/fstab
  4. # /dev/mapper/centos-swap swap                    swap    defaults        0 0

1.10. Disable NetworkManager and enable network (skip on the lb nodes)

  1. systemctl disable --now NetworkManager
  2. systemctl start network && systemctl enable network

1.11. Configure time synchronization (skip on the lb nodes)

  1. # Server side
  2. yum install chrony -y
  3. cat > /etc/chrony.conf << EOF 
  4. pool ntp.aliyun.com iburst
  5. driftfile /var/lib/chrony/drift
  6. makestep 1.0 3
  7. rtcsync
  8. allow 192.168.1.0/24
  9. local stratum 10
  10. keyfile /etc/chrony.keys
  11. leapsectz right/UTC
  12. logdir /var/log/chrony
  13. EOF
  14. systemctl restart chronyd
  15. systemctl enable chronyd
  16. # Client side
  17. yum install chrony -y
  18. vim /etc/chrony.conf
  19. cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
  20. pool 192.168.1.81 iburst
  21. driftfile /var/lib/chrony/drift
  22. makestep 1.0 3
  23. rtcsync
  24. keyfile /etc/chrony.keys
  25. leapsectz right/UTC
  26. logdir /var/log/chrony
  27. systemctl restart chronyd ; systemctl enable chronyd
  28. yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.81#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd
  29. # Verify from the client
  30. chronyc sources -v

1.12. Configure ulimit

  1. ulimit -SHn 65535
  2. cat >> /etc/security/limits.conf <<EOF
  3. * soft nofile 655360
  4. * hard nofile 655360
  5. * soft nproc 655350
  6. * hard nproc 655350
  7. * soft memlock unlimited
  8. * hard memlock unlimited
  9. EOF

1.13. Configure passwordless SSH login

  1. yum install -y sshpass
  2. ssh-keygen -f /root/.ssh/id_rsa -P ''
  3. export IP="192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.80 192.168.1.90"
  4. export SSHPASS=123123
  5. for HOST in $IP;do
  6.      sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
  7. done
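
To confirm that the keys were distributed correctly, an optional non-interactive check over the same host list can be run (it reuses the $IP variable defined above):

  1. # Each host should print its hostname without prompting for a password
  2. for HOST in $IP; do ssh -o BatchMode=yes $HOST hostname; done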

1.14. Add and enable the ELRepo repository (skip on the lb nodes)

  1. # Configure the repository on RHEL-8 / CentOS-8
  2. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
  3. # Install ELRepo on RHEL-7 / SL-7 / CentOS-7
  4. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
  5. # List the available packages
  6. yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.15. Upgrade the kernel to 4.18 or later (skip on the lb nodes)

  1. # Install the latest kernel
  2. # kernel-ml is used here; install kernel-lt instead if you want the long-term maintenance branch
  3. yum  --enablerepo=elrepo-kernel  install  kernel-ml
  4. # List the installed kernels
  5. rpm -qa | grep kernel
  6. kernel-core-4.18.0-358.el8.x86_64
  7. kernel-tools-4.18.0-358.el8.x86_64
  8. kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
  9. kernel-ml-5.16.7-1.el8.elrepo.x86_64
  10. kernel-modules-4.18.0-358.el8.x86_64
  11. kernel-4.18.0-358.el8.x86_64
  12. kernel-tools-libs-4.18.0-358.el8.x86_64
  13. kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64
  14. # Check the default kernel
  15. grubby --default-kernel
  16. /boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64
  17. # If it is not the newest kernel, set it explicitly
  18. grubby --set-default /boot/vmlinuz-<your-kernel-version>.x86_64
  19. # Reboot to take effect
  20. reboot
  21. # Combined command for v8:
  22. yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --default-kernel ; reboot
  23. # Combined command for v7:
  24. yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel
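
After the reboot, it is worth confirming that the node is actually running the newly installed kernel (an optional check, not part of the original steps):

  1. # Should print the kernel-ml version installed above (e.g. 5.16.x), not the stock 4.18 kernel
  2. uname -r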

1.16. Install ipvsadm (skip on the lb nodes)

  1. yum install ipvsadm ipset sysstat conntrack libseccomp -y
  2. cat >> /etc/modules-load.d/ipvs.conf <<EOF 
  3. ip_vs
  4. ip_vs_rr
  5. ip_vs_wrr
  6. ip_vs_sh
  7. nf_conntrack
  8. ip_tables
  9. ip_set
  10. xt_set
  11. ipt_set
  12. ipt_rpfilter
  13. ipt_REJECT
  14. ipip
  15. EOF
  16. systemctl restart systemd-modules-load.service
  17. lsmod | grep -e ip_vs -e nf_conntrack
  18. ip_vs_sh               16384  0
  19. ip_vs_wrr              16384  0
  20. ip_vs_rr               16384  0
  21. ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
  22. nf_conntrack          176128  1 ip_vs
  23. nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
  24. nf_defrag_ipv4         16384  1 nf_conntrack
  25. libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters (skip on the lb nodes)

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.ipv4.ip_forward = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. fs.may_detach_mounts = 1
  5. vm.overcommit_memory=1
  6. vm.panic_on_oom=0
  7. fs.inotify.max_user_watches=89100
  8. fs.file-max=52706963
  9. fs.nr_open=52706963
  10. net.netfilter.nf_conntrack_max=2310720
  11. net.ipv4.tcp_keepalive_time = 600
  12. net.ipv4.tcp_keepalive_probes = 3
  13. net.ipv4.tcp_keepalive_intvl =15
  14. net.ipv4.tcp_max_tw_buckets = 36000
  15. net.ipv4.tcp_tw_reuse = 1
  16. net.ipv4.tcp_max_orphans = 327680
  17. net.ipv4.tcp_orphan_retries = 3
  18. net.ipv4.tcp_syncookies = 1
  19. net.ipv4.tcp_max_syn_backlog = 16384
  20. net.ipv4.ip_conntrack_max = 65536
  21. net.ipv4.tcp_max_syn_backlog = 16384
  22. net.ipv4.tcp_timestamps = 0
  23. net.core.somaxconn = 16384
  24. EOF
  25. sysctl --system

1.18. Configure local /etc/hosts resolution on all nodes

  1. cat > /etc/hosts <<EOF
  2. 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.1.81 k8s-master01
  5. 192.168.1.82 k8s-master02
  6. 192.168.1.83 k8s-master03
  7. 192.168.1.84 k8s-node01
  8. 192.168.1.85 k8s-node02
  9. 192.168.1.86 k8s-node03
  10. 192.168.1.87 k8s-node04
  11. 192.168.1.88 k8s-node05
  12. 192.168.1.80 lb01
  13. 192.168.1.90 lb02
  14. 192.168.1.89 lb-vip
  15. EOF
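
If the passwordless SSH setup from section 1.13 is already in place, the same file can be pushed to every other node instead of editing each host by hand; a minimal sketch, assuming the $IP host list from section 1.13:

  1. # Copy the freshly written /etc/hosts to all other nodes
  2. for HOST in $IP; do scp /etc/hosts $HOST:/etc/hosts; done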

2. Installing the basic k8s components

2.1. Install containerd as the runtime on all k8s nodes

  1. yum install containerd -y

2.1.1 Configure the kernel modules required by containerd

  1. cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
  2. overlay
  3. br_netfilter
  4. EOF

2.1.2 Load the modules

  1. systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by containerd

  1. cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
  2. net.bridge.bridge-nf-call-iptables  = 1
  3. net.ipv4.ip_forward                 = 1
  4. net.bridge.bridge-nf-call-ip6tables = 1
  5. EOF
  6. # Apply the kernel parameters
  7. sysctl --system

2.1.4 Create the containerd configuration file

  1. mkdir -p /etc/containerd
  2. containerd config default | tee /etc/containerd/config.toml
  3. # Modify the containerd configuration file
  4. sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
  5. cat /etc/containerd/config.toml | grep SystemdCgroup
  6. # Under containerd.runtimes.runc.options, make sure SystemdCgroup = true is set
  7. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  8.               SystemdCgroup = true
  9.     [plugins."io.containerd.grpc.v1.cri".cni]
  10. # Change the default sandbox_image to a registry address that matches this version
  11.     sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5 Start containerd and enable it at boot

  1. systemctl daemon-reload
  2. systemctl enable --now containerd

2.1.6 Configure the runtime endpoint for the crictl client

  1. cat > /etc/crictl.yaml <<EOF
  2. runtime-endpoint: unix:///run/containerd/containerd.sock
  3. image-endpoint: unix:///run/containerd/containerd.sock
  4. timeout: 10
  5. debug: false
  6. EOF
  7. systemctl restart  containerd
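
To confirm that crictl can reach containerd through the endpoint configured above, an optional sanity check is:

  1. # Both commands should return without connection errors
  2. crictl version
  3. crictl info | grep -i runtime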

2.2. Download and install k8s and etcd (master01 only)

2.2.1 Download the k8s packages (download only what you need)

  1. # 1. Download the Kubernetes 1.23.x server binary package
  2. # GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
  3. wget https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz
  4. # 2. Download the etcd/etcdctl binary package
  5. # GitHub download page: https://github.com/etcd-io/etcd/releases
  6. wget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz
  7. # 3. docker-ce binary package
  8. # Download page: https://download.docker.com/linux/static/stable/x86_64/
  9. # Download a 20.10.x release
  10. wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz
  11. # 4. containerd binary package
  12. # GitHub download page: https://github.com/containerd/containerd/releases
  13. # Download the cri-containerd-cni build, which bundles the CNI plugins.
  14. wget https://github.com/containerd/containerd/releases/download/v1.6.2/cri-containerd-cni-1.6.2-linux-amd64.tar.gz
  15. # 5. Download the cfssl binaries
  16. # GitHub download page: https://github.com/cloudflare/cfssl/releases
  17. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
  18. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
  19. wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
  20. # 6. Download the CNI plugins
  21. # GitHub download page: https://github.com/containernetworking/plugins/releases
  22. wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
  23. # 7. Download the crictl client binary
  24. # GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
  25. wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
  26. # Extract the k8s binaries
  27. tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
  28. # Extract the etcd binaries
  29. tar -xf etcd-v3.5.3-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.3-linux-amd64/etcd{,ctl}
  30. # Check the contents of /usr/local/bin
  31. ls /usr/local/bin/
  32. etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler
  33. # A pre-packaged bundle of the above is also available:
  34. wget https://github.com/cby-chen/Kubernetes/releases/download/v1.23.6/kubernetes-v1.23.6.tar

2.2.2 Check the versions

  1. [root@k8s-master01 ~]# kubelet --version
  2. Kubernetes v1.23.6
  3. [root@k8s-master01 ~]# etcdctl version
  4. etcdctl version: 3.5.3
  5. API version: 3.5
  6. [root@k8s-master01 ~]#

2.2.3 Copy the binaries to the other k8s nodes

  1. Master='k8s-master02 k8s-master03'
  2. Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'
  3. for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
  4. for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
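
Optionally, verify that every node received a working binary by querying the version remotely (this reuses the $Master and $Work lists defined above):

  1. # Each node should report Kubernetes v1.23.6
  2. for NODE in $Master $Work; do echo $NODE; ssh $NODE "kubelet --version"; done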

2.2.4 Clone the repository with the certificate-related files

  1. git clone https://github.com/cby-chen/Kubernetes.git

2.2.5 Create the directory on all k8s nodes

  1. mkdir -p /opt/cni/bin

3. Generating the certificates

  1. # Download the certificate tooling on master01
  2. wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
  3. wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
  4. chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless stated otherwise, run the following on all master nodes.

3.1.1 Create the certificate directory on all master nodes

  1. mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

  1. cd Kubernetes/pki/
  2. # Generate the etcd certificate and key (if you may scale the cluster out later, reserve a few extra IPs in the hostname list)
  3. cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
  4. cfssl gencert \
  5.    -ca=/etc/etcd/ssl/etcd-ca.pem \
  6.    -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  7.    -config=ca-config.json \
  8.    -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.81,192.168.1.82,192.168.1.83 \
  9.    -profile=kubernetes \
  10.    etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3 Copy the certificates to the other nodes

  1. Master='k8s-master02 k8s-master03'
  2. for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done
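
Before continuing, it can be worth confirming that the issued etcd certificate really covers the expected hosts; openssl can print the Subject Alternative Names (an optional check):

  1. # The master hostnames and 192.168.1.81-83 should appear in the SAN list
  2. openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"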

3.2. Generate the k8s certificates

Unless stated otherwise, run the following on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

  1. mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

  1. # Generate the root CA certificate
  2. cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
  3. # 10.96.0.1 is the first address of the service CIDR (derive it from your own service network); 192.168.1.89 is the high-availability VIP
  4. cfssl gencert   \
  5. -ca=/etc/kubernetes/pki/ca.pem   \
  6. -ca-key=/etc/kubernetes/pki/ca-key.pem   \
  7. -config=ca-config.json   \
  8. -hostname=10.96.0.1,192.168.1.89,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.81,192.168.1.82,192.168.1.83,192.168.1.84,192.168.1.85,192.168.1.86,192.168.1.87,192.168.1.88,192.168.1.80,192.168.1.90,192.168.1.40,192.168.1.41   \
  9. -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 Generate the apiserver aggregation (front-proxy) certificates

  1. cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
  2. # This prints a warning that can be ignored
  3. cfssl gencert  \
  4. -ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
  5. -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
  6. -config=ca-config.json   \
  7. -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificates

  1. cfssl gencert \
  2.    -ca=/etc/kubernetes/pki/ca.pem \
  3.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  4.    -config=ca-config.json \
  5.    -profile=kubernetes \
  6.    manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
  7. # Set the cluster entry
  8. kubectl config set-cluster kubernetes \
  9.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  10.      --embed-certs=true \
  11.      --server=https://192.168.1.89:8443 \
  12.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  13. # Set a context entry
  14. kubectl config set-context system:kube-controller-manager@kubernetes \
  15.     --cluster=kubernetes \
  16.     --user=system:kube-controller-manager \
  17.     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  18. # Set the user entry
  19. kubectl config set-credentials system:kube-controller-manager \
  20.      --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  21.      --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  22.      --embed-certs=true \
  23.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  24. # Set the default context
  25. kubectl config use-context system:kube-controller-manager@kubernetes \
  26.      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  27. cfssl gencert \
  28.    -ca=/etc/kubernetes/pki/ca.pem \
  29.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  30.    -config=ca-config.json \
  31.    -profile=kubernetes \
  32.    scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
  33. kubectl config set-cluster kubernetes \
  34.      --certificate-authority=/etc/kubernetes/pki/ca.pem \
  35.      --embed-certs=true \
  36.      --server=https://192.168.1.89:8443 \
  37.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  38. kubectl config set-credentials system:kube-scheduler \
  39.      --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  40.      --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  41.      --embed-certs=true \
  42.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  43. kubectl config set-context system:kube-scheduler@kubernetes \
  44.      --cluster=kubernetes \
  45.      --user=system:kube-scheduler \
  46.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  47. kubectl config use-context system:kube-scheduler@kubernetes \
  48.      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  49. cfssl gencert \
  50.    -ca=/etc/kubernetes/pki/ca.pem \
  51.    -ca-key=/etc/kubernetes/pki/ca-key.pem \
  52.    -config=ca-config.json \
  53.    -profile=kubernetes \
  54.    admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
  55. kubectl config set-cluster kubernetes     \
  56.   --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  57.   --embed-certs=true     \
  58.   --server=https://192.168.1.89:8443     \
  59.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  60. kubectl config set-credentials kubernetes-admin  \
  61.   --client-certificate=/etc/kubernetes/pki/admin.pem     \
  62.   --client-key=/etc/kubernetes/pki/admin-key.pem     \
  63.   --embed-certs=true     \
  64.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  65. kubectl config set-context kubernetes-admin@kubernetes    \
  66.   --cluster=kubernetes     \
  67.   --user=kubernetes-admin     \
  68.   --kubeconfig=/etc/kubernetes/admin.kubeconfig
  69. kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the ServiceAccount key pair

  1. openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
  2. openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6 Copy the certificates to the other master nodes

  1. for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7 Verify the certificates

  1. ls /etc/kubernetes/pki/
  2. admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
  3. admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
  4. admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
  5. apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub
  6. # There should be 23 files in total
  7. ls /etc/kubernetes/pki/ |wc -l
  8. 23

4. Configuring the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

  1. cat > /etc/etcd/etcd.config.yml << EOF 
  2. name: 'k8s-master01'
  3. data-dir: /var/lib/etcd
  4. wal-dir: /var/lib/etcd/wal
  5. snapshot-count: 5000
  6. heartbeat-interval: 100
  7. election-timeout: 1000
  8. quota-backend-bytes: 0
  9. listen-peer-urls: 'https://192.168.1.81:2380'
  10. listen-client-urls: 'https://192.168.1.81:2379,http://127.0.0.1:2379'
  11. max-snapshots: 3
  12. max-wals: 5
  13. cors:
  14. initial-advertise-peer-urls: 'https://192.168.1.81:2380'
  15. advertise-client-urls: 'https://192.168.1.81:2379'
  16. discovery:
  17. discovery-fallback: 'proxy'
  18. discovery-proxy:
  19. discovery-srv:
  20. initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
  21. initial-cluster-token: 'etcd-k8s-cluster'
  22. initial-cluster-state: 'new'
  23. strict-reconfig-check: false
  24. enable-v2: true
  25. enable-pprof: true
  26. proxy: 'off'
  27. proxy-failure-wait: 5000
  28. proxy-refresh-interval: 30000
  29. proxy-dial-timeout: 1000
  30. proxy-write-timeout: 5000
  31. proxy-read-timeout: 0
  32. client-transport-security:
  33.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  34.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  35.   client-cert-auth: true
  36.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  37.   auto-tls: true
  38. peer-transport-security:
  39.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  40.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  41.   peer-client-cert-auth: true
  42.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  43.   auto-tls: true
  44. debug: false
  45. log-package-levels:
  46. log-outputs: [default]
  47. force-new-cluster: false
  48. EOF

4.1.2 master02 configuration

  1. cat > /etc/etcd/etcd.config.yml << EOF 
  2. name: 'k8s-master02'
  3. data-dir: /var/lib/etcd
  4. wal-dir: /var/lib/etcd/wal
  5. snapshot-count: 5000
  6. heartbeat-interval: 100
  7. election-timeout: 1000
  8. quota-backend-bytes: 0
  9. listen-peer-urls: 'https://192.168.1.82:2380'
  10. listen-client-urls: 'https://192.168.1.82:2379,http://127.0.0.1:2379'
  11. max-snapshots: 3
  12. max-wals: 5
  13. cors:
  14. initial-advertise-peer-urls: 'https://192.168.1.82:2380'
  15. advertise-client-urls: 'https://192.168.1.82:2379'
  16. discovery:
  17. discovery-fallback: 'proxy'
  18. discovery-proxy:
  19. discovery-srv:
  20. initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
  21. initial-cluster-token: 'etcd-k8s-cluster'
  22. initial-cluster-state: 'new'
  23. strict-reconfig-check: false
  24. enable-v2: true
  25. enable-pprof: true
  26. proxy: 'off'
  27. proxy-failure-wait: 5000
  28. proxy-refresh-interval: 30000
  29. proxy-dial-timeout: 1000
  30. proxy-write-timeout: 5000
  31. proxy-read-timeout: 0
  32. client-transport-security:
  33.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  34.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  35.   client-cert-auth: true
  36.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  37.   auto-tls: true
  38. peer-transport-security:
  39.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  40.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  41.   peer-client-cert-auth: true
  42.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  43.   auto-tls: true
  44. debug: false
  45. log-package-levels:
  46. log-outputs: [default]
  47. force-new-cluster: false
  48. EOF

4.1.3 master03 configuration

  1. cat > /etc/etcd/etcd.config.yml << EOF 
  2. name: 'k8s-master03'
  3. data-dir: /var/lib/etcd
  4. wal-dir: /var/lib/etcd/wal
  5. snapshot-count: 5000
  6. heartbeat-interval: 100
  7. election-timeout: 1000
  8. quota-backend-bytes: 0
  9. listen-peer-urls: 'https://192.168.1.83:2380'
  10. listen-client-urls: 'https://192.168.1.83:2379,http://127.0.0.1:2379'
  11. max-snapshots: 3
  12. max-wals: 5
  13. cors:
  14. initial-advertise-peer-urls: 'https://192.168.1.83:2380'
  15. advertise-client-urls: 'https://192.168.1.83:2379'
  16. discovery:
  17. discovery-fallback: 'proxy'
  18. discovery-proxy:
  19. discovery-srv:
  20. initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
  21. initial-cluster-token: 'etcd-k8s-cluster'
  22. initial-cluster-state: 'new'
  23. strict-reconfig-check: false
  24. enable-v2: true
  25. enable-pprof: true
  26. proxy: 'off'
  27. proxy-failure-wait: 5000
  28. proxy-refresh-interval: 30000
  29. proxy-dial-timeout: 1000
  30. proxy-write-timeout: 5000
  31. proxy-read-timeout: 0
  32. client-transport-security:
  33.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  34.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  35.   client-cert-auth: true
  36.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  37.   auto-tls: true
  38. peer-transport-security:
  39.   cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  40.   key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  41.   peer-client-cert-auth: true
  42.   trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  43.   auto-tls: true
  44. debug: false
  45. log-package-levels:
  46. log-outputs: [default]
  47. force-new-cluster: false
  48. EOF

4.2. Create the service units (run on all master nodes)

4.2.1 Create etcd.service and start it

  1. cat > /usr/lib/systemd/system/etcd.service << EOF
  2. [Unit]
  3. Description=Etcd Service
  4. Documentation=https://coreos.com/etcd/docs/latest/
  5. After=network.target
  6. [Service]
  7. Type=notify
  8. ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
  9. Restart=on-failure
  10. RestartSec=10
  11. LimitNOFILE=65536
  12. [Install]
  13. WantedBy=multi-user.target
  14. Alias=etcd3.service
  15. EOF

4.2.2 Create the etcd certificate directory

  1. mkdir /etc/kubernetes/pki/etcd
  2. ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
  3. systemctl daemon-reload
  4. systemctl enable --now etcd

4.2.3 Check the etcd status

  1. export ETCDCTL_API=3
  2. etcdctl --endpoints="192.168.1.83:2379,192.168.1.82:2379,192.168.1.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
  3. +-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  4. |     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
  5. +-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  6. | 192.168.1.83:2379 | 7cb7be3df5c81965 |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  7. | 192.168.1.82:2379 | c077939949ab3f8b |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
  8. | 192.168.1.81:2379 | 2ee388f67565dac9 |   3.5.2 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
  9. +-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  10. [root@k8s-master01 pki]#
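
Besides endpoint status, etcdctl can also report whether each member answers within the request timeout; the same TLS flags apply (an optional check):

  1. etcdctl --endpoints="192.168.1.81:2379,192.168.1.82:2379,192.168.1.83:2379" \
  2.   --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  3.   --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  4.   --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  5.   endpoint health --write-out=table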

5. High-availability configuration

5.1 Run the following on both lb01 and lb02

5.1.1 Install keepalived and haproxy

  1. systemctl disable --now firewalld
  2. setenforce 0
  3. sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
  4. yum -y install keepalived haproxy

5.1.2 Configure haproxy (identical configuration on both nodes)

  1. # cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
  2. cat >/etc/haproxy/haproxy.cfg<<"EOF"
  3. global
  4.  maxconn 2000
  5.  ulimit-n 16384
  6.  log 127.0.0.1 local0 err
  7.  stats timeout 30s
  8. defaults
  9.  log global
  10.  mode http
  11.  option httplog
  12.  timeout connect 5000
  13.  timeout client 50000
  14.  timeout server 50000
  15.  timeout http-request 15s
  16.  timeout http-keep-alive 15s
  17. frontend monitor-in
  18.  bind *:33305
  19.  mode http
  20.  option httplog
  21.  monitor-uri /monitor
  22. frontend k8s-master
  23.  bind 0.0.0.0:8443
  24.  bind 127.0.0.1:8443
  25.  mode tcp
  26.  option tcplog
  27.  tcp-request inspect-delay 5s
  28.  default_backend k8s-master
  29. backend k8s-master
  30.  mode tcp
  31.  option tcplog
  32.  option tcp-check
  33.  balance roundrobin
  34.  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  35.  server  k8s-master01  192.168.1.81:6443 check
  36.  server  k8s-master02  192.168.1.82:6443 check
  37.  server  k8s-master03  192.168.1.83:6443 check
  38. EOF
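
haproxy can validate the file before the service is started, which catches typos early (an optional check):

  1. # Should report that the configuration file is valid
  2. haproxy -c -f /etc/haproxy/haproxy.cfg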

5.1.3 Configure keepalived on lb01 (MASTER node)

  1. #cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state MASTER
  16.     interface ens18
  17.     mcast_src_ip 192.168.1.80
  18.     virtual_router_id 51
  19.     priority 100
  20.     nopreempt
  21.     advert_int 2
  22.     authentication {
  23.         auth_type PASS
  24.         auth_pass K8SHA_KA_AUTH
  25.     }
  26.     virtual_ipaddress {
  27.         192.168.1.89
  28.     }
  29.     track_script {
  30.       chk_apiserver 
  31. } }
  32. EOF

5.1.4 Configure keepalived on lb02 (BACKUP node)

  1. # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  2. cat > /etc/keepalived/keepalived.conf << EOF
  3. ! Configuration File for keepalived
  4. global_defs {
  5.     router_id LVS_DEVEL
  6. }
  7. vrrp_script chk_apiserver {
  8.     script "/etc/keepalived/check_apiserver.sh"
  9.     interval 5 
  10.     weight -5
  11.     fall 2
  12.     rise 1
  13. }
  14. vrrp_instance VI_1 {
  15.     state BACKUP
  16.     interface ens18
  17.     mcast_src_ip 192.168.1.90
  18.     virtual_router_id 51
  19.     priority 50
  20.     nopreempt
  21.     advert_int 2
  22.     authentication {
  23.         auth_type PASS
  24.         auth_pass K8SHA_KA_AUTH
  25.     }
  26.     virtual_ipaddress {
  27.         192.168.1.89
  28.     }
  29.     track_script {
  30.       chk_apiserver 
  31. } }
  32. EOF

5.1.5 Health-check script (both lb hosts)

  1. cat >  /etc/keepalived/check_apiserver.sh << EOF
  2. #!/bin/bash
  3. err=0
  4. for k in \$(seq 1 3)
  5. do
  6.     check_code=\$(pgrep haproxy)
  7.     if [[ \$check_code == "" ]]; then
  8.         err=\$(expr \$err + 1)
  9.         sleep 1
  10.         continue
  11.     else
  12.         err=0
  13.         break
  14.     fi
  15. done
  16. if [[ \$err != "0" ]]; then
  17.     echo "systemctl stop keepalived"
  18.     /usr/bin/systemctl stop keepalived
  19.     exit 1
  20. else
  21.     exit 0
  22. fi
  23. EOF
  24. # Make the script executable
  25. chmod +x /etc/keepalived/check_apiserver.sh

5.1.6 Start the services

  1. systemctl daemon-reload
  2. systemctl enable --now haproxy
  3. systemctl enable --now keepalived

5.1.7 Test the high availability

  1. # The VIP should respond to ping
  2. [root@k8s-node02 ~]# ping 192.168.1.89
  3. # The VIP should accept TCP connections on 8443
  4. [root@k8s-node02 ~]# telnet 192.168.1.89 8443
  5. # Shut down the MASTER node and check that the VIP fails over to the BACKUP node
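
To see which lb node currently holds the VIP, and to watch it move during the failover test, check the address on both lb hosts; stopping haproxy on the current MASTER should trigger the health-check script and move the VIP. This is a sketch of the test, not part of the original steps:

  1. # On lb01/lb02: only the node holding the VIP shows 192.168.1.89 on ens18
  2. ip addr show ens18 | grep 192.168.1.89
  3. # On the current MASTER: simulate a failure, then re-check the interface on the BACKUP node
  4. systemctl stop haproxy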

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes

  1. mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --insecure-port=0  \
  14.       --advertise-address=192.168.1.81 \
  15.       --service-cluster-ip-range=10.96.0.0/12  \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41.       # --token-auth-file=/etc/kubernetes/token.csv
  42. Restart=on-failure
  43. RestartSec=10s
  44. LimitNOFILE=65535
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF

6.1.2 master02 configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --insecure-port=0  \
  14.       --advertise-address=192.168.1.82 \
  15.       --service-cluster-ip-range=10.96.0.0/12  \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41.       # --token-auth-file=/etc/kubernetes/token.csv
  42. Restart=on-failure
  43. RestartSec=10s
  44. LimitNOFILE=65535
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF

6.1.3 master03 configuration

  1. cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-apiserver \
  8.       --v=2  \
  9.       --logtostderr=true  \
  10.       --allow-privileged=true  \
  11.       --bind-address=0.0.0.0  \
  12.       --secure-port=6443  \
  13.       --insecure-port=0  \
  14.       --advertise-address=192.168.1.83 \
  15.       --service-cluster-ip-range=10.96.0.0/12  \
  16.       --service-node-port-range=30000-32767  \
  17.       --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
  18.       --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
  19.       --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
  20.       --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
  21.       --client-ca-file=/etc/kubernetes/pki/ca.pem  \
  22.       --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
  23.       --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
  24.       --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
  25.       --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
  26.       --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
  27.       --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
  28.       --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  29.       --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
  30.       --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  31.       --authorization-mode=Node,RBAC  \
  32.       --enable-bootstrap-token-auth=true  \
  33.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
  34.       --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
  35.       --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
  36.       --requestheader-allowed-names=aggregator  \
  37.       --requestheader-group-headers=X-Remote-Group  \
  38.       --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  39.       --requestheader-username-headers=X-Remote-User \
  40.       --enable-aggregator-routing=true
  41.       # --token-auth-file=/etc/kubernetes/token.csv
  42. Restart=on-failure
  43. RestartSec=10s
  44. LimitNOFILE=65535
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF

6.1.4 Start the apiserver (all master nodes)

  1. systemctl daemon-reload && systemctl enable --now kube-apiserver
  2. # Check that the service started correctly
  3. systemctl status kube-apiserver
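
A quick local check that each apiserver answers on its secure port is to query /healthz; with the default RBAC bootstrap bindings this endpoint should be readable even without a client certificate (an optional check):

  1. # Should print "ok" on every master node
  2. curl -k https://127.0.0.1:6443/healthz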

6.2. Configure the kube-controller-manager service

  1. # All master nodes use the same configuration
  2. # 172.16.0.0/12 is the pod CIDR; adjust it to your own network as needed
  3. cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
  4. [Unit]
  5. Description=Kubernetes Controller Manager
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=network.target
  8. [Service]
  9. ExecStart=/usr/local/bin/kube-controller-manager \
  10.       --v=2 \
  11.       --logtostderr=true \
  12.       --address=127.0.0.1 \
  13.       --root-ca-file=/etc/kubernetes/pki/ca.pem \
  14.       --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  15.       --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  16.       --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  17.       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  18.       --leader-elect=true \
  19.       --use-service-account-credentials=true \
  20.       --node-monitor-grace-period=40s \
  21.       --node-monitor-period=5s \
  22.       --pod-eviction-timeout=2m0s \
  23.       --controllers=*,bootstrapsigner,tokencleaner \
  24.       --allocate-node-cidrs=true \
  25.       --cluster-cidr=172.16.0.0/12 \
  26.       --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
  27.       --node-cidr-mask-size=24
  28. Restart=always
  29. RestartSec=10s
  30. [Install]
  31. WantedBy=multi-user.target
  32. EOF

6.2.1 Start kube-controller-manager and check its status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-controller-manager
  3. systemctl  status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 All master nodes use the same configuration

  1. cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
  2. [Unit]
  3. Description=Kubernetes Scheduler
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-scheduler \
  8.       --v=2 \
  9.       --logtostderr=true \
  10.       --address=127.0.0.1 \
  11.       --leader-elect=true \
  12.       --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  13. Restart=always
  14. RestartSec=10s
  15. [Install]
  16. WantedBy=multi-user.target
  17. EOF

6.3.2 Start the service and check its status

  1. systemctl daemon-reload
  2. systemctl enable --now kube-scheduler
  3. systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1 Configure on master01

  1. cd /root/Kubernetes/bootstrap
  2. kubectl config set-cluster kubernetes     \
  3. --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  4. --embed-certs=true     --server=https://192.168.1.89:8443     \
  5. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  6. kubectl config set-credentials tls-bootstrap-token-user     \
  7. --token=c8ad9c.2e4d610cf3e7426e \
  8. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  9. kubectl config set-context tls-bootstrap-token-user@kubernetes     \
  10. --cluster=kubernetes     \
  11. --user=tls-bootstrap-token-user     \
  12. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  13. kubectl config use-context tls-bootstrap-token-user@kubernetes     \
  14. --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
  15. # The token is defined in bootstrap.secret.yaml; if you change it, change it in that file
  16. mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; if everything is healthy, continue with the next steps

  1. kubectl get cs
  2. Warning: v1 ComponentStatus is deprecated in v1.19+
  3. NAME                 STATUS    MESSAGE                         ERROR
  4. scheduler            Healthy   ok                              
  5. controller-manager   Healthy   ok                              
  6. etcd-0               Healthy   {"health":"true","reason":""}   
  7. etcd-2               Healthy   {"health":"true","reason":""}   
  8. etcd-1               Healthy   {"health":"true","reason":""} 
  9. kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the other nodes

  1. cd /etc/kubernetes/
  2. for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

  1. mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
  2. # Configure the kubelet service on all k8s nodes
  3. cat > /usr/lib/systemd/system/kubelet.service << EOF
  4. [Unit]
  5. Description=Kubernetes Kubelet
  6. Documentation=https://github.com/kubernetes/kubernetes
  7. After=docker.service
  8. Requires=docker.service
  9. [Service]
  10. ExecStart=/usr/local/bin/kubelet \
  11.     --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \
  12.     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  13.     --config=/etc/kubernetes/kubelet-conf.yml \
  14.     --network-plugin=cni  \
  15.     --cni-conf-dir=/etc/cni/net.d  \
  16.     --cni-bin-dir=/opt/cni/bin  \
  17.     --container-runtime=remote  \
  18.     --runtime-request-timeout=15m  \
  19.     --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
  20.     --cgroup-driver=systemd \
  21.     --node-labels=node.kubernetes.io/node=''
  22. Restart=always
  23. StartLimitInterval=0
  24. RestartSec=10
  25. [Install]
  26. WantedBy=multi-user.target
  27. EOF

8.2.2所有k8s节点创建kubelet的配置文件

  1. cat > /etc/kubernetes/kubelet-conf.yml <<EOF
  2. apiVersion: kubelet.config.k8s.io/v1beta1
  3. kind: KubeletConfiguration
  4. address: 0.0.0.0
  5. port: 10250
  6. readOnlyPort: 10255
  7. authentication:
  8.   anonymous:
  9.     enabled: false
  10.   webhook:
  11.     cacheTTL: 2m0s
  12.     enabled: true
  13.   x509:
  14.     clientCAFile: /etc/kubernetes/pki/ca.pem
  15. authorization:
  16.   mode: Webhook
  17.   webhook:
  18.     cacheAuthorizedTTL: 5m0s
  19.     cacheUnauthorizedTTL: 30s
  20. cgroupDriver: systemd
  21. cgroupsPerQOS: true
  22. clusterDNS:
  23. - 10.96.0.10
  24. clusterDomain: cluster.local
  25. containerLogMaxFiles: 5
  26. containerLogMaxSize: 10Mi
  27. contentType: application/vnd.kubernetes.protobuf
  28. cpuCFSQuota: true
  29. cpuManagerPolicy: none
  30. cpuManagerReconcilePeriod: 10s
  31. enableControllerAttachDetach: true
  32. enableDebuggingHandlers: true
  33. enforceNodeAllocatable:
  34. - pods
  35. eventBurst: 10
  36. eventRecordQPS: 5
  37. evictionHard:
  38.   imagefs.available: 15%
  39.   memory.available: 100Mi
  40.   nodefs.available: 10%
  41.   nodefs.inodesFree: 5%
  42. evictionPressureTransitionPeriod: 5m0s
  43. failSwapOn: true
  44. fileCheckFrequency: 20s
  45. hairpinMode: promiscuous-bridge
  46. healthzBindAddress: 127.0.0.1
  47. healthzPort: 10248
  48. httpCheckFrequency: 20s
  49. imageGCHighThresholdPercent: 85
  50. imageGCLowThresholdPercent: 80
  51. imageMinimumGCAge: 2m0s
  52. iptablesDropBit: 15
  53. iptablesMasqueradeBit: 14
  54. kubeAPIBurst: 10
  55. kubeAPIQPS: 5
  56. makeIPTablesUtilChains: true
  57. maxOpenFiles: 1000000
  58. maxPods: 110
  59. nodeStatusUpdateFrequency: 10s
  60. oomScoreAdj: -999
  61. podPidsLimit: -1
  62. registryBurst: 10
  63. registryPullQPS: 5
  64. resolvConf: /etc/resolv.conf
  65. rotateCertificates: true
  66. runtimeRequestTimeout: 2m0s
  67. serializeImagePulls: true
  68. staticPodPath: /etc/kubernetes/manifests
  69. streamingConnectionIdleTimeout: 4h0m0s
  70. syncFrequency: 1m0s
  71. volumeStatsAggPeriod: 1m0s
  72. EOF

8.2.3 Start the kubelet

  1. systemctl daemon-reload
  2. systemctl restart kubelet
  3. systemctl enable --now kubelet
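
If a node later stays NotReady or never registers, the kubelet journal and the bootstrap CSRs are the first places to look (optional troubleshooting commands):

  1. # On the affected node: follow the kubelet log
  2. journalctl -u kubelet -f
  3. # On master01: bootstrap CSRs for each node should appear here and be approved
  4. kubectl get csr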

8.2.4 Check the cluster

  1. [root@k8s-master01 ~]# kubectl  get node
  2. NAME           STATUS     ROLES    AGE   VERSION
  3. k8s-master01   NotReady   <none>   14h   v1.23.5
  4. k8s-master02   NotReady   <none>   14h   v1.23.5
  5. k8s-master03   NotReady   <none>   14h   v1.23.5
  6. k8s-node01     NotReady   <none>   14h   v1.23.5
  7. k8s-node02     NotReady   <none>   14h   v1.23.5
  8. k8s-node03     NotReady   <none>   14h   v1.23.5
  9. k8s-node04     NotReady   <none>   14h   v1.23.5
  10. k8s-node05     NotReady   <none>   14h   v1.23.5
  11. [root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Run this configuration only on master01

  1. cd /root/Kubernetes/
  2. kubectl -n kube-system create serviceaccount kube-proxy
  3. kubectl create clusterrolebinding system:kube-proxy \
  4. --clusterrole system:node-proxier \
  5. --serviceaccount kube-system:kube-proxy
  6. SECRET=$(kubectl -n kube-system get sa/kube-proxy \
  7.     --output=jsonpath='{.secrets[0].name}')
  8. JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
  9. --output=jsonpath='{.data.token}' | base64 -d)
  10. PKI_DIR=/etc/kubernetes/pki
  11. K8S_DIR=/etc/kubernetes
  12. kubectl config set-cluster kubernetes \
  13. --certificate-authority=/etc/kubernetes/pki/ca.pem \
  14. --embed-certs=true \
  15. --server=https://192.168.1.89:8443 \
  16. --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
  17. kubectl config set-credentials kubernetes \
  18. --token=${JWT_TOKEN} \
  19. --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  20. kubectl config set-context kubernetes \
  21. --cluster=kubernetes \
  22. --user=kubernetes \
  23. --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
  24. kubectl config use-context kubernetes \
  25. --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Copy the kubeconfig to the other nodes

  1. for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
  2. for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig;  done

8.3.3 Add the kube-proxy configuration and service file on all k8s nodes

  1. cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
  2. [Unit]
  3. Description=Kubernetes Kube Proxy
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. After=network.target
  6. [Service]
  7. ExecStart=/usr/local/bin/kube-proxy \
  8.   --config=/etc/kubernetes/kube-proxy.yaml \
  9.   --v=2
  10. Restart=always
  11. RestartSec=10s
  12. [Install]
  13. WantedBy=multi-user.target
  14. EOF
  1. cat > /etc/kubernetes/kube-proxy.yaml << EOF
  2. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  3. bindAddress: 0.0.0.0
  4. clientConnection:
  5.   acceptContentTypes: ""
  6.   burst: 10
  7.   contentType: application/vnd.kubernetes.protobuf
  8.   kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  9.   qps: 5
  10. clusterCIDR: 172.16.0.0/12 
  11. configSyncPeriod: 15m0s
  12. conntrack:
  13.   max: null
  14.   maxPerCore: 32768
  15.   min: 131072
  16.   tcpCloseWaitTimeout: 1h0m0s
  17.   tcpEstablishedTimeout: 24h0m0s
  18. enableProfiling: false
  19. healthzBindAddress: 0.0.0.0:10256
  20. hostnameOverride: ""
  21. iptables:
  22.   masqueradeAll: false
  23.   masqueradeBit: 14
  24.   minSyncPeriod: 0s
  25.   syncPeriod: 30s
  26. ipvs:
  27.   masqueradeAll: true
  28.   minSyncPeriod: 5s
  29.   scheduler: "rr"
  30.   syncPeriod: 30s
  31. kind: KubeProxyConfiguration
  32. metricsBindAddress: 127.0.0.1:10249
  33. mode: "ipvs"
  34. nodePortAddresses: null
  35. oomScoreAdj: -999
  36. portRange: ""
  37. udpIdleTimeout: 250ms
  38. EOF

8.3.4 Start kube-proxy

  1. systemctl daemon-reload
  2.  systemctl enable --now kube-proxy
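
Because mode is set to ipvs, the cluster services should appear in the IPVS table once kube-proxy is running (an optional check):

  1. # The kubernetes service (10.96.0.1:443) should be listed with the apiserver addresses as real servers
  2. ipvsadm -Ln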

9. Installing Calico

9.1 The following steps are performed only on master01

9.1.1 Change the Calico pod CIDR

  1. cd /root/Kubernetes/calico/
  2. sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
  3. grep "IPV4POOL_CIDR" calico.yaml -A 1
  4.             - name: CALICO_IPV4POOL_CIDR
  5.               value: "172.16.0.0/12"
  6. # Apply the manifest
  7. kubectl apply -f calico.yaml

9.1.2 Check the pod status

  1. [root@k8s-master01 ~]# kubectl  get pod -A
  2. NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  3. kube-system   calico-kube-controllers-6f6595874c-nb95g   1/1     Running   0          2m54s
  4. kube-system   calico-node-67dn4                          1/1     Running   0          2m54s
  5. kube-system   calico-node-79zxj                          1/1     Running   0          2m54s
  6. kube-system   calico-node-85bsf                          1/1     Running   0          2m54s
  7. kube-system   calico-node-8trsm                          1/1     Running   0          2m54s
  8. kube-system   calico-node-dvz72                          1/1     Running   0          2m54s
  9. kube-system   calico-node-qqzwx                          1/1     Running   0          2m54s
  10. kube-system   calico-node-rngzq                          1/1     Running   0          2m55s
  11. kube-system   calico-node-w8gqp                          1/1     Running   0          2m54s
  12. kube-system   calico-typha-6b6cf8cbdf-2b454              1/1     Running   0          2m55s
  13. [root@k8s-master01 ~]# 
  14. [root@k8s-master01 ~]# kubectl  get node
  15. NAME           STATUS   ROLES    AGE   VERSION
  16. k8s-master01   Ready    <none>   14h   v1.23.5
  17. k8s-master02   Ready    <none>   14h   v1.23.5
  18. k8s-master03   Ready    <none>   14h   v1.23.5
  19. k8s-node01     Ready    <none>   14h   v1.23.5
  20. k8s-node02     Ready    <none>   14h   v1.23.5
  21. k8s-node03     Ready    <none>   14h   v1.23.5
  22. k8s-node04     Ready    <none>   14h   v1.23.5
  23. k8s-node05     Ready    <none>   14h   v1.23.5
  24. [root@k8s-master01 ~]#

10. Installing CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the manifest

  1. cd /root/Kubernetes/CoreDNS/
  2. sed -i "s#KUBEDNS_SERVICE_IP#10.96.0.10#g" coredns.yaml
  3. cat coredns.yaml | grep clusterIP:
  4.   clusterIP: 10.96.0.10

10.1.2 Install

  1. kubectl  create -f coredns.yaml 
  2. serviceaccount/coredns created
  3. clusterrole.rbac.authorization.k8s.io/system:coredns created
  4. clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
  5. configmap/coredns created
  6. deployment.apps/coredns created
  7. service/kube-dns created

11. Installing Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install metrics-server

In recent Kubernetes versions, system resource metrics are collected through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

  1. # Install metrics-server
  2. cd /root/Kubernetes/metrics-server/
  3. kubectl create -f .
  4. serviceaccount/metrics-server created
  5. clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  6. clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  7. rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  8. clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  9. clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  10. service/metrics-server created
  11. deployment.apps/metrics-server created
  12. apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.1.2 Wait a moment, then check the status

  1. kubectl  top node
  2. NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
  3. k8s-master01   154m         1%     1715Mi          21%       
  4. k8s-master02   151m         1%     1274Mi          16%       
  5. k8s-master03   523m         6%     1345Mi          17%       
  6. k8s-node01     84m          1%     671Mi           8%        
  7. k8s-node02     73m          0%     727Mi           9%        
  8. k8s-node03     96m          1%     769Mi           9%        
  9. k8s-node04     68m          0%     673Mi           8%        
  10. k8s-node05     82m          1%     679Mi           8%
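
Pod-level metrics can be inspected the same way once the APIService is available; output will of course differ per cluster.

  1. kubectl top pod -A
  2. # If this returns "metrics not available", wait a minute and retry,
  3. # or check the aggregated API: kubectl get apiservices v1beta1.metrics.k8s.io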

12. Cluster validation

12.1 Deploy a test Pod

  1. cat<<EOF | kubectl apply -f -
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: busybox
  6.   namespace: default
  7. spec:
  8.   containers:
  9.   - name: busybox
  10.     image: busybox:1.28
  11.     command:
  12.       - sleep
  13.       - "3600"
  14.     imagePullPolicy: IfNotPresent
  15.   restartPolicy: Always
  16. EOF
  17. # Check
  18. kubectl  get pod
  19. NAME      READY   STATUS    RESTARTS   AGE
  20. busybox   1/1     Running   0          17s

12.2 Use the Pod to resolve the kubernetes Service in the default namespace

  1. kubectl get svc
  2. NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  3. kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h
  4. kubectl exec busybox -n default -- nslookup kubernetes
  5. Server:    10.96.0.10
  6. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  7. Name:      kubernetes
  8. Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

  1. kubectl exec busybox -n default -- nslookup kube-dns.kube-system
  2. Server:    10.96.0.10
  3. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  4. Name:      kube-dns.kube-system
  5. Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53

  1. telnet 10.96.0.1 443
  2. Trying 10.96.0.1...
  3. Connected to 10.96.0.1.
  4. Escape character is '^]'.
  5.  telnet 10.96.0.10 53
  6. Trying 10.96.0.10...
  7. Connected to 10.96.0.10.
  8. Escape character is '^]'.
  9. curl 10.96.0.10:53
  10. curl: (52) Empty reply from server
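
To run the same reachability test from every machine rather than just one, a small loop over the node IPs can help. A sketch, assuming passwordless SSH to each node and that nc (netcat) is installed there:

  1. for HOST in 192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88; do
  2.   echo "=== $HOST ==="
  3.   ssh $HOST "nc -z -w 3 10.96.0.1 443 && echo 'kubernetes svc 443 OK'; nc -z -w 3 10.96.0.10 53 && echo 'kube-dns 53 OK'"
  4. done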

12.5 Pod-to-Pod connectivity must work

  1. kubectl get po -owide
  2. NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
  3. busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>
  4.  kubectl get po -n kube-system -owide
  5. NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
  6. calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
  7. calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.81     k8s-master01   <none>           <none>
  8. calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
  9. calico-node-mdps8                          1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
  10. calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.83     k8s-master03   <none>           <none>
  11. calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.82     k8s-master02   <none>           <none>
  12. calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
  13. calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.81     k8s-master01   <none>           <none>
  14. calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
  15. coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
  16. metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>
  17. # Exec into busybox and ping a host on another node
  18. kubectl exec -ti busybox -- sh
  19. / # ping 192.168.1.84
  20. PING 192.168.1.84 (192.168.1.84): 56 data bytes
  21. 64 bytes from 192.168.1.84: seq=0 ttl=63 time=0.358 ms
  22. 64 bytes from 192.168.1.84: seq=1 ttl=63 time=0.668 ms
  23. 64 bytes from 192.168.1.84: seq=2 ttl=63 time=0.637 ms
  24. 64 bytes from 192.168.1.84: seq=3 ttl=63 time=0.624 ms
  25. 64 bytes from 192.168.1.84: seq=4 ttl=63 time=0.907 ms
  26. # Successful replies show this pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (delete them when done; a quick spread check follows the block)

  1. cat > deployments.yaml << EOF
  2. apiVersion: apps/v1
  3. kind: Deployment
  4. metadata:
  5.   name: nginx-deployment
  6.   labels:
  7.     app: nginx
  8. spec:
  9.   replicas: 3
  10.   selector:
  11.     matchLabels:
  12.       app: nginx
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: nginx
  17.     spec:
  18.       containers:
  19.       - name: nginx
  20.         image: nginx:1.14.2
  21.         ports:
  22.         - containerPort: 80
  23. EOF
  24. kubectl  apply -f deployments.yaml 
  25. deployment.apps/nginx-deployment created
  26. kubectl  get pod 
  27. NAME                               READY   STATUS    RESTARTS   AGE
  28. busybox                            1/1     Running   0          6m25s
  29. nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
  30. nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
  31. nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s
  32. # Delete nginx
  33. [root@k8s-master01 ~]# kubectl delete -f deployments.yaml
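
To see the spread explicitly, a wide listing of the Pods selected by the app=nginx label shows the NODE column; run it before the delete above if you want to observe the placement. A minimal sketch:

  1. kubectl get pod -l app=nginx -o wide
  2. # Each replica should show a different value in the NODE column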

13. Install dashboard

  1. cd /root/Kubernetes/dashboard/
  2. kubectl create -f .
  3. serviceaccount/admin-user created
  4. clusterrolebinding.rbac.authorization.k8s.io/admin-user created
  5. namespace/kubernetes-dashboard created
  6. serviceaccount/kubernetes-dashboard created
  7. service/kubernetes-dashboard created
  8. secret/kubernetes-dashboard-certs created
  9. secret/kubernetes-dashboard-csrf created
  10. secret/kubernetes-dashboard-key-holder created
  11. configmap/kubernetes-dashboard-settings created
  12. role.rbac.authorization.k8s.io/kubernetes-dashboard created
  13. clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
  14. rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  15. clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  16. deployment.apps/kubernetes-dashboard created
  17. service/dashboard-metrics-scraper created
  18. deployment.apps/dashboard-metrics-scraper created

13.1 Create an admin user

  1. cat > admin.yaml << EOF
  2. apiVersion: v1
  3. kind: ServiceAccount
  4. metadata:
  5.   name: admin-user
  6.   namespace: kube-system
  7. ---
  8. apiVersion: rbac.authorization.k8s.io/v1
  9. kind: ClusterRoleBinding 
  10. metadata: 
  11.   name: admin-user
  12.   annotations:
  13.     rbac.authorization.kubernetes.io/autoupdate: "true"
  14. roleRef:
  15.   apiGroup: rbac.authorization.k8s.io
  16.   kind: ClusterRole
  17.   name: cluster-admin
  18. subjects:
  19. - kind: ServiceAccount
  20.   name: admin-user
  21.   namespace: kube-system
  22. EOF

13.2 Apply the YAML file

  1. kubectl apply -f admin.yaml -n kube-system
  2. serviceaccount/admin-user created
  3. clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.3 Change the dashboard Service type to NodePort (skip if it already is)

  1. kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  2.   type: NodePort
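
If you prefer a non-interactive change instead of kubectl edit, the same result can be achieved with a patch; a sketch of one way to do it:

  1. kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'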

13.4 Check the port number

  1. kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
  2. NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
  3. kubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31245/TCP   10m

13.5 Retrieve the token

  1. kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
  2. Name:         admin-user-token-5vfk4
  3. Namespace:    kube-system
  4. Labels:       <none>
  5. Annotations:  kubernetes.io/service-account.name: admin-user
  6.               kubernetes.io/service-account.uid: fc2535ae-8760-4037-9026-966f03ab9bf9
  7. Type:  kubernetes.io/service-account-token
  8. Data
  9. ====
  10. ca.crt:     1363 bytes
  11. namespace:  11 bytes
  12. token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVOMnhMdHFTRWxweUlfUm93VmhMZTVXZW1FXzFrT01nQ0dTcE5uYjJlNWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTV2Zms0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzI1MzVhZS04NzYwLTQwMzctOTAyNi05NjZmMDNhYjliZjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.HSU1FeqY6pDVoXVIv4Lu27TDhCYHM-FzGsGybYL5QPJ5-P0b3tQqUH9i3AQlisiGPB--jCFT5CUeOeXneOyfV7XkC7frbn6VaQoh51n6ztkIvjUm8Q4xj_LQ2OSFfWlFUnaZsaYTdD-RCldwh63pX362T_FjgDknO4q1wtKZH5qR0mpL1dOjas50gnOSyBY0j-nSPrifhnNq3_GcDLE4LxjuzO1DfGNTEHZ6TojPJ_5ZElMolaYJsVejn2slfeUQEWdiD5AHFZlRd4exODCHyvUhRpzb9jO2rovN2LMqdE_vxBtNgXp19evQB9AgZyMMSmu1Ch2C2UAi4NxjKw8HNA
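
The token alone can also be pulled out with jsonpath, which is handier for scripting. A sketch that should work on v1.23, where the ServiceAccount still references an auto-generated token Secret:

  1. kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d ; echo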

13.6 Log in to the dashboard

https://192.168.1.81:31245/

eyJhbGciOiJSUzI1NiIsImtpZCI6InYzV2dzNnQzV3hHb2FQWnYzdnlOSmpudmtpVmNjQW5VM3daRi12SFM4dEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWs1NDVrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMzA4MDcxYy00Y2Y1LTQ1ODMtODNhMi1lYWY3ODEyNTEyYjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.pshvZPi9ZJkXUWuWilcYs1wawTpzV-nMKesgF3d_l7qyTPaK2N5ofzIThd0SjzU7BFNb4_rOm1dw1Be5kLeHjY_YW5lDnM5TAxVPXmZQ0HJ2pAQ0pjQqCHFnPD0bZFIYkeyz8pZx0Hmwcd3ZdC1yztr0ADpTAmMgI9NC2ZFIeoFFo4Ue9ZM_ulhqJQjmgoAlI_qbyjuKCNsWeEQBwM6HHHAsH1gOQIdVxqQ83OQZUuynDQRpqlHHFIndbK2zVRYFA3GgUnTu2-VRQ-DXBFRjvZR5qArnC1f383jmIjGT6VO7l04QJteG_LFetRbXa-T4mcnbsd8XutSgO0INqwKpjw

14. Install ingress

14.1 Write the configuration file (it is applied in 14.4)

  1. [root@hello ~/yaml]# vim deploy.yaml
  2. [root@hello ~/yaml]#
  3. [root@hello ~/yaml]#
  4. [root@hello ~/yaml]# cat deploy.yaml
  5. apiVersion: v1
  6. kind: Namespace
  7. metadata:
  8.   name: ingress-nginx
  9.   labels:
  10.     app.kubernetes.io/name: ingress-nginx
  11.     app.kubernetes.io/instance: ingress-nginx
  12. ---
  13. # Source: ingress-nginx/templates/controller-serviceaccount.yaml
  14. apiVersion: v1
  15. kind: ServiceAccount
  16. metadata:
  17.   labels:
  18.     helm.sh/chart: ingress-nginx-4.0.10
  19.     app.kubernetes.io/name: ingress-nginx
  20.     app.kubernetes.io/instance: ingress-nginx
  21.     app.kubernetes.io/version: 1.1.0
  22.     app.kubernetes.io/managed-by: Helm
  23.     app.kubernetes.io/component: controller
  24.   name: ingress-nginx
  25.   namespace: ingress-nginx
  26. automountServiceAccountToken: true
  27. ---
  28. # Source: ingress-nginx/templates/controller-configmap.yaml
  29. apiVersion: v1
  30. kind: ConfigMap
  31. metadata:
  32.   labels:
  33.     helm.sh/chart: ingress-nginx-4.0.10
  34.     app.kubernetes.io/name: ingress-nginx
  35.     app.kubernetes.io/instance: ingress-nginx
  36.     app.kubernetes.io/version: 1.1.0
  37.     app.kubernetes.io/managed-by: Helm
  38.     app.kubernetes.io/component: controller
  39.   name: ingress-nginx-controller
  40.   namespace: ingress-nginx
  41. data:
  42.   allow-snippet-annotations: 'true'
  43. ---
  44. # Source: ingress-nginx/templates/clusterrole.yaml
  45. apiVersion: rbac.authorization.k8s.io/v1
  46. kind: ClusterRole
  47. metadata:
  48.   labels:
  49.     helm.sh/chart: ingress-nginx-4.0.10
  50.     app.kubernetes.io/name: ingress-nginx
  51.     app.kubernetes.io/instance: ingress-nginx
  52.     app.kubernetes.io/version: 1.1.0
  53.     app.kubernetes.io/managed-by: Helm
  54.   name: ingress-nginx
  55. rules:
  56.   - apiGroups:
  57.       - ''
  58.     resources:
  59.       - configmaps
  60.       - endpoints
  61.       - nodes
  62.       - pods
  63.       - secrets
  64.       - namespaces
  65.     verbs:
  66.       - list
  67.       - watch
  68.   - apiGroups:
  69.       - ''
  70.     resources:
  71.       - nodes
  72.     verbs:
  73.       - get
  74.   - apiGroups:
  75.       - ''
  76.     resources:
  77.       - services
  78.     verbs:
  79.       - get
  80.       - list
  81.       - watch
  82.   - apiGroups:
  83.       - networking.k8s.io
  84.     resources:
  85.       - ingresses
  86.     verbs:
  87.       - get
  88.       - list
  89.       - watch
  90.   - apiGroups:
  91.       - ''
  92.     resources:
  93.       - events
  94.     verbs:
  95.       - create
  96.       - patch
  97.   - apiGroups:
  98.       - networking.k8s.io
  99.     resources:
  100.       - ingresses/status
  101.     verbs:
  102.       - update
  103.   - apiGroups:
  104.       - networking.k8s.io
  105.     resources:
  106.       - ingressclasses
  107.     verbs:
  108.       - get
  109.       - list
  110.       - watch
  111. ---
  112. # Source: ingress-nginx/templates/clusterrolebinding.yaml
  113. apiVersion: rbac.authorization.k8s.io/v1
  114. kind: ClusterRoleBinding
  115. metadata:
  116.   labels:
  117.     helm.sh/chart: ingress-nginx-4.0.10
  118.     app.kubernetes.io/name: ingress-nginx
  119.     app.kubernetes.io/instance: ingress-nginx
  120.     app.kubernetes.io/version: 1.1.0
  121.     app.kubernetes.io/managed-by: Helm
  122.   name: ingress-nginx
  123. roleRef:
  124.   apiGroup: rbac.authorization.k8s.io
  125.   kind: ClusterRole
  126.   name: ingress-nginx
  127. subjects:
  128.   - kind: ServiceAccount
  129.     name: ingress-nginx
  130.     namespace: ingress-nginx
  131. ---
  132. # Source: ingress-nginx/templates/controller-role.yaml
  133. apiVersion: rbac.authorization.k8s.io/v1
  134. kind: Role
  135. metadata:
  136.   labels:
  137.     helm.sh/chart: ingress-nginx-4.0.10
  138.     app.kubernetes.io/name: ingress-nginx
  139.     app.kubernetes.io/instance: ingress-nginx
  140.     app.kubernetes.io/version: 1.1.0
  141.     app.kubernetes.io/managed-by: Helm
  142.     app.kubernetes.io/component: controller
  143.   name: ingress-nginx
  144.   namespace: ingress-nginx
  145. rules:
  146.   - apiGroups:
  147.       - ''
  148.     resources:
  149.       - namespaces
  150.     verbs:
  151.       - get
  152.   - apiGroups:
  153.       - ''
  154.     resources:
  155.       - configmaps
  156.       - pods
  157.       - secrets
  158.       - endpoints
  159.     verbs:
  160.       - get
  161.       - list
  162.       - watch
  163.   - apiGroups:
  164.       - ''
  165.     resources:
  166.       - services
  167.     verbs:
  168.       - get
  169.       - list
  170.       - watch
  171.   - apiGroups:
  172.       - networking.k8s.io
  173.     resources:
  174.       - ingresses
  175.     verbs:
  176.       - get
  177.       - list
  178.       - watch
  179.   - apiGroups:
  180.       - networking.k8s.io
  181.     resources:
  182.       - ingresses/status
  183.     verbs:
  184.       - update
  185.   - apiGroups:
  186.       - networking.k8s.io
  187.     resources:
  188.       - ingressclasses
  189.     verbs:
  190.       - get
  191.       - list
  192.       - watch
  193.   - apiGroups:
  194.       - ''
  195.     resources:
  196.       - configmaps
  197.     resourceNames:
  198.       - ingress-controller-leader
  199.     verbs:
  200.       - get
  201.       - update
  202.   - apiGroups:
  203.       - ''
  204.     resources:
  205.       - configmaps
  206.     verbs:
  207.       - create
  208.   - apiGroups:
  209.       - ''
  210.     resources:
  211.       - events
  212.     verbs:
  213.       - create
  214.       - patch
  215. ---
  216. # Source: ingress-nginx/templates/controller-rolebinding.yaml
  217. apiVersion: rbac.authorization.k8s.io/v1
  218. kind: RoleBinding
  219. metadata:
  220.   labels:
  221.     helm.sh/chart: ingress-nginx-4.0.10
  222.     app.kubernetes.io/name: ingress-nginx
  223.     app.kubernetes.io/instance: ingress-nginx
  224.     app.kubernetes.io/version: 1.1.0
  225.     app.kubernetes.io/managed-by: Helm
  226.     app.kubernetes.io/component: controller
  227.   name: ingress-nginx
  228.   namespace: ingress-nginx
  229. roleRef:
  230.   apiGroup: rbac.authorization.k8s.io
  231.   kind: Role
  232.   name: ingress-nginx
  233. subjects:
  234.   - kind: ServiceAccount
  235.     name: ingress-nginx
  236.     namespace: ingress-nginx
  237. ---
  238. # Source: ingress-nginx/templates/controller-service-webhook.yaml
  239. apiVersion: v1
  240. kind: Service
  241. metadata:
  242.   labels:
  243.     helm.sh/chart: ingress-nginx-4.0.10
  244.     app.kubernetes.io/name: ingress-nginx
  245.     app.kubernetes.io/instance: ingress-nginx
  246.     app.kubernetes.io/version: 1.1.0
  247.     app.kubernetes.io/managed-by: Helm
  248.     app.kubernetes.io/component: controller
  249.   name: ingress-nginx-controller-admission
  250.   namespace: ingress-nginx
  251. spec:
  252.   type: ClusterIP
  253.   ports:
  254.     - name: https-webhook
  255.       port: 443
  256.       targetPort: webhook
  257.       appProtocol: https
  258.   selector:
  259.     app.kubernetes.io/name: ingress-nginx
  260.     app.kubernetes.io/instance: ingress-nginx
  261.     app.kubernetes.io/component: controller
  262. ---
  263. # Source: ingress-nginx/templates/controller-service.yaml
  264. apiVersion: v1
  265. kind: Service
  266. metadata:
  267.   annotations:
  268.   labels:
  269.     helm.sh/chart: ingress-nginx-4.0.10
  270.     app.kubernetes.io/name: ingress-nginx
  271.     app.kubernetes.io/instance: ingress-nginx
  272.     app.kubernetes.io/version: 1.1.0
  273.     app.kubernetes.io/managed-by: Helm
  274.     app.kubernetes.io/component: controller
  275.   name: ingress-nginx-controller
  276.   namespace: ingress-nginx
  277. spec:
  278.   type: NodePort
  279.   externalTrafficPolicy: Local
  280.   ipFamilyPolicy: SingleStack
  281.   ipFamilies:
  282.     - IPv4
  283.   ports:
  284.     - name: http
  285.       port: 80
  286.       protocol: TCP
  287.       targetPort: http
  288.       appProtocol: http
  289.     - name: https
  290.       port: 443
  291.       protocol: TCP
  292.       targetPort: https
  293.       appProtocol: https
  294.   selector:
  295.     app.kubernetes.io/name: ingress-nginx
  296.     app.kubernetes.io/instance: ingress-nginx
  297.     app.kubernetes.io/component: controller
  298. ---
  299. # Source: ingress-nginx/templates/controller-deployment.yaml
  300. apiVersion: apps/v1
  301. kind: Deployment
  302. metadata:
  303.   labels:
  304.     helm.sh/chart: ingress-nginx-4.0.10
  305.     app.kubernetes.io/name: ingress-nginx
  306.     app.kubernetes.io/instance: ingress-nginx
  307.     app.kubernetes.io/version: 1.1.0
  308.     app.kubernetes.io/managed-by: Helm
  309.     app.kubernetes.io/component: controller
  310.   name: ingress-nginx-controller
  311.   namespace: ingress-nginx
  312. spec:
  313.   selector:
  314.     matchLabels:
  315.       app.kubernetes.io/name: ingress-nginx
  316.       app.kubernetes.io/instance: ingress-nginx
  317.       app.kubernetes.io/component: controller
  318.   revisionHistoryLimit: 10
  319.   minReadySeconds: 0
  320.   template:
  321.     metadata:
  322.       labels:
  323.         app.kubernetes.io/name: ingress-nginx
  324.         app.kubernetes.io/instance: ingress-nginx
  325.         app.kubernetes.io/component: controller
  326.     spec:
  327.       dnsPolicy: ClusterFirst
  328.       containers:
  329.         - name: controller
  330.           image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3 
  331.           imagePullPolicy: IfNotPresent
  332.           lifecycle:
  333.             preStop:
  334.               exec:
  335.                 command:
  336.                   - /wait-shutdown
  337.           args:
  338.             - /nginx-ingress-controller
  339.             - --election-id=ingress-controller-leader
  340.             - --controller-class=k8s.io/ingress-nginx
  341.             - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  342.             - --validating-webhook=:8443
  343.             - --validating-webhook-certificate=/usr/local/certificates/cert
  344.             - --validating-webhook-key=/usr/local/certificates/key
  345.           securityContext:
  346.             capabilities:
  347.               drop:
  348.                 - ALL
  349.               add:
  350.                 - NET_BIND_SERVICE
  351.             runAsUser: 101
  352.             allowPrivilegeEscalation: true
  353.           env:
  354.             - name: POD_NAME
  355.               valueFrom:
  356.                 fieldRef:
  357.                   fieldPath: metadata.name
  358.             - name: POD_NAMESPACE
  359.               valueFrom:
  360.                 fieldRef:
  361.                   fieldPath: metadata.namespace
  362.             - name: LD_PRELOAD
  363.               value: /usr/local/lib/libmimalloc.so
  364.           livenessProbe:
  365.             failureThreshold: 5
  366.             httpGet:
  367.               path: /healthz
  368.               port: 10254
  369.               scheme: HTTP
  370.             initialDelaySeconds: 10
  371.             periodSeconds: 10
  372.             successThreshold: 1
  373.             timeoutSeconds: 1
  374.           readinessProbe:
  375.             failureThreshold: 3
  376.             httpGet:
  377.               path: /healthz
  378.               port: 10254
  379.               scheme: HTTP
  380.             initialDelaySeconds: 10
  381.             periodSeconds: 10
  382.             successThreshold: 1
  383.             timeoutSeconds: 1
  384.           ports:
  385.             - name: http
  386.               containerPort: 80
  387.               protocol: TCP
  388.             - name: https
  389.               containerPort: 443
  390.               protocol: TCP
  391.             - name: webhook
  392.               containerPort: 8443
  393.               protocol: TCP
  394.           volumeMounts:
  395.             - name: webhook-cert
  396.               mountPath: /usr/local/certificates/
  397.               readOnly: true
  398.           resources:
  399.             requests:
  400.               cpu: 100m
  401.               memory: 90Mi
  402.       nodeSelector:
  403.         kubernetes.io/os: linux
  404.       serviceAccountName: ingress-nginx
  405.       terminationGracePeriodSeconds: 300
  406.       volumes:
  407.         - name: webhook-cert
  408.           secret:
  409.             secretName: ingress-nginx-admission
  410. ---
  411. # Source: ingress-nginx/templates/controller-ingressclass.yaml
  412. # We don't support namespaced ingressClass yet
  413. # So a ClusterRole and a ClusterRoleBinding is required
  414. apiVersion: networking.k8s.io/v1
  415. kind: IngressClass
  416. metadata:
  417.   labels:
  418.     helm.sh/chart: ingress-nginx-4.0.10
  419.     app.kubernetes.io/name: ingress-nginx
  420.     app.kubernetes.io/instance: ingress-nginx
  421.     app.kubernetes.io/version: 1.1.0
  422.     app.kubernetes.io/managed-by: Helm
  423.     app.kubernetes.io/component: controller
  424.   name: nginx
  425.   namespace: ingress-nginx
  426. spec:
  427.   controller: k8s.io/ingress-nginx
  428. ---
  429. # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
  430. # before changing this value, check the required kubernetes version
  431. # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
  432. apiVersion: admissionregistration.k8s.io/v1
  433. kind: ValidatingWebhookConfiguration
  434. metadata:
  435.   labels:
  436.     helm.sh/chart: ingress-nginx-4.0.10
  437.     app.kubernetes.io/name: ingress-nginx
  438.     app.kubernetes.io/instance: ingress-nginx
  439.     app.kubernetes.io/version: 1.1.0
  440.     app.kubernetes.io/managed-by: Helm
  441.     app.kubernetes.io/component: admission-webhook
  442.   name: ingress-nginx-admission
  443. webhooks:
  444.   - name: validate.nginx.ingress.kubernetes.io
  445.     matchPolicy: Equivalent
  446.     rules:
  447.       - apiGroups:
  448.           - networking.k8s.io
  449.         apiVersions:
  450.           - v1
  451.         operations:
  452.           - CREATE
  453.           - UPDATE
  454.         resources:
  455.           - ingresses
  456.     failurePolicy: Fail
  457.     sideEffects: None
  458.     admissionReviewVersions:
  459.       - v1
  460.     clientConfig:
  461.       service:
  462.         namespace: ingress-nginx
  463.         name: ingress-nginx-controller-admission
  464.         path: /networking/v1/ingresses
  465. ---
  466. # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
  467. apiVersion: v1
  468. kind: ServiceAccount
  469. metadata:
  470.   name: ingress-nginx-admission
  471.   namespace: ingress-nginx
  472.   annotations:
  473.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  474.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  475.   labels:
  476.     helm.sh/chart: ingress-nginx-4.0.10
  477.     app.kubernetes.io/name: ingress-nginx
  478.     app.kubernetes.io/instance: ingress-nginx
  479.     app.kubernetes.io/version: 1.1.0
  480.     app.kubernetes.io/managed-by: Helm
  481.     app.kubernetes.io/component: admission-webhook
  482. ---
  483. # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
  484. apiVersion: rbac.authorization.k8s.io/v1
  485. kind: ClusterRole
  486. metadata:
  487.   name: ingress-nginx-admission
  488.   annotations:
  489.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  490.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  491.   labels:
  492.     helm.sh/chart: ingress-nginx-4.0.10
  493.     app.kubernetes.io/name: ingress-nginx
  494.     app.kubernetes.io/instance: ingress-nginx
  495.     app.kubernetes.io/version: 1.1.0
  496.     app.kubernetes.io/managed-by: Helm
  497.     app.kubernetes.io/component: admission-webhook
  498. rules:
  499.   - apiGroups:
  500.       - admissionregistration.k8s.io
  501.     resources:
  502.       - validatingwebhookconfigurations
  503.     verbs:
  504.       - get
  505.       - update
  506. ---
  507. # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
  508. apiVersion: rbac.authorization.k8s.io/v1
  509. kind: ClusterRoleBinding
  510. metadata:
  511.   name: ingress-nginx-admission
  512.   annotations:
  513.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  514.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  515.   labels:
  516.     helm.sh/chart: ingress-nginx-4.0.10
  517.     app.kubernetes.io/name: ingress-nginx
  518.     app.kubernetes.io/instance: ingress-nginx
  519.     app.kubernetes.io/version: 1.1.0
  520.     app.kubernetes.io/managed-by: Helm
  521.     app.kubernetes.io/component: admission-webhook
  522. roleRef:
  523.   apiGroup: rbac.authorization.k8s.io
  524.   kind: ClusterRole
  525.   name: ingress-nginx-admission
  526. subjects:
  527.   - kind: ServiceAccount
  528.     name: ingress-nginx-admission
  529.     namespace: ingress-nginx
  530. ---
  531. # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
  532. apiVersion: rbac.authorization.k8s.io/v1
  533. kind: Role
  534. metadata:
  535.   name: ingress-nginx-admission
  536.   namespace: ingress-nginx
  537.   annotations:
  538.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  539.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  540.   labels:
  541.     helm.sh/chart: ingress-nginx-4.0.10
  542.     app.kubernetes.io/name: ingress-nginx
  543.     app.kubernetes.io/instance: ingress-nginx
  544.     app.kubernetes.io/version: 1.1.0
  545.     app.kubernetes.io/managed-by: Helm
  546.     app.kubernetes.io/component: admission-webhook
  547. rules:
  548.   - apiGroups:
  549.       - ''
  550.     resources:
  551.       - secrets
  552.     verbs:
  553.       - get
  554.       - create
  555. ---
  556. # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
  557. apiVersion: rbac.authorization.k8s.io/v1
  558. kind: RoleBinding
  559. metadata:
  560.   name: ingress-nginx-admission
  561.   namespace: ingress-nginx
  562.   annotations:
  563.     helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
  564.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  565.   labels:
  566.     helm.sh/chart: ingress-nginx-4.0.10
  567.     app.kubernetes.io/name: ingress-nginx
  568.     app.kubernetes.io/instance: ingress-nginx
  569.     app.kubernetes.io/version: 1.1.0
  570.     app.kubernetes.io/managed-by: Helm
  571.     app.kubernetes.io/component: admission-webhook
  572. roleRef:
  573.   apiGroup: rbac.authorization.k8s.io
  574.   kind: Role
  575.   name: ingress-nginx-admission
  576. subjects:
  577.   - kind: ServiceAccount
  578.     name: ingress-nginx-admission
  579.     namespace: ingress-nginx
  580. ---
  581. # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
  582. apiVersion: batch/v1
  583. kind: Job
  584. metadata:
  585.   name: ingress-nginx-admission-create
  586.   namespace: ingress-nginx
  587.   annotations:
  588.     helm.sh/hook: pre-install,pre-upgrade
  589.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  590.   labels:
  591.     helm.sh/chart: ingress-nginx-4.0.10
  592.     app.kubernetes.io/name: ingress-nginx
  593.     app.kubernetes.io/instance: ingress-nginx
  594.     app.kubernetes.io/version: 1.1.0
  595.     app.kubernetes.io/managed-by: Helm
  596.     app.kubernetes.io/component: admission-webhook
  597. spec:
  598.   template:
  599.     metadata:
  600.       name: ingress-nginx-admission-create
  601.       labels:
  602.         helm.sh/chart: ingress-nginx-4.0.10
  603.         app.kubernetes.io/name: ingress-nginx
  604.         app.kubernetes.io/instance: ingress-nginx
  605.         app.kubernetes.io/version: 1.1.0
  606.         app.kubernetes.io/managed-by: Helm
  607.         app.kubernetes.io/component: admission-webhook
  608.     spec:
  609.       containers:
  610.         - name: create
  611.           image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1 
  612.           imagePullPolicy: IfNotPresent
  613.           args:
  614.             - create
  615.             - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
  616.             - --namespace=$(POD_NAMESPACE)
  617.             - --secret-name=ingress-nginx-admission
  618.           env:
  619.             - name: POD_NAMESPACE
  620.               valueFrom:
  621.                 fieldRef:
  622.                   fieldPath: metadata.namespace
  623.           securityContext:
  624.             allowPrivilegeEscalation: false
  625.       restartPolicy: OnFailure
  626.       serviceAccountName: ingress-nginx-admission
  627.       nodeSelector:
  628.         kubernetes.io/os: linux
  629.       securityContext:
  630.         runAsNonRoot: true
  631.         runAsUser: 2000
  632. ---
  633. # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
  634. apiVersion: batch/v1
  635. kind: Job
  636. metadata:
  637.   name: ingress-nginx-admission-patch
  638.   namespace: ingress-nginx
  639.   annotations:
  640.     helm.sh/hook: post-install,post-upgrade
  641.     helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  642.   labels:
  643.     helm.sh/chart: ingress-nginx-4.0.10
  644.     app.kubernetes.io/name: ingress-nginx
  645.     app.kubernetes.io/instance: ingress-nginx
  646.     app.kubernetes.io/version: 1.1.0
  647.     app.kubernetes.io/managed-by: Helm
  648.     app.kubernetes.io/component: admission-webhook
  649. spec:
  650.   template:
  651.     metadata:
  652.       name: ingress-nginx-admission-patch
  653.       labels:
  654.         helm.sh/chart: ingress-nginx-4.0.10
  655.         app.kubernetes.io/name: ingress-nginx
  656.         app.kubernetes.io/instance: ingress-nginx
  657.         app.kubernetes.io/version: 1.1.0
  658.         app.kubernetes.io/managed-by: Helm
  659.         app.kubernetes.io/component: admission-webhook
  660.     spec:
  661.       containers:
  662.         - name: patch
  663.           image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1 
  664.           imagePullPolicy: IfNotPresent
  665.           args:
  666.             - patch
  667.             - --webhook-name=ingress-nginx-admission
  668.             - --namespace=$(POD_NAMESPACE)
  669.             - --patch-mutating=false
  670.             - --secret-name=ingress-nginx-admission
  671.             - --patch-failure-policy=Fail
  672.           env:
  673.             - name: POD_NAMESPACE
  674.               valueFrom:
  675.                 fieldRef:
  676.                   fieldPath: metadata.namespace
  677.           securityContext:
  678.             allowPrivilegeEscalation: false
  679.       restartPolicy: OnFailure
  680.       serviceAccountName: ingress-nginx-admission
  681.       nodeSelector:
  682.         kubernetes.io/os: linux
  683.       securityContext:
  684.         runAsNonRoot: true
  685.         runAsUser: 2000
  686. [root@hello ~/yaml]#

14.2 Enable the default backend: write the configuration file

  1. [root@hello ~/yaml]# vim backend.yaml
  2. [root@hello ~/yaml]# cat backend.yaml
  3. apiVersion: apps/v1
  4. kind: Deployment
  5. metadata:
  6.   name: default-http-backend
  7.   labels:
  8.     app.kubernetes.io/name: default-http-backend
  9.   namespace: kube-system
  10. spec:
  11.   replicas: 1
  12.   selector:
  13.     matchLabels:
  14.       app.kubernetes.io/name: default-http-backend
  15.   template:
  16.     metadata:
  17.       labels:
  18.         app.kubernetes.io/name: default-http-backend
  19.     spec:
  20.       terminationGracePeriodSeconds: 60
  21.       containers:
  22.       - name: default-http-backend
  23.         image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5 
  24.         livenessProbe:
  25.           httpGet:
  26.             path: /healthz
  27.             port: 8080
  28.             scheme: HTTP
  29.           initialDelaySeconds: 30
  30.           timeoutSeconds: 5
  31.         ports:
  32.         - containerPort: 8080
  33.         resources:
  34.           limits:
  35.             cpu: 10m
  36.             memory: 20Mi
  37.           requests:
  38.             cpu: 10m
  39.             memory: 20Mi
  40. ---
  41. apiVersion: v1
  42. kind: Service
  43. metadata:
  44.   name: default-http-backend
  45.   namespace: kube-system
  46.   labels:
  47.     app.kubernetes.io/name: default-http-backend
  48. spec:
  49.   ports:
  50.   - port: 80
  51.     targetPort: 8080
  52.   selector:
  53.     app.kubernetes.io/name: default-http-backend
  54. [root@hello ~/yaml]#

14.3 Deploy a test application

  1. [root@hello ~/yaml]# vim ingress-demo-app.yaml
  2. [root@hello ~/yaml]#
  3. [root@hello ~/yaml]# cat ingress-demo-app.yaml
  4. apiVersion: apps/v1
  5. kind: Deployment
  6. metadata:
  7.   name: hello-server
  8. spec:
  9.   replicas: 2
  10.   selector:
  11.     matchLabels:
  12.       app: hello-server
  13.   template:
  14.     metadata:
  15.       labels:
  16.         app: hello-server
  17.     spec:
  18.       containers:
  19.       - name: hello-server
  20.         image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
  21.         ports:
  22.         - containerPort: 9000
  23. ---
  24. apiVersion: apps/v1
  25. kind: Deployment
  26. metadata:
  27.   labels:
  28.     app: nginx-demo
  29.   name: nginx-demo
  30. spec:
  31.   replicas: 2
  32.   selector:
  33.     matchLabels:
  34.       app: nginx-demo
  35.   template:
  36.     metadata:
  37.       labels:
  38.         app: nginx-demo
  39.     spec:
  40.       containers:
  41.       - image: nginx
  42.         name: nginx
  43. ---
  44. apiVersion: v1
  45. kind: Service
  46. metadata:
  47.   labels:
  48.     app: nginx-demo
  49.   name: nginx-demo
  50. spec:
  51.   selector:
  52.     app: nginx-demo
  53.   ports:
  54.   - port: 8000
  55.     protocol: TCP
  56.     targetPort: 80
  57. ---
  58. apiVersion: v1
  59. kind: Service
  60. metadata:
  61.   labels:
  62.     app: hello-server
  63.   name: hello-server
  64. spec:
  65.   selector:
  66.     app: hello-server
  67.   ports:
  68.   - port: 8000
  69.     protocol: TCP
  70.     targetPort: 9000
  71. ---
  72. apiVersion: networking.k8s.io/v1
  73. kind: Ingress  
  74. metadata:
  75.   name: ingress-host-bar
  76. spec:
  77.   ingressClassName: nginx
  78.   rules:
  79.   - host: "hello.chenby.cn"
  80.     http:
  81.       paths:
  82.       - pathType: Prefix
  83.         path: "/"
  84.         backend:
  85.           service:
  86.             name: hello-server
  87.             port:
  88.               number: 8000
  89.   - host: "demo.chenby.cn"
  90.     http:
  91.       paths:
  92.       - pathType: Prefix
  93.         path: "/nginx"  
  94.         backend:
  95.           service:
  96.             name: nginx-demo
  97.             port:
  98.               number: 8000
  99. [root@hello ~/yaml]#
  100. [root@hello ~/yaml]# kubectl  get ingress
  101. NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
  102. ingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20m
  103. ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s
  104. [root@hello ~/yaml]#

14.4 Apply the deployments

  1. root@hello:~# kubectl  apply -f deploy.yaml 
  2. namespace/ingress-nginx created
  3. serviceaccount/ingress-nginx created
  4. configmap/ingress-nginx-controller created
  5. clusterrole.rbac.authorization.k8s.io/ingress-nginx created
  6. clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
  7. role.rbac.authorization.k8s.io/ingress-nginx created
  8. rolebinding.rbac.authorization.k8s.io/ingress-nginx created
  9. service/ingress-nginx-controller-admission created
  10. service/ingress-nginx-controller created
  11. deployment.apps/ingress-nginx-controller created
  12. ingressclass.networking.k8s.io/nginx created
  13. validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
  14. serviceaccount/ingress-nginx-admission created
  15. clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
  16. clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
  17. role.rbac.authorization.k8s.io/ingress-nginx-admission created
  18. rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
  19. job.batch/ingress-nginx-admission-create created
  20. job.batch/ingress-nginx-admission-patch created
  21. root@hello:~# 
  22. root@hello:~# kubectl  apply -f backend.yaml 
  23. deployment.apps/default-http-backend created
  24. service/default-http-backend created
  25. root@hello:~# 
  26. root@hello:~# kubectl  apply -f ingress-demo-app.yaml 
  27. deployment.apps/hello-server created
  28. deployment.apps/nginx-demo created
  29. service/nginx-demo created
  30. service/hello-server created
  31. ingress.networking.k8s.io/ingress-host-bar created
  32. root@hello:~#

14.5 Filter for the ingress ports

  1. [root@hello ~/yaml]# kubectl  get svc -A | grep ingress
  2. default         ingress-demo-app                     ClusterIP   10.68.231.41    <none>        80/TCP                       51m
  3. ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71     <none>        80:32746/TCP,443:30538/TCP   32m
  4. ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23    <none>        443/TCP                      32m
  5. [root@hello ~/yaml]#
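
With the controller's NodePort known, the Ingress rules can be exercised from any machine without touching DNS by sending the Host header explicitly. A sketch, assuming a node IP of 192.168.1.81 and the HTTP NodePort 32746 shown above:

  1. curl -H "Host: hello.chenby.cn" http://192.168.1.81:32746/
  2. curl -H "Host: demo.chenby.cn" http://192.168.1.81:32746/nginx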

15. Install command-line auto-completion

  1. yum install bash-completion -y
  2. source /usr/share/bash-completion/bash_completion
  3. source <(kubectl completion bash)
  4. echo "source <(kubectl completion bash)" >> ~/.bashrc
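
If you also use a short k alias for kubectl, completion can be wired up for the alias as well; a small optional addition taken from the standard kubectl completion pattern:

  1. echo 'alias k=kubectl' >> ~/.bashrc
  2. echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
  3. source ~/.bashrc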

Appendix:

Configure kube-controller-manager to sign certificates valid for 100 years (set it whether or not it ends up taking effect)

  1. vim /usr/lib/systemd/system/kube-controller-manager.service
  2. # Add this somewhere under the [Service] section
  3. --cluster-signing-duration=876000h0m0s \
  4. # Restart
  5. systemctl daemon-reload 
  6. systemctl restart kube-controller-manager
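
Whether the flag actually took effect can be confirmed from the running process; a minimal check:

  1. ps -ef | grep [k]ube-controller-manager | tr ' ' '\n' | grep cluster-signing-duration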

Harden against vulnerability scans

  1. vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
  2. [Service] 
  3. Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig" 
  4. Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" 
  5. Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" 
  6. Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m" 
  7. ExecStart= 
  8. ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
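
After editing the drop-in, reload systemd and restart kubelet, then confirm the restricted cipher suites are actually on the command line; a minimal sketch:

  1. systemctl daemon-reload
  2. systemctl restart kubelet
  3. ps -ef | grep [k]ubelet | tr ' ' '\n' | grep tls-cipher-suites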

Reserve resources for the system; adjust the amounts to your needs

  1. vim /etc/kubernetes/kubelet-conf.yml
  2. rotateServerCertificates: true
  3. allowedUnsafeSysctls:
  4.  - "net.core*"
  5.  - "net.ipv4.*"
  6. kubeReserved:
  7.   cpu: "1"
  8.   memory: 1Gi
  9.   ephemeral-storage: 10Gi
  10. systemReserved:
  11.   cpu: "1"
  12.   memory: 1Gi
  13.   ephemeral-storage: 10Gi
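
After restarting kubelet with the reservations in place, the node's Allocatable should sit below its Capacity by roughly the reserved amounts. A minimal check, using k8s-node01 as an example:

  1. systemctl restart kubelet
  2. kubectl describe node k8s-node01 | grep -A 6 -E "Capacity|Allocatable"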

Keep data disks separate from the system disk; use SSDs for etcd.

https://www.oiox.cn/

https://www.chenby.cn/

https://cby-chen.github.io/

https://blog.csdn.net/qq_33921750

https://my.oschina.net/u/3981543

https://www.zhihu.com/people/chen-bu-yun-2

https://segmentfault.com/u/hppyvyv6/articles

https://juejin.cn/user/3315782802482007

https://cloud.tencent.com/developer/column/93230

https://www.jianshu.com/u/0f894314ae2c

https://www.toutiao.com/c/user/token/MS4wLjABAAAAeqOrhjsoRZSj7iBJbjLJyMwYT5D0mLOgCoo4pEmpr4A/

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Tencent Cloud, Toutiao, personal blog: search the web for 《小陈运维》.

Articles are published mainly on the WeChat official account 《Linux运维交流社区》.
