
Preface

Why containerd?

Because Kubernetes announced back in 2020 that dockershim would be removed; see the link below for the details.

Dockershim Deprecation FAQ

It has to be accepted sooner or later, so you might as well get on with it early.

k8s components

Kubernetes Components

  • master node

    Component: purpose
    etcd: a consistent and highly available key-value store, used as the backing database for all Kubernetes cluster data.
    kube-apiserver: the single entry point for all resource operations and the coordinator of the other components; it provides authentication, authorization, access control, API registration and discovery. It exposes the HTTP API; every create, delete, update, query and watch on an object is handled by the apiserver and then persisted to etcd.
    kube-controller-manager: maintains the cluster state (failure detection, auto scaling, rolling updates, and so on) and runs the routine background tasks of the cluster; every resource has its own controller, and controller-manager is what manages all of these controllers.
    kube-scheduler: handles resource scheduling, placing pods onto the appropriate machines according to the configured scheduling policies.

  • work node

    Component: purpose
    kubelet: the agent of the master on each work node; it manages the lifecycle of the containers running on that machine, such as creating containers, mounting volumes for pods, downloading secrets, and reporting container and node status. kubelet turns each pod into a set of containers and is also responsible for volume (CVI) and network (CNI) management.
    kube-proxy: provides in-cluster service discovery and load balancing for services; it implements the pod network proxy on the work node, maintaining network rules and doing layer-4 load balancing.
    container runtime: responsible for image management and for actually running pods and containers (CRI); docker and containerd are the most widely used.
    cluster networking: the cluster network system; flannel and calico are the most widely used.
    coredns: provides DNS for the whole cluster.
    ingress controller: provides an external entry point for services.
    metrics-server: provides resource monitoring.
    dashboard: provides a GUI.

Environment preparation

  IP              role          OS / kernel
  192.168.91.19   master/work   CentOS 7.6 / 3.10.0-957.el7.x86_64
  192.168.91.20   work          CentOS 7.6 / 3.10.0-957.el7.x86_64

  service          version
  etcd             v3.5.1
  kubernetes       v1.23.3
  cfssl            v1.6.1
  containerd       v1.5.9
  pause            v3.6
  flannel          v0.15.1
  coredns          v1.8.6
  metrics-server   v0.5.2
  dashboard        v2.4.0

cfssl github

etcd github

k8s github

containerd github

runc github

The packages and images used in this deployment have been uploaded to CSDN.

The master node needs at least 2 CPUs and 2 GB of memory; a work node can get by with 1 CPU and 1 GB.

Passwordless SSH needs to be set up between the nodes; the steps are not covered here, but a minimal sketch follows.
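A minimal sketch of that passwordless SSH setup, assuming root logins and the two node IPs used throughout this post:

  # run once on the node you will operate from
  ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
  # push the public key to every node, including this one
  for i in 192.168.91.19 192.168.91.20; do
    ssh-copy-id root@$i
  done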

Out of laziness... there is only one master node here.

The following steps only need to be run on one master node that has passwordless SSH access to the other nodes.

If your network is good, the images can simply be pulled on demand. If pulls keep failing, you can upload the image tarballs from your local machine and import them into containerd; the image-import steps later in this post are optional.

Create directories

Create the following path (adjust it to your own environment); it is used to store the k8s binaries and the image files used in this deployment.

  1. mkdir -p /approot1/k8s/{bin,images,pkg,tmp/{ssl,service}}

Disable the firewall

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "systemctl disable firewalld"; \
  3. ssh $i "systemctl stop firewalld"; \
  4. done

Disable selinux

Temporarily:

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "setenforce 0"; \
  3. done

Permanently:

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config"; \
  3. done

Disable swap

Temporarily:

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "swapoff -a"; \
  3. done

Permanently:

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"; \
  3. done

Enable kernel modules

Temporarily:

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "modprobe ip_vs"; \
  3. ssh $i "modprobe ip_vs_rr"; \
  4. ssh $i "modprobe ip_vs_wrr"; \
  5. ssh $i "modprobe ip_vs_sh"; \
  6. ssh $i "modprobe nf_conntrack"; \
  7. ssh $i "modprobe nf_conntrack_ipv4"; \
  8. ssh $i "modprobe br_netfilter"; \
  9. ssh $i "modprobe overlay"; \
  10. done

Permanently:

  1. vim /approot1/k8s/tmp/service/k8s-modules.conf
  1. ip_vs
  2. ip_vs_rr
  3. ip_vs_wrr
  4. ip_vs_sh
  5. nf_conntrack
  6. nf_conntrack_ipv4
  7. br_netfilter
  8. overlay

Distribute to all nodes

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/tmp/service/k8s-modules.conf $i:/etc/modules-load.d/; \
  3. done

Enable the systemd module auto-load service

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "systemctl enable systemd-modules-load"; \
  3. ssh $i "systemctl restart systemd-modules-load"; \
  4. ssh $i "systemctl is-active systemd-modules-load"; \
  5. done

If active is returned, the module auto-load service started successfully.

Configure kernel parameters

The following parameters are suitable for 3.x and 4.x kernels.

  1. vim /approot1/k8s/tmp/service/kubernetes.conf

Before editing, run :set paste inside vim first, so that pasted content does not end up different from the document (extra comments, broken indentation, and so on).

  1. # enable packet forwarding (needed for vxlan)
  2. net.ipv4.ip_forward=1
  3. # let iptables see bridged traffic
  4. net.bridge.bridge-nf-call-iptables=1
  5. net.bridge.bridge-nf-call-ip6tables=1
  6. net.bridge.bridge-nf-call-arptables=1
  7. # disable tcp_tw_recycle, otherwise it conflicts with NAT and breaks connectivity
  8. net.ipv4.tcp_tw_recycle=0
  9. # do not reuse TIME-WAIT sockets for new TCP connections
  10. net.ipv4.tcp_tw_reuse=0
  11. # upper limit of the socket listen() backlog
  12. net.core.somaxconn=32768
  13. # maximum number of tracked connections, default is nf_conntrack_buckets * 4
  14. net.netfilter.nf_conntrack_max=1000000
  15. # avoid using swap; only allow it when the system is about to OOM
  16. vm.swappiness=0
  17. # maximum number of memory map areas a process may have
  18. vm.max_map_count=655360
  19. # maximum number of file handles the kernel can allocate
  20. fs.file-max=6553600
  21. # TCP keepalive tuning
  22. net.ipv4.tcp_keepalive_time=600
  23. net.ipv4.tcp_keepalive_intvl=30
  24. net.ipv4.tcp_keepalive_probes=10

Distribute to all nodes

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/tmp/service/kubernetes.conf $i:/etc/sysctl.d/; \
  3. done

Apply the kernel parameters

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "sysctl -p /etc/sysctl.d/kubernetes.conf"; \
  3. done
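Optionally, a quick check of my own (not part of the original steps) to confirm that the key parameters took effect on every node; each should print = 1:

  for i in 192.168.91.19 192.168.91.20; do
    ssh $i "sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables"
  done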

Flush the iptables rules

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"; \
  3. ssh $i "iptables -P FORWARD ACCEPT"; \
  4. done

Configure the PATH variable

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "echo 'PATH=$PATH:/approot1/k8s/bin' >> $HOME/.bashrc"; \
  3. done
  4. source $HOME/.bashrc

Download the binaries

This only needs to be done on one node.

Downloads from GitHub can be slow; you can also upload the files from your local machine into the /approot1/k8s/pkg/ directory.

  1. wget -O /approot1/k8s/pkg/kubernetes.tar.gz \
  2. https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
  3. wget -O /approot1/k8s/pkg/etcd.tar.gz \
  4. https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

Unpack and remove unnecessary files

  1. cd /approot1/k8s/pkg/
  2. for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
  3. mv kubernetes/server/bin/ kubernetes/
  4. rm -rf kubernetes/{addons,kubernetes-src.tar.gz,LICENSES,server}
  5. rm -f kubernetes/bin/*_tag kubernetes/bin/*.tar
  6. rm -rf etcd-v3.5.1-linux-amd64/Documentation etcd-v3.5.1-linux-amd64/*.md

Deploy the master node

Create the CA root certificate

  1. wget -O /approot1/k8s/bin/cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
  2. wget -O /approot1/k8s/bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
  3. chmod +x /approot1/k8s/bin/*
  1. vim /approot1/k8s/tmp/ssl/ca-config.json
  1. {
  2. "signing": {
  3. "default": {
  4. "expiry": "87600h"
  5. },
  6. "profiles": {
  7. "kubernetes": {
  8. "usages": [
  9. "signing",
  10. "key encipherment",
  11. "server auth",
  12. "client auth"
  13. ],
  14. "expiry": "876000h"
  15. }
  16. }
  17. }
  18. }
  1. vim /approot1/k8s/tmp/ssl/ca-csr.json
  1. {
  2. "CN": "kubernetes",
  3. "key": {
  4. "algo": "rsa",
  5. "size": 2048
  6. },
  7. "names": [
  8. {
  9. "C": "CN",
  10. "ST": "ShangHai",
  11. "L": "ShangHai",
  12. "O": "k8s",
  13. "OU": "System"
  14. }
  15. ],
  16. "ca": {
  17. "expiry": "876000h"
  18. }
  19. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -initca ca-csr.json | cfssljson -bare ca
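To sanity-check the new CA, cfssl ships a certinfo subcommand that prints the subject, issuer and validity, so you can confirm the long expiry requested above (my addition, not required for the deployment):

  cd /approot1/k8s/tmp/ssl/
  ls ca.pem ca-key.pem ca.csr
  cfssl certinfo -cert ca.pem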

Deploy the etcd component

Create the etcd certificate

  1. vim /approot1/k8s/tmp/ssl/etcd-csr.json

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

Mind the JSON formatting.

  1. {
  2. "CN": "etcd",
  3. "hosts": [
  4. "127.0.0.1",
  5. "192.168.91.19"
  6. ],
  7. "key": {
  8. "algo": "rsa",
  9. "size": 2048
  10. },
  11. "names": [
  12. {
  13. "C": "CN",
  14. "ST": "ShangHai",
  15. "L": "ShangHai",
  16. "O": "k8s",
  17. "OU": "System"
  18. }
  19. ]
  20. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Manage etcd with systemd

  1. vim /approot1/k8s/tmp/service/kube-etcd.service.192.168.91.19

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

etcd parameters

  1. [Unit]
  2. Description=Etcd Server
  3. After=network.target
  4. After=network-online.target
  5. Wants=network-online.target
  6. Documentation=https://github.com/coreos
  7. [Service]
  8. Type=notify
  9. WorkingDirectory=/approot1/k8s/data/etcd
  10. ExecStart=/approot1/k8s/bin/etcd \
  11. --name=etcd-192.168.91.19 \
  12. --cert-file=/etc/kubernetes/ssl/etcd.pem \
  13. --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  14. --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  15. --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  16. --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  17. --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  18. --initial-advertise-peer-urls=https://192.168.91.19:2380 \
  19. --listen-peer-urls=https://192.168.91.19:2380 \
  20. --listen-client-urls=https://192.168.91.19:2379,http://127.0.0.1:2379 \
  21. --advertise-client-urls=https://192.168.91.19:2379 \
  22. --initial-cluster-token=etcd-cluster-0 \
  23. --initial-cluster=etcd-192.168.91.19=https://192.168.91.19:2380 \
  24. --initial-cluster-state=new \
  25. --data-dir=/approot1/k8s/data/etcd \
  26. --wal-dir= \
  27. --snapshot-count=50000 \
  28. --auto-compaction-retention=1 \
  29. --auto-compaction-mode=periodic \
  30. --max-request-bytes=10485760 \
  31. --quota-backend-bytes=8589934592
  32. Restart=always
  33. RestartSec=15
  34. LimitNOFILE=65536
  35. OOMScoreAdjust=-999
  36. [Install]
  37. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19;do \
  2. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  3. ssh $i "mkdir -m 700 -p /approot1/k8s/data/etcd"; \
  4. ssh $i "mkdir -p /approot1/k8s/bin"; \
  5. scp /approot1/k8s/tmp/ssl/{ca*.pem,etcd*.pem} $i:/etc/kubernetes/ssl/; \
  6. scp /approot1/k8s/tmp/service/kube-etcd.service.$i $i:/etc/systemd/system/kube-etcd.service; \
  7. scp /approot1/k8s/pkg/etcd-v3.5.1-linux-amd64/etcd* $i:/approot1/k8s/bin/; \
  8. done

Start the etcd service

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kube-etcd"; \
  4. ssh $i "systemctl restart kube-etcd --no-block"; \
  5. ssh $i "systemctl is-active kube-etcd"; \
  6. done

If activating is returned, etcd is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-etcd";done again.

If active is returned, etcd started successfully. With a multi-node etcd cluster it is normal for one node not to return active right away; you can verify the cluster as follows.

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
  3. --endpoints=https://${i}:2379 \
  4. --cacert=/etc/kubernetes/ssl/ca.pem \
  5. --cert=/etc/kubernetes/ssl/etcd.pem \
  6. --key=/etc/kubernetes/ssl/etcd-key.pem \
  7. endpoint health"; \
  8. done

https://192.168.91.19:2379 is healthy: successfully committed proposal: took = 7.135668ms

Output like the above containing successfully means the node is healthy.
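For a broader view you can also list the cluster members with the same etcdctl flags (my addition; -w table only changes the output format):

  for i in 192.168.91.19;do \
  ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
  --endpoints=https://${i}:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  member list -w table"; \
  done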

Deploy the apiserver component

Create the apiserver certificate

  1. vim /approot1/k8s/tmp/ssl/kubernetes-csr.json

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

Mind the JSON formatting.

10.88.0.1 is the Kubernetes service IP; make sure it is not in a network you already use, to avoid conflicts.

  1. {
  2. "CN": "kubernetes",
  3. "hosts": [
  4. "127.0.0.1",
  5. "192.168.91.19",
  6. "10.88.0.1",
  7. "kubernetes",
  8. "kubernetes.default",
  9. "kubernetes.default.svc",
  10. "kubernetes.default.svc.cluster",
  11. "kubernetes.default.svc.cluster.local"
  12. ],
  13. "key": {
  14. "algo": "rsa",
  15. "size": 2048
  16. },
  17. "names": [
  18. {
  19. "C": "CN",
  20. "ST": "ShangHai",
  21. "L": "ShangHai",
  22. "O": "k8s",
  23. "OU": "System"
  24. }
  25. ]
  26. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Create the metrics-server certificate

  1. vim /approot1/k8s/tmp/ssl/metrics-server-csr.json
  1. {
  2. "CN": "aggregator",
  3. "hosts": [
  4. ],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "ST": "ShangHai",
  13. "L": "ShangHai",
  14. "O": "k8s",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

Manage the apiserver with systemd

  1. vim /approot1/k8s/tmp/service/kube-apiserver.service.192.168.91.19

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

The network given to --service-cluster-ip-range must be the same network as the 10.88.0.1 address in kubernetes-csr.json.

If etcd has multiple nodes, --etcd-servers must list all of the etcd nodes.

apiserver parameters

  1. [Unit]
  2. Description=Kubernetes API Server
  3. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  4. After=network.target
  5. [Service]
  6. ExecStart=/approot1/k8s/bin/kube-apiserver \
  7. --allow-privileged=true \
  8. --anonymous-auth=false \
  9. --api-audiences=api,istio-ca \
  10. --authorization-mode=Node,RBAC \
  11. --bind-address=192.168.91.19 \
  12. --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  13. --endpoint-reconciler-type=lease \
  14. --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  15. --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  16. --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  17. --etcd-servers=https://192.168.91.19:2379 \
  18. --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  19. --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  20. --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  21. --secure-port=6443 \
  22. --service-account-issuer=https://kubernetes.default.svc \
  23. --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  24. --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  25. --service-cluster-ip-range=10.88.0.0/16 \
  26. --service-node-port-range=30000-32767 \
  27. --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  28. --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  29. --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  30. --requestheader-allowed-names= \
  31. --requestheader-extra-headers-prefix=X-Remote-Extra- \
  32. --requestheader-group-headers=X-Remote-Group \
  33. --requestheader-username-headers=X-Remote-User \
  34. --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
  35. --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
  36. --enable-aggregator-routing=true \
  37. --v=2
  38. Restart=always
  39. RestartSec=5
  40. Type=notify
  41. LimitNOFILE=65536
  42. [Install]
  43. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19;do \
  2. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. scp /approot1/k8s/tmp/ssl/{ca*.pem,kubernetes*.pem,metrics-server*.pem} $i:/etc/kubernetes/ssl/; \
  5. scp /approot1/k8s/tmp/service/kube-apiserver.service.$i $i:/etc/systemd/system/kube-apiserver.service; \
  6. scp /approot1/k8s/pkg/kubernetes/bin/kube-apiserver $i:/approot1/k8s/bin/; \
  7. done

Start the apiserver service

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kube-apiserver"; \
  4. ssh $i "systemctl restart kube-apiserver --no-block"; \
  5. ssh $i "systemctl is-active kube-apiserver"; \
  6. done

If activating is returned, the apiserver is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-apiserver";done again.

If active is returned, the apiserver started successfully.

  1. curl -k --cacert /etc/kubernetes/ssl/ca.pem \
  2. --cert /etc/kubernetes/ssl/kubernetes.pem \
  3. --key /etc/kubernetes/ssl/kubernetes-key.pem \
  4. https://192.168.91.19:6443/api

If something like the following is returned, the apiserver is running normally.

  1. {
  2. "kind": "APIVersions",
  3. "versions": [
  4. "v1"
  5. ],
  6. "serverAddressByClientCIDRs": [
  7. {
  8. "clientCIDR": "0.0.0.0/0",
  9. "serverAddress": "192.168.91.19:6443"
  10. }
  11. ]
  12. }

List all Kubernetes kinds (object types)

  1. curl -s -k --cacert /etc/kubernetes/ssl/ca.pem \
  2. --cert /etc/kubernetes/ssl/kubernetes.pem \
  3. --key /etc/kubernetes/ssl/kubernetes-key.pem \
  4. https://192.168.91.19:6443/api/v1/ | grep kind | sort -u
  1. "kind": "APIResourceList",
  2. "kind": "Binding",
  3. "kind": "ComponentStatus",
  4. "kind": "ConfigMap",
  5. "kind": "Endpoints",
  6. "kind": "Event",
  7. "kind": "Eviction",
  8. "kind": "LimitRange",
  9. "kind": "Namespace",
  10. "kind": "Node",
  11. "kind": "NodeProxyOptions",
  12. "kind": "PersistentVolume",
  13. "kind": "PersistentVolumeClaim",
  14. "kind": "Pod",
  15. "kind": "PodAttachOptions",
  16. "kind": "PodExecOptions",
  17. "kind": "PodPortForwardOptions",
  18. "kind": "PodProxyOptions",
  19. "kind": "PodTemplate",
  20. "kind": "ReplicationController",
  21. "kind": "ResourceQuota",
  22. "kind": "Scale",
  23. "kind": "Secret",
  24. "kind": "Service",
  25. "kind": "ServiceAccount",
  26. "kind": "ServiceProxyOptions",
  27. "kind": "TokenRequest",

Configure kubectl

Create the admin certificate
  1. vim /approot1/k8s/tmp/ssl/admin-csr.json
  1. {
  2. "CN": "admin",
  3. "hosts": [
  4. ],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "ST": "ShangHai",
  13. "L": "ShangHai",
  14. "O": "system:masters",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and to the port set by --secure-port in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
  3. --certificate-authority=ca.pem \
  4. --embed-certs=true \
  5. --server=https://192.168.91.19:6443 \
  6. --kubeconfig=kubectl.kubeconfig

Set the client authentication parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials admin \
  3. --client-certificate=admin.pem \
  4. --client-key=admin-key.pem \
  5. --embed-certs=true \
  6. --kubeconfig=kubectl.kubeconfig

Set the context parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-context kubernetes \
  3. --cluster=kubernetes \
  4. --user=admin \
  5. --kubeconfig=kubectl.kubeconfig

Set the default context

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
Distribute the kubeconfig to all master nodes

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. ssh $i "mkdir -p $HOME/.kube"; \
  5. scp /approot1/k8s/pkg/kubernetes/bin/kubectl $i:/approot1/k8s/bin/; \
  6. ssh $i "echo 'source <(kubectl completion bash)' >> $HOME/.bashrc"
  7. scp /approot1/k8s/tmp/ssl/kubectl.kubeconfig $i:$HOME/.kube/config; \
  8. done
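At this point kubectl should be able to reach the apiserver with the distributed kubeconfig. A quick check of my own (controller-manager and scheduler are deployed in the next sections, so they will still show as Unhealthy here):

  kubectl cluster-info
  kubectl get componentstatuses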

Deploy the controller-manager component

Create the controller-manager certificate

  1. vim /approot1/k8s/tmp/ssl/kube-controller-manager-csr.json

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

Mind the JSON formatting.

  1. {
  2. "CN": "system:kube-controller-manager",
  3. "key": {
  4. "algo": "rsa",
  5. "size": 2048
  6. },
  7. "hosts": [
  8. "127.0.0.1",
  9. "192.168.91.19"
  10. ],
  11. "names": [
  12. {
  13. "C": "CN",
  14. "ST": "ShangHai",
  15. "L": "ShangHai",
  16. "O": "system:kube-controller-manager",
  17. "OU": "System"
  18. }
  19. ]
  20. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and to the port set by --secure-port in the service file. Be sure to include the https:// scheme, otherwise the generated kubeconfig will not be able to reach the apiserver.

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
  3. --certificate-authority=ca.pem \
  4. --embed-certs=true \
  5. --server=https://192.168.91.19:6443 \
  6. --kubeconfig=kube-controller-manager.kubeconfig

Set the client authentication parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-controller-manager \
  3. --client-certificate=kube-controller-manager.pem \
  4. --client-key=kube-controller-manager-key.pem \
  5. --embed-certs=true \
  6. --kubeconfig=kube-controller-manager.kubeconfig

Set the context parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-controller-manager \
  3. --cluster=kubernetes \
  4. --user=system:kube-controller-manager \
  5. --kubeconfig=kube-controller-manager.kubeconfig

Set the default context

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config \
  3. use-context system:kube-controller-manager \
  4. --kubeconfig=kube-controller-manager.kubeconfig

Manage controller-manager with systemd

  1. vim /approot1/k8s/tmp/service/kube-controller-manager.service

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

The network given to --service-cluster-ip-range must be the same network as the 10.88.0.1 address in kubernetes-csr.json.

--cluster-cidr is the network the pods run in; it must differ from the --service-cluster-ip-range network and from any existing network, to avoid conflicts.

controller-manager parameters

  1. [Unit]
  2. Description=Kubernetes Controller Manager
  3. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  4. [Service]
  5. ExecStart=/approot1/k8s/bin/kube-controller-manager \
  6. --bind-address=0.0.0.0 \
  7. --allocate-node-cidrs=true \
  8. --cluster-cidr=172.20.0.0/16 \
  9. --cluster-name=kubernetes \
  10. --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  11. --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  12. --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  13. --leader-elect=true \
  14. --node-cidr-mask-size=24 \
  15. --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  16. --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  17. --service-cluster-ip-range=10.88.0.0/16 \
  18. --use-service-account-credentials=true \
  19. --v=2
  20. Restart=always
  21. RestartSec=5
  22. [Install]
  23. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19;do \
  2. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. scp /approot1/k8s/tmp/ssl/kube-controller-manager.kubeconfig $i:/etc/kubernetes/; \
  5. scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
  6. scp /approot1/k8s/tmp/service/kube-controller-manager.service $i:/etc/systemd/system/; \
  7. scp /approot1/k8s/pkg/kubernetes/bin/kube-controller-manager $i:/approot1/k8s/bin/; \
  8. done

Start the controller-manager service

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kube-controller-manager"; \
  4. ssh $i "systemctl restart kube-controller-manager --no-block"; \
  5. ssh $i "systemctl is-active kube-controller-manager"; \
  6. done

If activating is returned, controller-manager is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-controller-manager";done again.

If active is returned, controller-manager started successfully.

Deploy the scheduler component

Create the scheduler certificate

  1. vim /approot1/k8s/tmp/ssl/kube-scheduler-csr.json

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly.

Mind the JSON formatting.

  1. {
  2. "CN": "system:kube-scheduler",
  3. "key": {
  4. "algo": "rsa",
  5. "size": 2048
  6. },
  7. "hosts": [
  8. "127.0.0.1",
  9. "192.168.91.19"
  10. ],
  11. "names": [
  12. {
  13. "C": "CN",
  14. "ST": "ShangHai",
  15. "L": "ShangHai",
  16. "O": "system:kube-scheduler",
  17. "OU": "System"
  18. }
  19. ]
  20. }
  1. cd /approot1/k8s/tmp/ssl/
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and to the port set by --secure-port in the service file. Be sure to include the https:// scheme, otherwise the generated kubeconfig will not be able to reach the apiserver.

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
  3. --certificate-authority=ca.pem \
  4. --embed-certs=true \
  5. --server=https://192.168.91.19:6443 \
  6. --kubeconfig=kube-scheduler.kubeconfig

Set the client authentication parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-scheduler \
  3. --client-certificate=kube-scheduler.pem \
  4. --client-key=kube-scheduler-key.pem \
  5. --embed-certs=true \
  6. --kubeconfig=kube-scheduler.kubeconfig

Set the context parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-scheduler \
  3. --cluster=kubernetes \
  4. --user=system:kube-scheduler \
  5. --kubeconfig=kube-scheduler.kubeconfig

Set the default context

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config \
  3. use-context system:kube-scheduler \
  4. --kubeconfig=kube-scheduler.kubeconfig

Manage the scheduler with systemd

  1. vim /approot1/k8s/tmp/service/kube-scheduler.service

scheduler parameters

  1. [Unit]
  2. Description=Kubernetes Scheduler
  3. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  4. [Service]
  5. ExecStart=/approot1/k8s/bin/kube-scheduler \
  6. --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  7. --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  8. --bind-address=0.0.0.0 \
  9. --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  10. --leader-elect=true \
  11. --v=2
  12. Restart=always
  13. RestartSec=5
  14. [Install]
  15. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19;do \
  2. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. scp /approot1/k8s/tmp/ssl/{ca*.pem,kube-scheduler.kubeconfig} $i:/etc/kubernetes/; \
  5. scp /approot1/k8s/tmp/service/kube-scheduler.service $i:/etc/systemd/system/; \
  6. scp /approot1/k8s/pkg/kubernetes/bin/kube-scheduler $i:/approot1/k8s/bin/; \
  7. done

Start the scheduler service

For multiple nodes, just append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of copying blindly.

  1. for i in 192.168.91.19;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kube-scheduler"; \
  4. ssh $i "systemctl restart kube-scheduler --no-block"; \
  5. ssh $i "systemctl is-active kube-scheduler"; \
  6. done

If activating is returned, the scheduler is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-scheduler";done again.

If active is returned, the scheduler started successfully.
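Both controller-manager and scheduler use leader election; with the default settings used above they record it in Lease objects, so as an extra check (my addition) the leases should now exist in kube-system and show a holder:

  kubectl -n kube-system get lease
  # expect kube-controller-manager and kube-scheduler entries with a recent renew time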

Deploy the work nodes

Deploy the containerd component

Download the binaries

When downloading containerd from GitHub, pick the file whose name starts with cri-containerd-cni. That bundle contains containerd itself, the crictl management tool and the cni network plugins, and the systemd service file, config.toml, crictl.yaml and the cni config files are all pre-configured; a few small changes and it is ready to use.

The cri-containerd-cni bundle also ships a runc, but it is missing some dependencies, so it is still worth downloading a fresh one from the runc GitHub releases.

  1. wget -O /approot1/k8s/pkg/containerd.tar.gz \
  2. https://github.com/containerd/containerd/releases/download/v1.5.9/cri-containerd-cni-1.5.9-linux-amd64.tar.gz
  3. wget -O /approot1/k8s/pkg/runc https://github.com/opencontainers/runc/releases/download/v1.0.3/runc.amd64
  4. mkdir /approot1/k8s/pkg/containerd
  5. cd /approot1/k8s/pkg/
  6. for i in $(ls *containerd*.tar.gz);do tar xvf $i -C /approot1/k8s/pkg/containerd && rm -f $i;done
  7. chmod +x /approot1/k8s/pkg/runc
  8. mv /approot1/k8s/pkg/containerd/usr/local/bin/{containerd,containerd-shim*,crictl,ctr} /approot1/k8s/pkg/containerd/
  9. mv /approot1/k8s/pkg/containerd/opt/cni/bin/{bridge,flannel,host-local,loopback,portmap} /approot1/k8s/pkg/containerd/
  10. rm -rf /approot1/k8s/pkg/containerd/{etc,opt,usr}

Manage containerd with systemd

  1. vim /approot1/k8s/tmp/service/containerd.service

Pay attention to where the binaries are stored.

If the runc binary is not under /usr/bin/, the unit needs an Environment parameter that adds the runc directory to PATH; otherwise, when Kubernetes starts a pod you will get exec: "runc": executable file not found in $PATH: unknown.

  1. [Unit]
  2. Description=containerd container runtime
  3. Documentation=https://containerd.io
  4. After=network.target
  5. [Service]
  6. Environment="PATH=$PATH:/approot1/k8s/bin"
  7. ExecStartPre=-/sbin/modprobe overlay
  8. ExecStart=/approot1/k8s/bin/containerd
  9. Restart=always
  10. RestartSec=5
  11. Delegate=yes
  12. KillMode=process
  13. OOMScoreAdjust=-999
  14. LimitNOFILE=1048576
  15. # Having non-zero Limit*s causes performance problems due to accounting overhead
  16. # in the kernel. We recommend using cgroups to do container-local accounting.
  17. LimitNPROC=infinity
  18. LimitCORE=infinity
  19. [Install]
  20. WantedBy=multi-user.target

Configure the containerd config file

  1. vim /approot1/k8s/tmp/service/config.toml

root is the container storage path; point it at a path with enough disk space.

bin_dir is where the containerd binaries and the cni plugins live.

sandbox_image is the name and tag of the pause image.

  1. disabled_plugins = []
  2. imports = []
  3. oom_score = 0
  4. plugin_dir = ""
  5. required_plugins = []
  6. root = "/approot1/data/containerd"
  7. state = "/run/containerd"
  8. version = 2
  9. [cgroup]
  10. path = ""
  11. [debug]
  12. address = ""
  13. format = ""
  14. gid = 0
  15. level = ""
  16. uid = 0
  17. [grpc]
  18. address = "/run/containerd/containerd.sock"
  19. gid = 0
  20. max_recv_message_size = 16777216
  21. max_send_message_size = 16777216
  22. tcp_address = ""
  23. tcp_tls_cert = ""
  24. tcp_tls_key = ""
  25. uid = 0
  26. [metrics]
  27. address = ""
  28. grpc_histogram = false
  29. [plugins]
  30. [plugins."io.containerd.gc.v1.scheduler"]
  31. deletion_threshold = 0
  32. mutation_threshold = 100
  33. pause_threshold = 0.02
  34. schedule_delay = "0s"
  35. startup_delay = "100ms"
  36. [plugins."io.containerd.grpc.v1.cri"]
  37. disable_apparmor = false
  38. disable_cgroup = false
  39. disable_hugetlb_controller = true
  40. disable_proc_mount = false
  41. disable_tcp_service = true
  42. enable_selinux = false
  43. enable_tls_streaming = false
  44. ignore_image_defined_volumes = false
  45. max_concurrent_downloads = 3
  46. max_container_log_line_size = 16384
  47. netns_mounts_under_state_dir = false
  48. restrict_oom_score_adj = false
  49. sandbox_image = "k8s.gcr.io/pause:3.6"
  50. selinux_category_range = 1024
  51. stats_collect_period = 10
  52. stream_idle_timeout = "4h0m0s"
  53. stream_server_address = "127.0.0.1"
  54. stream_server_port = "0"
  55. systemd_cgroup = false
  56. tolerate_missing_hugetlb_controller = true
  57. unset_seccomp_profile = ""
  58. [plugins."io.containerd.grpc.v1.cri".cni]
  59. bin_dir = "/approot1/k8s/bin"
  60. conf_dir = "/etc/cni/net.d"
  61. conf_template = "/etc/cni/net.d/cni-default.conf"
  62. max_conf_num = 1
  63. [plugins."io.containerd.grpc.v1.cri".containerd]
  64. default_runtime_name = "runc"
  65. disable_snapshot_annotations = true
  66. discard_unpacked_layers = false
  67. no_pivot = false
  68. snapshotter = "overlayfs"
  69. [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
  70. base_runtime_spec = ""
  71. container_annotations = []
  72. pod_annotations = []
  73. privileged_without_host_devices = false
  74. runtime_engine = ""
  75. runtime_root = ""
  76. runtime_type = ""
  77. [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
  78. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  79. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  80. base_runtime_spec = ""
  81. container_annotations = []
  82. pod_annotations = []
  83. privileged_without_host_devices = false
  84. runtime_engine = ""
  85. runtime_root = ""
  86. runtime_type = "io.containerd.runc.v2"
  87. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  88. BinaryName = ""
  89. CriuImagePath = ""
  90. CriuPath = ""
  91. CriuWorkPath = ""
  92. IoGid = 0
  93. IoUid = 0
  94. NoNewKeyring = false
  95. NoPivotRoot = false
  96. Root = ""
  97. ShimCgroup = ""
  98. SystemdCgroup = true
  99. [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
  100. base_runtime_spec = ""
  101. container_annotations = []
  102. pod_annotations = []
  103. privileged_without_host_devices = false
  104. runtime_engine = ""
  105. runtime_root = ""
  106. runtime_type = ""
  107. [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
  108. [plugins."io.containerd.grpc.v1.cri".image_decryption]
  109. key_model = "node"
  110. [plugins."io.containerd.grpc.v1.cri".registry]
  111. config_path = ""
  112. [plugins."io.containerd.grpc.v1.cri".registry.auths]
  113. [plugins."io.containerd.grpc.v1.cri".registry.configs]
  114. [plugins."io.containerd.grpc.v1.cri".registry.headers]
  115. [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  116. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  117. endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
  118. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
  119. endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
  120. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
  121. endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
  122. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
  123. endpoint = ["https://quay.mirrors.ustc.edu.cn"]
  124. [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
  125. tls_cert_file = ""
  126. tls_key_file = ""
  127. [plugins."io.containerd.internal.v1.opt"]
  128. path = "/opt/containerd"
  129. [plugins."io.containerd.internal.v1.restart"]
  130. interval = "10s"
  131. [plugins."io.containerd.metadata.v1.bolt"]
  132. content_sharing_policy = "shared"
  133. [plugins."io.containerd.monitor.v1.cgroups"]
  134. no_prometheus = false
  135. [plugins."io.containerd.runtime.v1.linux"]
  136. no_shim = false
  137. runtime = "runc"
  138. runtime_root = ""
  139. shim = "containerd-shim"
  140. shim_debug = false
  141. [plugins."io.containerd.runtime.v2.task"]
  142. platforms = ["linux/amd64"]
  143. [plugins."io.containerd.service.v1.diff-service"]
  144. default = ["walking"]
  145. [plugins."io.containerd.snapshotter.v1.aufs"]
  146. root_path = ""
  147. [plugins."io.containerd.snapshotter.v1.btrfs"]
  148. root_path = ""
  149. [plugins."io.containerd.snapshotter.v1.devmapper"]
  150. async_remove = false
  151. base_image_size = ""
  152. pool_name = ""
  153. root_path = ""
  154. [plugins."io.containerd.snapshotter.v1.native"]
  155. root_path = ""
  156. [plugins."io.containerd.snapshotter.v1.overlayfs"]
  157. root_path = ""
  158. [plugins."io.containerd.snapshotter.v1.zfs"]
  159. root_path = ""
  160. [proxy_plugins]
  161. [stream_processors]
  162. [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
  163. accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
  164. args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
  165. env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
  166. path = "ctd-decoder"
  167. returns = "application/vnd.oci.image.layer.v1.tar"
  168. [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
  169. accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
  170. args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
  171. env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
  172. path = "ctd-decoder"
  173. returns = "application/vnd.oci.image.layer.v1.tar+gzip"
  174. [timeouts]
  175. "io.containerd.timeout.shim.cleanup" = "5s"
  176. "io.containerd.timeout.shim.load" = "5s"
  177. "io.containerd.timeout.shim.shutdown" = "3s"
  178. "io.containerd.timeout.task.state" = "2s"
  179. [ttrpc]
  180. address = ""
  181. gid = 0
  182. uid = 0

Configure the crictl management tool

  1. vim /approot1/k8s/tmp/service/crictl.yaml
  1. runtime-endpoint: unix:///run/containerd/containerd.sock

Configure the cni network plugin

  1. vim /approot1/k8s/tmp/service/cni-default.conf

The subnet parameter must match the --cluster-cidr parameter of controller-manager.

  1. {
  2. "name": "mynet",
  3. "cniVersion": "0.3.1",
  4. "type": "bridge",
  5. "bridge": "mynet0",
  6. "isDefaultGateway": true,
  7. "ipMasq": true,
  8. "hairpinMode": true,
  9. "ipam": {
  10. "type": "host-local",
  11. "subnet": "172.20.0.0/16"
  12. }
  13. }

Distribute the config files and create the required paths

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "mkdir -p /etc/containerd"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. ssh $i "mkdir -p /etc/cni/net.d"; \
  5. scp /approot1/k8s/tmp/service/containerd.service $i:/etc/systemd/system/; \
  6. scp /approot1/k8s/tmp/service/config.toml $i:/etc/containerd/; \
  7. scp /approot1/k8s/tmp/service/cni-default.conf $i:/etc/cni/net.d/; \
  8. scp /approot1/k8s/tmp/service/crictl.yaml $i:/etc/; \
  9. scp /approot1/k8s/pkg/containerd/* $i:/approot1/k8s/bin/; \
  10. scp /approot1/k8s/pkg/runc $i:/approot1/k8s/bin/; \
  11. done

Start the containerd service

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable containerd"; \
  4. ssh $i "systemctl restart containerd --no-block"; \
  5. ssh $i "systemctl is-active containerd"; \
  6. done

If activating is returned, containerd is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active containerd";done again.

If active is returned, containerd started successfully.
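Since crictl.yaml was copied to /etc/crictl.yaml (crictl's default config path), crictl should now be able to talk to containerd over the configured socket; a quick check of my own:

  for i in 192.168.91.19 192.168.91.20;do \
  ssh $i "/approot1/k8s/bin/crictl version"; \
  done

Expect RuntimeName: containerd and RuntimeVersion: v1.5.9 in the output.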

Import the pause image

ctr has one quirk when importing images: if you want Kubernetes to be able to use the image, you must pass -n k8s.io, and it has to go right after ctr, i.e. ctr -n k8s.io image import <xxx.tar>. Writing ctr image import <xxx.tar> -n k8s.io fails with ctr: flag provided but not defined: -n, which takes some getting used to.

If the image is imported without -n k8s.io, kubelet will try to pull the pause image again when it starts a pod, and if the configured registry does not have that tag the pod will fail.

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/images/pause-v3.6.tar $i:/tmp/
  3. ssh $i "ctr -n=k8s.io image import /tmp/pause-v3.6.tar && rm -f /tmp/pause-v3.6.tar"; \
  4. done

Check the image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "ctr -n=k8s.io image list | grep pause"; \
  3. done

Deploy the kubelet component

Create the kubelet certificates

  1. vim /approot1/k8s/tmp/ssl/kubelet-csr.json.192.168.91.19

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly. Create one JSON file per node, and change the IPs inside each file to that work node's IP; do not reuse the same one. A shortcut for generating the per-node files is shown after the template.

  1. {
  2. "CN": "system:node:192.168.91.19",
  3. "key": {
  4. "algo": "rsa",
  5. "size": 2048
  6. },
  7. "hosts": [
  8. "127.0.0.1",
  9. "192.168.91.19"
  10. ],
  11. "names": [
  12. {
  13. "C": "CN",
  14. "ST": "ShangHai",
  15. "L": "ShangHai",
  16. "O": "system:nodes",
  17. "OU": "System"
  18. }
  19. ]
  20. }
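Instead of hand-writing a JSON file for every node, the second node's file can be generated from the first with sed; this assumes the template above was saved as kubelet-csr.json.192.168.91.19:

  cd /approot1/k8s/tmp/ssl/
  sed 's/192.168.91.19/192.168.91.20/g' \
  kubelet-csr.json.192.168.91.19 > kubelet-csr.json.192.168.91.20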
  1. for i in 192.168.91.19 192.168.91.20;do \
  2. cd /approot1/k8s/tmp/ssl/; \
  3. cfssl gencert -ca=ca.pem \
  4. -ca-key=ca-key.pem \
  5. -config=ca-config.json \
  6. -profile=kubernetes kubelet-csr.json.$i | cfssljson -bare kubelet.$i; \
  7. done

Create the kubeconfig files

Set the cluster parameters

--server is the apiserver address; change it to your own IP and to the port set by --secure-port in the service file. Be sure to include the https:// scheme, otherwise the generated kubeconfig will not be able to reach the apiserver.

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. cd /approot1/k8s/tmp/ssl/; \
  3. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
  4. --certificate-authority=ca.pem \
  5. --embed-certs=true \
  6. --server=https://192.168.91.19:6443 \
  7. --kubeconfig=kubelet.kubeconfig.$i; \
  8. done

Set the client authentication parameters

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. cd /approot1/k8s/tmp/ssl/; \
  3. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:node:$i \
  4. --client-certificate=kubelet.$i.pem \
  5. --client-key=kubelet.$i-key.pem \
  6. --embed-certs=true \
  7. --kubeconfig=kubelet.kubeconfig.$i; \
  8. done

Set the context parameters

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. cd /approot1/k8s/tmp/ssl/; \
  3. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
  4. --cluster=kubernetes \
  5. --user=system:node:$i \
  6. --kubeconfig=kubelet.kubeconfig.$i; \
  7. done

Set the default context

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. cd /approot1/k8s/tmp/ssl/; \
  3. /approot1/k8s/pkg/kubernetes/bin/kubectl config \
  4. use-context default \
  5. --kubeconfig=kubelet.kubeconfig.$i; \
  6. done

Configure the kubelet config file

  1. vim /approot1/k8s/tmp/service/config.yaml

Mind the IP in clusterDNS: it must be in the same network as the apiserver --service-cluster-ip-range parameter but must differ from the Kubernetes service IP. Conventionally the Kubernetes service takes the first IP of the range and clusterDNS takes the second.

  1. kind: KubeletConfiguration
  2. apiVersion: kubelet.config.k8s.io/v1beta1
  3. address: 0.0.0.0
  4. authentication:
  5. anonymous:
  6. enabled: false
  7. webhook:
  8. cacheTTL: 2m0s
  9. enabled: true
  10. x509:
  11. clientCAFile: /etc/kubernetes/ssl/ca.pem
  12. authorization:
  13. mode: Webhook
  14. webhook:
  15. cacheAuthorizedTTL: 5m0s
  16. cacheUnauthorizedTTL: 30s
  17. cgroupDriver: systemd
  18. cgroupsPerQOS: true
  19. clusterDNS:
  20. - 10.88.0.2
  21. clusterDomain: cluster.local
  22. configMapAndSecretChangeDetectionStrategy: Watch
  23. containerLogMaxFiles: 3
  24. containerLogMaxSize: 10Mi
  25. enforceNodeAllocatable:
  26. - pods
  27. eventBurst: 10
  28. eventRecordQPS: 5
  29. evictionHard:
  30. imagefs.available: 15%
  31. memory.available: 300Mi
  32. nodefs.available: 10%
  33. nodefs.inodesFree: 5%
  34. evictionPressureTransitionPeriod: 5m0s
  35. failSwapOn: true
  36. fileCheckFrequency: 40s
  37. hairpinMode: hairpin-veth
  38. healthzBindAddress: 0.0.0.0
  39. healthzPort: 10248
  40. httpCheckFrequency: 40s
  41. imageGCHighThresholdPercent: 85
  42. imageGCLowThresholdPercent: 80
  43. imageMinimumGCAge: 2m0s
  44. kubeAPIBurst: 100
  45. kubeAPIQPS: 50
  46. makeIPTablesUtilChains: true
  47. maxOpenFiles: 1000000
  48. maxPods: 110
  49. nodeLeaseDurationSeconds: 40
  50. nodeStatusReportFrequency: 1m0s
  51. nodeStatusUpdateFrequency: 10s
  52. oomScoreAdj: -999
  53. podPidsLimit: -1
  54. port: 10250
  55. # disable readOnlyPort
  56. readOnlyPort: 0
  57. resolvConf: /etc/resolv.conf
  58. runtimeRequestTimeout: 2m0s
  59. serializeImagePulls: true
  60. streamingConnectionIdleTimeout: 4h0m0s
  61. syncFrequency: 1m0s
  62. tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
  63. tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem

Manage kubelet with systemd

  1. vim /approot1/k8s/tmp/service/kubelet.service.192.168.91.19

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly. Create one service file per node, and change the IP inside each file to that work node's IP; do not reuse the same one.

--container-runtime defaults to docker. When using anything other than docker it has to be set to remote, and --container-runtime-endpoint must point at the runtime's sock file.

kubelet parameters

  1. [Unit]
  2. Description=Kubernetes Kubelet
  3. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  4. [Service]
  5. WorkingDirectory=/approot1/k8s/data/kubelet
  6. ExecStart=/approot1/k8s/bin/kubelet \
  7. --config=/approot1/k8s/data/kubelet/config.yaml \
  8. --cni-bin-dir=/approot1/k8s/bin \
  9. --cni-conf-dir=/etc/cni/net.d \
  10. --container-runtime=remote \
  11. --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  12. --hostname-override=192.168.91.19 \
  13. --image-pull-progress-deadline=5m \
  14. --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  15. --network-plugin=cni \
  16. --pod-infra-container-image=k8s.gcr.io/pause:3.6 \
  17. --root-dir=/approot1/k8s/data/kubelet \
  18. --v=2
  19. Restart=always
  20. RestartSec=5
  21. [Install]
  22. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after the existing ones, separated by spaces; remember to change the IPs to your own instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "mkdir -p /approot1/k8s/data/kubelet"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  5. scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
  6. scp /approot1/k8s/tmp/ssl/kubelet.$i.pem $i:/etc/kubernetes/ssl/kubelet.pem; \
  7. scp /approot1/k8s/tmp/ssl/kubelet.$i-key.pem $i:/etc/kubernetes/ssl/kubelet-key.pem; \
  8. scp /approot1/k8s/tmp/ssl/kubelet.kubeconfig.$i $i:/etc/kubernetes/kubelet.kubeconfig; \
  9. scp /approot1/k8s/tmp/service/kubelet.service.$i $i:/etc/systemd/system/kubelet.service; \
  10. scp /approot1/k8s/tmp/service/config.yaml $i:/approot1/k8s/data/kubelet/; \
  11. scp /approot1/k8s/pkg/kubernetes/bin/kubelet $i:/approot1/k8s/bin/; \
  12. done

Start the kubelet service

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kubelet"; \
  4. ssh $i "systemctl restart kubelet --no-block"; \
  5. ssh $i "systemctl is-active kubelet"; \
  6. done

If activating is returned, kubelet is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kubelet";done again.

If active is returned, kubelet started successfully.

Check whether the nodes are Ready

  1. kubectl get node

Expect output similar to the following; a STATUS of Ready means the node is healthy.

  1. NAME STATUS ROLES AGE VERSION
  2. 192.168.91.19 Ready <none> 20m v1.23.3
  3. 192.168.91.20 Ready <none> 20m v1.23.3

Deploy the kube-proxy component

Create the kube-proxy certificate

  1. vim /approot1/k8s/tmp/ssl/kube-proxy-csr.json
  1. {
  2. "CN": "system:kube-proxy",
  3. "key": {
  4. "algo": "rsa",
  5. "size": 2048
  6. },
  7. "hosts": [],
  8. "names": [
  9. {
  10. "C": "CN",
  11. "ST": "ShangHai",
  12. "L": "ShangHai",
  13. "O": "system:kube-proxy",
  14. "OU": "System"
  15. }
  16. ]
  17. }
  1. cd /approot1/k8s/tmp/ssl/; \
  2. cfssl gencert -ca=ca.pem \
  3. -ca-key=ca-key.pem \
  4. -config=ca-config.json \
  5. -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and to the port set by --secure-port in the service file. Be sure to include the https:// scheme, otherwise the generated kubeconfig will not be able to reach the apiserver.

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
  3. --certificate-authority=ca.pem \
  4. --embed-certs=true \
  5. --server=https://192.168.91.19:6443 \
  6. --kubeconfig=kube-proxy.kubeconfig

Set the client authentication parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials kube-proxy \
  3. --client-certificate=kube-proxy.pem \
  4. --client-key=kube-proxy-key.pem \
  5. --embed-certs=true \
  6. --kubeconfig=kube-proxy.kubeconfig

Set the context parameters

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
  3. --cluster=kubernetes \
  4. --user=kube-proxy \
  5. --kubeconfig=kube-proxy.kubeconfig

Set the default context

  1. cd /approot1/k8s/tmp/ssl/
  2. /approot1/k8s/pkg/kubernetes/bin/kubectl config \
  3. use-context default \
  4. --kubeconfig=kube-proxy.kubeconfig

Configure the kube-proxy config file

  1. vim /approot1/k8s/tmp/service/kube-proxy-config.yaml.192.168.91.19

Replace 192.168.91.19 here with your own IP; do not copy-paste blindly. Create one config file per node, and change the IP inside each file to that work node's IP; do not reuse the same one.

clusterCIDR must match the --cluster-cidr parameter of controller-manager.

hostnameOverride must match the kubelet --hostname-override parameter, otherwise you will see node not found errors.

  1. kind: KubeProxyConfiguration
  2. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  3. bindAddress: 0.0.0.0
  4. clientConnection:
  5. kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  6. clusterCIDR: "172.20.0.0/16"
  7. conntrack:
  8. maxPerCore: 32768
  9. min: 131072
  10. tcpCloseWaitTimeout: 1h0m0s
  11. tcpEstablishedTimeout: 24h0m0s
  12. healthzBindAddress: 0.0.0.0:10256
  13. hostnameOverride: "192.168.91.19"
  14. metricsBindAddress: 0.0.0.0:10249
  15. mode: "ipvs"

Manage kube-proxy with systemd

  1. vim /approot1/k8s/tmp/service/kube-proxy.service
  1. [Unit]
  2. Description=Kubernetes Kube-Proxy Server
  3. Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  4. After=network.target
  5. [Service]
  6. # kube-proxy uses --cluster-cidr to tell in-cluster traffic from external traffic
  7. ## when --cluster-cidr or --masquerade-all is set,
  8. ## kube-proxy SNATs requests that access a Service IP
  9. WorkingDirectory=/approot1/k8s/data/kube-proxy
  10. ExecStart=/approot1/k8s/bin/kube-proxy \
  11. --config=/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml
  12. Restart=always
  13. RestartSec=5
  14. LimitNOFILE=65536
  15. [Install]
  16. WantedBy=multi-user.target

Distribute the certificates and create the required paths

For multiple nodes, just append the extra IPs after the existing ones, separated by spaces; remember to change the IPs to your own instead of copying blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust accordingly, otherwise the service will fail to start.

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "mkdir -p /approot1/k8s/data//kube-proxy"; \
  3. ssh $i "mkdir -p /approot1/k8s/bin"; \
  4. ssh $i "mkdir -p /etc/kubernetes/ssl"; \
  5. scp /approot1/k8s/tmp/ssl/kube-proxy.kubeconfig $i:/etc/kubernetes/; \
  6. scp /approot1/k8s/tmp/service/kube-proxy.service $i:/etc/systemd/system/; \
  7. scp /approot1/k8s/tmp/service/kube-proxy-config.yaml.$i $i:/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml; \
  8. scp /approot1/k8s/pkg/kubernetes/bin/kube-proxy $i:/approot1/k8s/bin/; \
  9. done

Start the kube-proxy service

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "systemctl daemon-reload"; \
  3. ssh $i "systemctl enable kube-proxy"; \
  4. ssh $i "systemctl restart kube-proxy --no-block"; \
  5. ssh $i "systemctl is-active kube-proxy"; \
  6. done

If activating is returned, kube-proxy is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kube-proxy";done again.

If active is returned, kube-proxy started successfully.
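Because mode is set to ipvs, the Service rules should show up as IPVS virtual servers on every node. A quick check of my own, assuming the ipvsadm tool is installed (for example via yum install -y ipvsadm):

  for i in 192.168.91.19 192.168.91.20;do \
  ssh $i "ipvsadm -Ln | head -20"; \
  done

The kubernetes Service (10.88.0.1:443) should appear as a virtual server pointing at 192.168.91.19:6443.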

Deploy the flannel component

flannel github

Configure the flannel yaml file

  1. vim /approot1/k8s/tmp/service/flannel.yaml

The Network parameter inside net-conf.json must match the --cluster-cidr parameter of controller-manager.

  1. ---
  2. apiVersion: policy/v1beta1
  3. kind: PodSecurityPolicy
  4. metadata:
  5. name: psp.flannel.unprivileged
  6. annotations:
  7. seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
  8. seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  9. apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
  10. apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
  11. spec:
  12. privileged: false
  13. volumes:
  14. - configMap
  15. - secret
  16. - emptyDir
  17. - hostPath
  18. allowedHostPaths:
  19. - pathPrefix: "/etc/cni/net.d"
  20. - pathPrefix: "/etc/kube-flannel"
  21. - pathPrefix: "/run/flannel"
  22. readOnlyRootFilesystem: false
  23. # Users and groups
  24. runAsUser:
  25. rule: RunAsAny
  26. supplementalGroups:
  27. rule: RunAsAny
  28. fsGroup:
  29. rule: RunAsAny
  30. # Privilege Escalation
  31. allowPrivilegeEscalation: false
  32. defaultAllowPrivilegeEscalation: false
  33. # Capabilities
  34. allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  35. defaultAddCapabilities: []
  36. requiredDropCapabilities: []
  37. # Host namespaces
  38. hostPID: false
  39. hostIPC: false
  40. hostNetwork: true
  41. hostPorts:
  42. - min: 0
  43. max: 65535
  44. # SELinux
  45. seLinux:
  46. # SELinux is unused in CaaSP
  47. rule: 'RunAsAny'
  48. ---
  49. kind: ClusterRole
  50. apiVersion: rbac.authorization.k8s.io/v1
  51. metadata:
  52. name: flannel
  53. rules:
  54. - apiGroups: ['policy']
  55. resources: ['podsecuritypolicies']
  56. verbs: ['use']
  57. resourceNames: ['psp.flannel.unprivileged']
  58. - apiGroups:
  59. - ""
  60. resources:
  61. - pods
  62. verbs:
  63. - get
  64. - apiGroups:
  65. - ""
  66. resources:
  67. - nodes
  68. verbs:
  69. - list
  70. - watch
  71. - apiGroups:
  72. - ""
  73. resources:
  74. - nodes/status
  75. verbs:
  76. - patch
  77. ---
  78. kind: ClusterRoleBinding
  79. apiVersion: rbac.authorization.k8s.io/v1
  80. metadata:
  81. name: flannel
  82. roleRef:
  83. apiGroup: rbac.authorization.k8s.io
  84. kind: ClusterRole
  85. name: flannel
  86. subjects:
  87. - kind: ServiceAccount
  88. name: flannel
  89. namespace: kube-system
  90. ---
  91. apiVersion: v1
  92. kind: ServiceAccount
  93. metadata:
  94. name: flannel
  95. namespace: kube-system
  96. ---
  97. kind: ConfigMap
  98. apiVersion: v1
  99. metadata:
  100. name: kube-flannel-cfg
  101. namespace: kube-system
  102. labels:
  103. tier: node
  104. app: flannel
  105. data:
  106. cni-conf.json: |
  107. {
  108. "name": "cbr0",
  109. "cniVersion": "0.3.1",
  110. "plugins": [
  111. {
  112. "type": "flannel",
  113. "delegate": {
  114. "hairpinMode": true,
  115. "isDefaultGateway": true
  116. }
  117. },
  118. {
  119. "type": "portmap",
  120. "capabilities": {
  121. "portMappings": true
  122. }
  123. }
  124. ]
  125. }
  126. net-conf.json: |
  127. {
  128. "Network": "172.20.0.0/16",
  129. "Backend": {
  130. "Type": "vxlan"
  131. }
  132. }
  133. ---
  134. apiVersion: apps/v1
  135. kind: DaemonSet
  136. metadata:
  137. name: kube-flannel-ds
  138. namespace: kube-system
  139. labels:
  140. tier: node
  141. app: flannel
  142. spec:
  143. selector:
  144. matchLabels:
  145. app: flannel
  146. template:
  147. metadata:
  148. labels:
  149. tier: node
  150. app: flannel
  151. spec:
  152. affinity:
  153. nodeAffinity:
  154. requiredDuringSchedulingIgnoredDuringExecution:
  155. nodeSelectorTerms:
  156. - matchExpressions:
  157. - key: kubernetes.io/os
  158. operator: In
  159. values:
  160. - linux
  161. hostNetwork: true
  162. priorityClassName: system-node-critical
  163. tolerations:
  164. - operator: Exists
  165. effect: NoSchedule
  166. serviceAccountName: flannel
  167. initContainers:
  168. - name: install-cni
  169. image: quay.io/coreos/flannel:v0.15.1
  170. command:
  171. - cp
  172. args:
  173. - -f
  174. - /etc/kube-flannel/cni-conf.json
  175. - /etc/cni/net.d/10-flannel.conflist
  176. volumeMounts:
  177. - name: cni
  178. mountPath: /etc/cni/net.d
  179. - name: flannel-cfg
  180. mountPath: /etc/kube-flannel/
  181. containers:
  182. - name: kube-flannel
  183. image: quay.io/coreos/flannel:v0.15.1
  184. command:
  185. - /opt/bin/flanneld
  186. args:
  187. - --ip-masq
  188. - --kube-subnet-mgr
  189. resources:
  190. requests:
  191. cpu: "100m"
  192. memory: "50Mi"
  193. limits:
  194. cpu: "100m"
  195. memory: "50Mi"
  196. securityContext:
  197. privileged: false
  198. capabilities:
  199. add: ["NET_ADMIN", "NET_RAW"]
  200. env:
  201. - name: POD_NAME
  202. valueFrom:
  203. fieldRef:
  204. fieldPath: metadata.name
  205. - name: POD_NAMESPACE
  206. valueFrom:
  207. fieldRef:
  208. fieldPath: metadata.namespace
  209. volumeMounts:
  210. - name: run
  211. mountPath: /run/flannel
  212. - name: flannel-cfg
  213. mountPath: /etc/kube-flannel/
  214. volumes:
  215. - name: run
  216. hostPath:
  217. path: /run/flannel
  218. - name: cni
  219. hostPath:
  220. path: /etc/cni/net.d
  221. - name: flannel-cfg
  222. configMap:
  223. name: kube-flannel-cfg

Configure the flannel cni config file

  1. vim /approot1/k8s/tmp/service/10-flannel.conflist
  1. {
  2. "name": "cbr0",
  3. "cniVersion": "0.3.1",
  4. "plugins": [
  5. {
  6. "type": "flannel",
  7. "delegate": {
  8. "hairpinMode": true,
  9. "isDefaultGateway": true
  10. }
  11. },
  12. {
  13. "type": "portmap",
  14. "capabilities": {
  15. "portMappings": true
  16. }
  17. }
  18. ]
  19. }

Import the flannel image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/images/flannel-v0.15.1.tar $i:/tmp/
  3. ssh $i "ctr -n=k8s.io image import /tmp/flannel-v0.15.1.tar && rm -f /tmp/flannel-v0.15.1.tar"; \
  4. done

Check the image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "ctr -n=k8s.io image list | grep flannel"; \
  3. done

Distribute the flannel cni config file

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "rm -f /etc/cni/net.d/10-default.conf"; \
  3. scp /approot1/k8s/tmp/service/10-flannel.conflist $i:/etc/cni/net.d/; \
  4. done

After the flannel cni config file has been distributed, the nodes will temporarily go NotReady; wait until they are back to Ready before applying the flannel component.

Run the flannel component in Kubernetes

  1. kubectl apply -f /approot1/k8s/tmp/service/flannel.yaml

Check whether the flannel pods are running

  1. kubectl get pod -n kube-system | grep flannel

Expect output similar to the following.

flannel runs as a DaemonSet, so its pods live and die with the nodes: Kubernetes runs one flannel pod per node, and when a node is removed its flannel pod is removed with it.

  1. kube-flannel-ds-86rrv 1/1 Running 0 8m54s
  2. kube-flannel-ds-bkgzx 1/1 Running 0 8m53s

On the SUSE 12 distribution the pods may end up in Init:CreateContainerError. Run kubectl describe pod -n kube-system <flannel_pod_name> to find the reason; if it is Error: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH, locate the binary with which apparmor_parser, create a symlink to it in the directory that holds the kubelet binary, and restart the pod. Note that the symlink has to be created on every node that runs flannel; a sketch follows.
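A minimal sketch of that workaround, assuming the binary directory used throughout this post (/approot1/k8s/bin) is the one on kubelet's PATH:

  # on every SUSE 12 node that runs flannel
  ln -s "$(which apparmor_parser)" /approot1/k8s/bin/apparmor_parser
  # then delete the failing pod so the DaemonSet recreates it
  kubectl -n kube-system delete pod <flannel_pod_name>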

Deploy the coredns component

Configure the coredns yaml file

  1. vim /approot1/k8s/tmp/service/coredns.yaml

The clusterIP parameter must match the clusterDNS parameter in the kubelet config file.

  1. apiVersion: v1
  2. kind: ServiceAccount
  3. metadata:
  4. name: coredns
  5. namespace: kube-system
  6. labels:
  7. kubernetes.io/cluster-service: "true"
  8. addonmanager.kubernetes.io/mode: Reconcile
  9. ---
  10. apiVersion: rbac.authorization.k8s.io/v1
  11. kind: ClusterRole
  12. metadata:
  13. labels:
  14. kubernetes.io/bootstrapping: rbac-defaults
  15. addonmanager.kubernetes.io/mode: Reconcile
  16. name: system:coredns
  17. rules:
  18. - apiGroups:
  19. - ""
  20. resources:
  21. - endpoints
  22. - services
  23. - pods
  24. - namespaces
  25. verbs:
  26. - list
  27. - watch
  28. - apiGroups:
  29. - ""
  30. resources:
  31. - nodes
  32. verbs:
  33. - get
  34. - apiGroups:
  35. - discovery.k8s.io
  36. resources:
  37. - endpointslices
  38. verbs:
  39. - list
  40. - watch
  41. ---
  42. apiVersion: rbac.authorization.k8s.io/v1
  43. kind: ClusterRoleBinding
  44. metadata:
  45. annotations:
  46. rbac.authorization.kubernetes.io/autoupdate: "true"
  47. labels:
  48. kubernetes.io/bootstrapping: rbac-defaults
  49. addonmanager.kubernetes.io/mode: EnsureExists
  50. name: system:coredns
  51. roleRef:
  52. apiGroup: rbac.authorization.k8s.io
  53. kind: ClusterRole
  54. name: system:coredns
  55. subjects:
  56. - kind: ServiceAccount
  57. name: coredns
  58. namespace: kube-system
  59. ---
  60. apiVersion: v1
  61. kind: ConfigMap
  62. metadata:
  63. name: coredns
  64. namespace: kube-system
  65. labels:
  66. addonmanager.kubernetes.io/mode: EnsureExists
  67. data:
  68. Corefile: |
  69. .:53 {
  70. errors
  71. health {
  72. lameduck 5s
  73. }
  74. ready
  75. kubernetes cluster.local in-addr.arpa ip6.arpa {
  76. pods insecure
  77. fallthrough in-addr.arpa ip6.arpa
  78. ttl 30
  79. }
  80. prometheus :9153
  81. forward . /etc/resolv.conf {
  82. max_concurrent 1000
  83. }
  84. cache 30
  85. reload
  86. loadbalance
  87. }
  88. ---
  89. apiVersion: apps/v1
  90. kind: Deployment
  91. metadata:
  92. name: coredns
  93. namespace: kube-system
  94. labels:
  95. k8s-app: kube-dns
  96. kubernetes.io/cluster-service: "true"
  97. addonmanager.kubernetes.io/mode: Reconcile
  98. kubernetes.io/name: "CoreDNS"
  99. spec:
  100. replicas: 1
  101. strategy:
  102. type: RollingUpdate
  103. rollingUpdate:
  104. maxUnavailable: 1
  105. selector:
  106. matchLabels:
  107. k8s-app: kube-dns
  108. template:
  109. metadata:
  110. labels:
  111. k8s-app: kube-dns
  112. spec:
  113. securityContext:
  114. seccompProfile:
  115. type: RuntimeDefault
  116. priorityClassName: system-cluster-critical
  117. serviceAccountName: coredns
  118. affinity:
  119. podAntiAffinity:
  120. preferredDuringSchedulingIgnoredDuringExecution:
  121. - weight: 100
  122. podAffinityTerm:
  123. labelSelector:
  124. matchExpressions:
  125. - key: k8s-app
  126. operator: In
  127. values: ["kube-dns"]
  128. topologyKey: kubernetes.io/hostname
  129. tolerations:
  130. - key: "CriticalAddonsOnly"
  131. operator: "Exists"
  132. nodeSelector:
  133. kubernetes.io/os: linux
  134. containers:
  135. - name: coredns
  136. image: docker.io/coredns/coredns:1.8.6
  137. imagePullPolicy: IfNotPresent
  138. resources:
  139. limits:
  140. memory: 300Mi
  141. requests:
  142. cpu: 100m
  143. memory: 70Mi
  144. args: [ "-conf", "/etc/coredns/Corefile" ]
  145. volumeMounts:
  146. - name: config-volume
  147. mountPath: /etc/coredns
  148. readOnly: true
  149. ports:
  150. - containerPort: 53
  151. name: dns
  152. protocol: UDP
  153. - containerPort: 53
  154. name: dns-tcp
  155. protocol: TCP
  156. - containerPort: 9153
  157. name: metrics
  158. protocol: TCP
  159. livenessProbe:
  160. httpGet:
  161. path: /health
  162. port: 8080
  163. scheme: HTTP
  164. initialDelaySeconds: 60
  165. timeoutSeconds: 5
  166. successThreshold: 1
  167. failureThreshold: 5
  168. readinessProbe:
  169. httpGet:
  170. path: /ready
  171. port: 8181
  172. scheme: HTTP
  173. securityContext:
  174. allowPrivilegeEscalation: false
  175. capabilities:
  176. add:
  177. - NET_BIND_SERVICE
  178. drop:
  179. - all
  180. readOnlyRootFilesystem: true
  181. dnsPolicy: Default
  182. volumes:
  183. - name: config-volume
  184. configMap:
  185. name: coredns
  186. items:
  187. - key: Corefile
  188. path: Corefile
  189. ---
  190. apiVersion: v1
  191. kind: Service
  192. metadata:
  193. name: kube-dns
  194. namespace: kube-system
  195. annotations:
  196. prometheus.io/port: "9153"
  197. prometheus.io/scrape: "true"
  198. labels:
  199. k8s-app: kube-dns
  200. kubernetes.io/cluster-service: "true"
  201. addonmanager.kubernetes.io/mode: Reconcile
  202. kubernetes.io/name: "CoreDNS"
  203. spec:
  204. selector:
  205. k8s-app: kube-dns
  206. clusterIP: 10.88.0.2
  207. ports:
  208. - name: dns
  209. port: 53
  210. protocol: UDP
  211. - name: dns-tcp
  212. port: 53
  213. protocol: TCP
  214. - name: metrics
  215. port: 9153
  216. protocol: TCP
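
A quick cross-check sketch before applying the manifest; the kubelet config path below is only an assumption, so substitute the config file you created in the kubelet step of this guide:

# clusterDNS in the kubelet config (path is an assumption, adjust to your layout)
grep -A1 clusterDNS /var/lib/kubelet/config.yaml
# after the manifest is applied, the kube-dns service should carry the same IP
kubectl get svc -n kube-system kube-dns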

Import the coredns image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/images/coredns-v1.8.6.tar $i:/tmp/
  3. ssh $i "ctr -n=k8s.io image import /tmp/coredns-v1.8.6.tar && rm -f /tmp/coredns-v1.8.6.tar"; \
  4. done

Check the image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "ctr -n=k8s.io image list | grep coredns"; \
  3. done

Run the coredns component in k8s

  1. kubectl apply -f /approot1/k8s/tmp/service/coredns.yaml

Check whether the coredns pod is running

  1. kubectl get pod -n kube-system | grep coredns

The expected output looks something like this

Because the replicas parameter in the coredns yaml file is set to 1, there is only one pod here; setting it to 2 would give two pods.

  1. coredns-5fd74ff788-cddqf 1/1 Running 0 10s
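
Once the pod is up, a quick way to confirm that cluster DNS actually resolves is to start a throwaway pod and look up a built-in service name; this assumes the nodes can pull (or have already imported) the public busybox:1.28 image:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local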

Deploy the metrics-server component

Configure the metrics-server yaml file

  1. vim /approot1/k8s/tmp/service/metrics-server.yaml
  1. apiVersion: v1
  2. kind: ServiceAccount
  3. metadata:
  4. labels:
  5. k8s-app: metrics-server
  6. name: metrics-server
  7. namespace: kube-system
  8. ---
  9. apiVersion: rbac.authorization.k8s.io/v1
  10. kind: ClusterRole
  11. metadata:
  12. labels:
  13. k8s-app: metrics-server
  14. rbac.authorization.k8s.io/aggregate-to-admin: "true"
  15. rbac.authorization.k8s.io/aggregate-to-edit: "true"
  16. rbac.authorization.k8s.io/aggregate-to-view: "true"
  17. name: system:aggregated-metrics-reader
  18. rules:
  19. - apiGroups:
  20. - metrics.k8s.io
  21. resources:
  22. - pods
  23. - nodes
  24. verbs:
  25. - get
  26. - list
  27. - watch
  28. ---
  29. apiVersion: rbac.authorization.k8s.io/v1
  30. kind: ClusterRole
  31. metadata:
  32. labels:
  33. k8s-app: metrics-server
  34. name: system:metrics-server
  35. rules:
  36. - apiGroups:
  37. - ""
  38. resources:
  39. - pods
  40. - nodes
  41. - nodes/stats
  42. - namespaces
  43. - configmaps
  44. verbs:
  45. - get
  46. - list
  47. - watch
  48. ---
  49. apiVersion: rbac.authorization.k8s.io/v1
  50. kind: RoleBinding
  51. metadata:
  52. labels:
  53. k8s-app: metrics-server
  54. name: metrics-server-auth-reader
  55. namespace: kube-system
  56. roleRef:
  57. apiGroup: rbac.authorization.k8s.io
  58. kind: Role
  59. name: extension-apiserver-authentication-reader
  60. subjects:
  61. - kind: ServiceAccount
  62. name: metrics-server
  63. namespace: kube-system
  64. ---
  65. apiVersion: rbac.authorization.k8s.io/v1
  66. kind: ClusterRoleBinding
  67. metadata:
  68. labels:
  69. k8s-app: metrics-server
  70. name: metrics-server:system:auth-delegator
  71. roleRef:
  72. apiGroup: rbac.authorization.k8s.io
  73. kind: ClusterRole
  74. name: system:auth-delegator
  75. subjects:
  76. - kind: ServiceAccount
  77. name: metrics-server
  78. namespace: kube-system
  79. ---
  80. apiVersion: rbac.authorization.k8s.io/v1
  81. kind: ClusterRoleBinding
  82. metadata:
  83. labels:
  84. k8s-app: metrics-server
  85. name: system:metrics-server
  86. roleRef:
  87. apiGroup: rbac.authorization.k8s.io
  88. kind: ClusterRole
  89. name: system:metrics-server
  90. subjects:
  91. - kind: ServiceAccount
  92. name: metrics-server
  93. namespace: kube-system
  94. ---
  95. apiVersion: v1
  96. kind: Service
  97. metadata:
  98. labels:
  99. k8s-app: metrics-server
  100. name: metrics-server
  101. namespace: kube-system
  102. spec:
  103. ports:
  104. - name: https
  105. port: 443
  106. protocol: TCP
  107. targetPort: https
  108. selector:
  109. k8s-app: metrics-server
  110. ---
  111. apiVersion: apps/v1
  112. kind: Deployment
  113. metadata:
  114. labels:
  115. k8s-app: metrics-server
  116. name: metrics-server
  117. namespace: kube-system
  118. spec:
  119. selector:
  120. matchLabels:
  121. k8s-app: metrics-server
  122. strategy:
  123. rollingUpdate:
  124. maxUnavailable: 0
  125. template:
  126. metadata:
  127. labels:
  128. k8s-app: metrics-server
  129. spec:
  130. containers:
  131. - args:
  132. - --cert-dir=/tmp
  133. - --secure-port=4443
  134. - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  135. - --kubelet-insecure-tls
  136. - --kubelet-use-node-status-port
  137. - --metric-resolution=15s
  138. image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
  139. imagePullPolicy: IfNotPresent
  140. livenessProbe:
  141. failureThreshold: 3
  142. httpGet:
  143. path: /livez
  144. port: https
  145. scheme: HTTPS
  146. periodSeconds: 10
  147. name: metrics-server
  148. ports:
  149. - containerPort: 4443
  150. name: https
  151. protocol: TCP
  152. readinessProbe:
  153. failureThreshold: 3
  154. httpGet:
  155. path: /readyz
  156. port: https
  157. scheme: HTTPS
  158. initialDelaySeconds: 20
  159. periodSeconds: 10
  160. resources:
  161. requests:
  162. cpu: 100m
  163. memory: 200Mi
  164. securityContext:
  165. readOnlyRootFilesystem: true
  166. runAsNonRoot: true
  167. runAsUser: 1000
  168. volumeMounts:
  169. - mountPath: /tmp
  170. name: tmp-dir
  171. nodeSelector:
  172. kubernetes.io/os: linux
  173. priorityClassName: system-cluster-critical
  174. serviceAccountName: metrics-server
  175. volumes:
  176. - emptyDir: {}
  177. name: tmp-dir
  178. ---
  179. apiVersion: apiregistration.k8s.io/v1
  180. kind: APIService
  181. metadata:
  182. labels:
  183. k8s-app: metrics-server
  184. name: v1beta1.metrics.k8s.io
  185. spec:
  186. group: metrics.k8s.io
  187. groupPriorityMinimum: 100
  188. insecureSkipTLSVerify: true
  189. service:
  190. name: metrics-server
  191. namespace: kube-system
  192. version: v1beta1
  193. versionPriority: 100

Import the metrics-server image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. scp /approot1/k8s/images/metrics-server-v0.5.2.tar $i:/tmp/
  3. ssh $i "ctr -n=k8s.io image import /tmp/metrics-server-v0.5.2.tar && rm -f /tmp/metrics-server-v0.5.2.tar"; \
  4. done

Check the image

  1. for i in 192.168.91.19 192.168.91.20;do \
  2. ssh $i "ctr -n=k8s.io image list | grep metrics-server"; \
  3. done

Run the metrics-server component in k8s

  1. kubectl apply -f /approot1/k8s/tmp/service/metrics-server.yaml

Check whether the metrics-server pod is running

  1. kubectl get pod -n kube-system | grep metrics-server

The expected output looks something like this

  1. metrics-server-6c95598969-qnc76 1/1 Running 0 71s

Verify that metrics-server works

Check node resource usage

  1. kubectl top node

The expected output looks something like this

metrics-server can be slow to start, depending on the machine spec; if the output says is not yet or is not ready, wait a bit and run kubectl top node again (an APIService check follows the sample output below).

  1. NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
  2. 192.168.91.19 285m 4% 2513Mi 32%
  3. 192.168.91.20 71m 3% 792Mi 21%
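
If kubectl top node keeps failing, the status of the metrics APIService registered by the manifest above usually explains why:

kubectl get apiservice v1beta1.metrics.k8s.io
# AVAILABLE should read True; if it reads False, describe it to see the failure message
kubectl describe apiservice v1beta1.metrics.k8s.io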

Check pod resource usage in a specific namespace

  1. kubectl top pod -n kube-system

The expected output looks something like this

  1. NAME CPU(cores) MEMORY(bytes)
  2. coredns-5fd74ff788-cddqf 11m 18Mi
  3. kube-flannel-ds-86rrv 4m 18Mi
  4. kube-flannel-ds-bkgzx 6m 22Mi
  5. kube-flannel-ds-v25xc 6m 22Mi
  6. metrics-server-6c95598969-qnc76 6m 22Mi
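
kubectl top can also sort its output, which helps when looking for the heaviest pods; --sort-by accepts cpu or memory:

kubectl top pod -n kube-system --sort-by=memory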

Deploy the dashboard component

Configure the dashboard yaml file

  1. vim /approot1/k8s/tmp/service/dashboard.yaml
  1. ---
  2. apiVersion: v1
  3. kind: ServiceAccount
  4. metadata:
  5. name: admin-user
  6. namespace: kube-system
  7. ---
  8. apiVersion: rbac.authorization.k8s.io/v1
  9. kind: ClusterRoleBinding
  10. metadata:
  11. name: admin-user
  12. roleRef:
  13. apiGroup: rbac.authorization.k8s.io
  14. kind: ClusterRole
  15. name: cluster-admin
  16. subjects:
  17. - kind: ServiceAccount
  18. name: admin-user
  19. namespace: kube-system
  20. ---
  21. apiVersion: v1
  22. kind: ServiceAccount
  23. metadata:
  24. name: dashboard-read-user
  25. namespace: kube-system
  26. ---
  27. apiVersion: rbac.authorization.k8s.io/v1
  28. kind: ClusterRoleBinding
  29. metadata:
  30. name: dashboard-read-binding
  31. roleRef:
  32. apiGroup: rbac.authorization.k8s.io
  33. kind: ClusterRole
  34. name: dashboard-read-clusterrole
  35. subjects:
  36. - kind: ServiceAccount
  37. name: dashboard-read-user
  38. namespace: kube-system
  39. ---
  40. apiVersion: rbac.authorization.k8s.io/v1
  41. kind: ClusterRole
  42. metadata:
  43. name: dashboard-read-clusterrole
  44. rules:
  45. - apiGroups:
  46. - ""
  47. resources:
  48. - configmaps
  49. - endpoints
  50. - nodes
  51. - persistentvolumes
  52. - persistentvolumeclaims
  53. - persistentvolumeclaims/status
  54. - pods
  55. - replicationcontrollers
  56. - replicationcontrollers/scale
  57. - serviceaccounts
  58. - services
  59. - services/status
  60. verbs:
  61. - get
  62. - list
  63. - watch
  64. - apiGroups:
  65. - ""
  66. resources:
  67. - bindings
  68. - events
  69. - limitranges
  70. - namespaces/status
  71. - pods/log
  72. - pods/status
  73. - replicationcontrollers/status
  74. - resourcequotas
  75. - resourcequotas/status
  76. verbs:
  77. - get
  78. - list
  79. - watch
  80. - apiGroups:
  81. - ""
  82. resources:
  83. - namespaces
  84. verbs:
  85. - get
  86. - list
  87. - watch
  88. - apiGroups:
  89. - apps
  90. resources:
  91. - controllerrevisions
  92. - daemonsets
  93. - daemonsets/status
  94. - deployments
  95. - deployments/scale
  96. - deployments/status
  97. - replicasets
  98. - replicasets/scale
  99. - replicasets/status
  100. - statefulsets
  101. - statefulsets/scale
  102. - statefulsets/status
  103. verbs:
  104. - get
  105. - list
  106. - watch
  107. - apiGroups:
  108. - autoscaling
  109. resources:
  110. - horizontalpodautoscalers
  111. - horizontalpodautoscalers/status
  112. verbs:
  113. - get
  114. - list
  115. - watch
  116. - apiGroups:
  117. - batch
  118. resources:
  119. - cronjobs
  120. - cronjobs/status
  121. - jobs
  122. - jobs/status
  123. verbs:
  124. - get
  125. - list
  126. - watch
  127. - apiGroups:
  128. - extensions
  129. resources:
  130. - daemonsets
  131. - daemonsets/status
  132. - deployments
  133. - deployments/scale
  134. - deployments/status
  135. - ingresses
  136. - ingresses/status
  137. - replicasets
  138. - replicasets/scale
  139. - replicasets/status
  140. - replicationcontrollers/scale
  141. verbs:
  142. - get
  143. - list
  144. - watch
  145. - apiGroups:
  146. - policy
  147. resources:
  148. - poddisruptionbudgets
  149. - poddisruptionbudgets/status
  150. verbs:
  151. - get
  152. - list
  153. - watch
  154. - apiGroups:
  155. - networking.k8s.io
  156. resources:
  157. - ingresses
  158. - ingresses/status
  159. - networkpolicies
  160. verbs:
  161. - get
  162. - list
  163. - watch
  164. - apiGroups:
  165. - storage.k8s.io
  166. resources:
  167. - storageclasses
  168. - volumeattachments
  169. verbs:
  170. - get
  171. - list
  172. - watch
  173. - apiGroups:
  174. - rbac.authorization.k8s.io
  175. resources:
  176. - clusterrolebindings
  177. - clusterroles
  178. - roles
  179. - rolebindings
  180. verbs:
  181. - get
  182. - list
  183. - watch
  184. ---
  185. apiVersion: v1
  186. kind: ServiceAccount
  187. metadata:
  188. labels:
  189. k8s-app: kubernetes-dashboard
  190. name: kubernetes-dashboard
  191. namespace: kube-system
  192. ---
  193. kind: Service
  194. apiVersion: v1
  195. metadata:
  196. labels:
  197. k8s-app: kubernetes-dashboard
  198. kubernetes.io/cluster-service: "true"
  199. name: kubernetes-dashboard
  200. namespace: kube-system
  201. spec:
  202. ports:
  203. - port: 443
  204. targetPort: 8443
  205. selector:
  206. k8s-app: kubernetes-dashboard
  207. type: NodePort
  208. ---
  209. apiVersion: v1
  210. kind: Secret
  211. metadata:
  212. labels:
  213. k8s-app: kubernetes-dashboard
  214. name: kubernetes-dashboard-certs
  215. namespace: kube-system
  216. type: Opaque
  217. ---
  218. apiVersion: v1
  219. kind: Secret
  220. metadata:
  221. labels:
  222. k8s-app: kubernetes-dashboard
  223. name: kubernetes-dashboard-csrf
  224. namespace: kube-system
  225. type: Opaque
  226. data:
  227. csrf: ""
  228. ---
  229. apiVersion: v1
  230. kind: Secret
  231. metadata:
  232. labels:
  233. k8s-app: kubernetes-dashboard
  234. name: kubernetes-dashboard-key-holder
  235. namespace: kube-system
  236. type: Opaque
  237. ---
  238. kind: ConfigMap
  239. apiVersion: v1
  240. metadata:
  241. labels:
  242. k8s-app: kubernetes-dashboard
  243. name: kubernetes-dashboard-settings
  244. namespace: kube-system
  245. ---
  246. kind: Role
  247. apiVersion: rbac.authorization.k8s.io/v1
  248. metadata:
  249. labels:
  250. k8s-app: kubernetes-dashboard
  251. name: kubernetes-dashboard
  252. namespace: kube-system
  253. rules:
  254. # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  255. - apiGroups: [""]
  256. resources: ["secrets"]
  257. resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  258. verbs: ["get", "update", "delete"]
  259. # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  260. - apiGroups: [""]
  261. resources: ["configmaps"]
  262. resourceNames: ["kubernetes-dashboard-settings"]
  263. verbs: ["get", "update"]
  264. # Allow Dashboard to get metrics.
  265. - apiGroups: [""]
  266. resources: ["services"]
  267. resourceNames: ["heapster", "dashboard-metrics-scraper"]
  268. verbs: ["proxy"]
  269. - apiGroups: [""]
  270. resources: ["services/proxy"]
  271. resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  272. verbs: ["get"]
  273. ---
  274. kind: ClusterRole
  275. apiVersion: rbac.authorization.k8s.io/v1
  276. metadata:
  277. labels:
  278. k8s-app: kubernetes-dashboard
  279. name: kubernetes-dashboard
  280. rules:
  281. # Allow Metrics Scraper to get metrics from the Metrics server
  282. - apiGroups: ["metrics.k8s.io"]
  283. resources: ["pods", "nodes"]
  284. verbs: ["get", "list", "watch"]
  285. ---
  286. apiVersion: rbac.authorization.k8s.io/v1
  287. kind: RoleBinding
  288. metadata:
  289. labels:
  290. k8s-app: kubernetes-dashboard
  291. name: kubernetes-dashboard
  292. namespace: kube-system
  293. roleRef:
  294. apiGroup: rbac.authorization.k8s.io
  295. kind: Role
  296. name: kubernetes-dashboard
  297. subjects:
  298. - kind: ServiceAccount
  299. name: kubernetes-dashboard
  300. namespace: kube-system
  301. ---
  302. apiVersion: rbac.authorization.k8s.io/v1
  303. kind: ClusterRoleBinding
  304. metadata:
  305. name: kubernetes-dashboard
  306. roleRef:
  307. apiGroup: rbac.authorization.k8s.io
  308. kind: ClusterRole
  309. name: kubernetes-dashboard
  310. subjects:
  311. - kind: ServiceAccount
  312. name: kubernetes-dashboard
  313. namespace: kube-system
  314. ---
  315. kind: Deployment
  316. apiVersion: apps/v1
  317. metadata:
  318. labels:
  319. k8s-app: kubernetes-dashboard
  320. name: kubernetes-dashboard
  321. namespace: kube-system
  322. spec:
  323. replicas: 1
  324. revisionHistoryLimit: 10
  325. selector:
  326. matchLabels:
  327. k8s-app: kubernetes-dashboard
  328. template:
  329. metadata:
  330. labels:
  331. k8s-app: kubernetes-dashboard
  332. spec:
  333. containers:
  334. - name: kubernetes-dashboard
  335. image: kubernetesui/dashboard:v2.4.0
  336. imagePullPolicy: IfNotPresent
  337. ports:
  338. - containerPort: 8443
  339. protocol: TCP
  340. args:
  341. - --auto-generate-certificates
  342. - --namespace=kube-system
  343. - --token-ttl=1800
  344. - --sidecar-host=http://dashboard-metrics-scraper:8000
  345. # Uncomment the following line to manually specify Kubernetes API server Host
  346. # If not specified, Dashboard will attempt to auto discover the API server and connect
  347. # to it. Uncomment only if the default does not work.
  348. # - --apiserver-host=http://my-address:port
  349. volumeMounts:
  350. - name: kubernetes-dashboard-certs
  351. mountPath: /certs
  352. # Create on-disk volume to store exec logs
  353. - mountPath: /tmp
  354. name: tmp-volume
  355. livenessProbe:
  356. httpGet:
  357. scheme: HTTPS
  358. path: /
  359. port: 8443
  360. initialDelaySeconds: 30
  361. timeoutSeconds: 30
  362. securityContext:
  363. allowPrivilegeEscalation: false
  364. readOnlyRootFilesystem: true
  365. runAsUser: 1001
  366. runAsGroup: 2001
  367. volumes:
  368. - name: kubernetes-dashboard-certs
  369. secret:
  370. secretName: kubernetes-dashboard-certs
  371. - name: tmp-volume
  372. emptyDir: {}
  373. serviceAccountName: kubernetes-dashboard
  374. nodeSelector:
  375. "kubernetes.io/os": linux
  376. # Comment the following tolerations if Dashboard must not be deployed on master
  377. tolerations:
  378. - key: node-role.kubernetes.io/master
  379. effect: NoSchedule
  380. ---
  381. kind: Service
  382. apiVersion: v1
  383. metadata:
  384. labels:
  385. k8s-app: dashboard-metrics-scraper
  386. name: dashboard-metrics-scraper
  387. namespace: kube-system
  388. spec:
  389. ports:
  390. - port: 8000
  391. targetPort: 8000
  392. selector:
  393. k8s-app: dashboard-metrics-scraper
  394. ---
  395. kind: Deployment
  396. apiVersion: apps/v1
  397. metadata:
  398. labels:
  399. k8s-app: dashboard-metrics-scraper
  400. name: dashboard-metrics-scraper
  401. namespace: kube-system
  402. spec:
  403. replicas: 1
  404. revisionHistoryLimit: 10
  405. selector:
  406. matchLabels:
  407. k8s-app: dashboard-metrics-scraper
  408. template:
  409. metadata:
  410. labels:
  411. k8s-app: dashboard-metrics-scraper
  412. spec:
  413. securityContext:
  414. seccompProfile:
  415. type: RuntimeDefault
  416. containers:
  417. - name: dashboard-metrics-scraper
  418. image: kubernetesui/metrics-scraper:v1.0.7
  419. imagePullPolicy: IfNotPresent
  420. ports:
  421. - containerPort: 8000
  422. protocol: TCP
  423. livenessProbe:
  424. httpGet:
  425. scheme: HTTP
  426. path: /
  427. port: 8000
  428. initialDelaySeconds: 30
  429. timeoutSeconds: 30
  430. volumeMounts:
  431. - mountPath: /tmp
  432. name: tmp-volume
  433. securityContext:
  434. allowPrivilegeEscalation: false
  435. readOnlyRootFilesystem: true
  436. runAsUser: 1001
  437. runAsGroup: 2001
  438. serviceAccountName: kubernetes-dashboard
  439. nodeSelector:
  440. "kubernetes.io/os": linux
  441. # Comment the following tolerations if Dashboard must not be deployed on master
  442. tolerations:
  443. - key: node-role.kubernetes.io/master
  444. effect: NoSchedule
  445. volumes:
  446. - name: tmp-volume
  447. emptyDir: {}

Import the dashboard images

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/dashboard-*.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-v2.4.0.tar && rm -f /tmp/dashboard-v2.4.0.tar"; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-metrics-scraper-v1.0.7.tar && rm -f /tmp/dashboard-metrics-scraper-v1.0.7.tar"; \
done

Check the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | egrep 'dashboard|metrics-scraper'"; \
done

Run the dashboard component in k8s

kubectl apply -f /approot1/k8s/tmp/service/dashboard.yaml

Check whether the dashboard pods are running

kubectl get pod -n kube-system | grep dashboard

The expected output looks something like this

dashboard-metrics-scraper-799d786dbf-v28pm   1/1     Running   0          2m55s
kubernetes-dashboard-9f8c8b989-rhb7z         1/1     Running   0          2m55s

Find the dashboard access port

The service does not pin a fixed access port for the dashboard, so you have to look it up yourself; alternatively, you can edit the yaml file to set a fixed NodePort.
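
One way to look it up; the grep simply narrows the output down to the dashboard service:

kubectl get svc -n kube-system | grep kubernetes-dashboard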

The expected output looks something like this

In my case, NodePort 30210 is mapped to the pod's port 443

kubernetes-dashboard        NodePort    10.88.127.68    <none>        443:30210/TCP            5m30s

Use the port you obtained to open the dashboard page, for example: https://192.168.91.19:30210
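
Before opening a browser, a quick reachability sketch from any machine that can reach the node; -k is needed because the dashboard serves a self-signed certificate, and a 200 (or redirect) code means the service is answering:

curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.91.19:30210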

Get the dashboard login token

Get the token secret name

kubectl get secrets -n kube-system | grep admin

The expected output looks something like this

admin-user-token-zvrst                           kubernetes.io/service-account-token   3      9m2s

Get the token content

kubectl get secrets -n kube-system admin-user-token-zvrst -o jsonpath={.data.token}|base64 -d

The expected output looks something like this

eyJhbGciOiJSUzI1NiIsImtpZCI6InA4M1lhZVgwNkJtekhUd3Vqdm9vTE1ma1JYQ1ZuZ3c3ZE1WZmJhUXR4bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2cnN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYTE3NTg1ZC1hM2JiLTQ0YWYtOWNhZS0yNjQ5YzA0YThmZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.K2o9p5St9tvIbXk7mCQCwsZQV11zICwN-JXhRv1hAnc9KFcAcDOiO4NxIeicvC2H9tHQBIJsREowVwY3yGWHj_MQa57EdBNWMrN1hJ5u-XzpzJ6JbQxns8ZBrCpIR8Fxt468rpTyMyqsO2UBo-oXQ0_ZXKss6X6jjxtGLCQFkz1ZfFTQW3n49L4ENzW40sSj4dnaX-PsmosVOpsKRHa8TPndusAT-58aujcqt31Z77C4M13X_vAdjyDLK9r5ZXwV2ryOdONwJye_VtXXrExBt9FWYtLGCQjKn41pwXqEfidT8cY6xbA7XgUVTr9miAmZ-jf1UeEw-nm8FOw9Bb5v6A
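
The two steps can also be collapsed into a single line; this simply feeds the secret name into the jsonpath extraction and should print the same token as above:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/^admin-user-token/{print $1}') -o jsonpath='{.data.token}' | base64 -d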

That wraps up the binary deployment of k8s v1.23.3 on containerd.
