Deploying a Kubernetes Cluster with kubeadm (v1.18.0)

Environment

  IP address   Hostname      Role
  10.0.0.63    k8s-master1   master1
  10.0.0.65    k8s-node1     node1
  10.0.0.66    k8s-node2     node2

1. Overview

kubeadm is the official community tool for quickly deploying a Kubernetes cluster.

The environment deployed here is suitable for learning and experimenting with k8s features and related software.

2. Installation requirements

  1. Three clean CentOS virtual machines, version 7.x or later
  2. Machine specs: at least 2 CPUs and 4 GB RAM each (x3)
  3. Network connectivity between all servers
  4. Swap disabled

3. Learning goal

Learn to install a cluster with kubeadm, as a basis for studying Kubernetes.

4. Environment preparation

  # 1. Stop and disable the firewall
  systemctl stop firewalld
  systemctl disable firewalld
  # 2. Disable SELinux
  sed -i 's/enforcing/disabled/' /etc/selinux/config
  setenforce 0
  # 3. Turn off swap
  swapoff -a
  # ...and make it persistent at boot (commenting out the swap line in /etc/fstab also works)
  echo "swapoff -a" >>/etc/profile
  source /etc/profile
  # 4. Host planning
  cat > /etc/hosts << EOF
  10.0.0.63 k8s-master1
  10.0.0.65 k8s-node1
  10.0.0.66 k8s-node2
  EOF
  # 5. Set the hostname (adjust per node):
  hostnamectl set-hostname k8s-master1
  bash
  # 6. Time synchronization
  yum install -y ntpdate
  ntpdate time.windows.com
  # Let bridged traffic pass through iptables
  cat > /etc/sysctl.d/k8s.conf << EOF
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system
  # 7. Periodic time sync via cron
  echo '*/5 * * * * /usr/sbin/ntpdate -u ntp.api.bz' >>/var/spool/cron/root
  systemctl restart crond.service
  crontab -l
  # Everything above can be copy-pasted and run as-is, except the hostname, which must be set per node
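
After running the steps above it is easy to miss a node where one setting did not stick. A minimal verification sketch: `check_value` compares a file's content against an expected value. It is demonstrated on a scratch file here, since the real /proc paths only exist on a configured node; the commented calls show how it would be used there.

```shell
# check_value FILE EXPECTED -> prints "ok: ..." or "bad: ..."
check_value() {
    actual=$(cat "$1" 2>/dev/null)
    if [ "$actual" = "$2" ]; then
        echo "ok: $1 = $2"
    else
        echo "bad: $1 is '$actual', expected '$2'"
    fi
}

# On a real node you would check the kernel knobs set above:
#   check_value /proc/sys/net/bridge/bridge-nf-call-iptables 1
#   check_value /proc/sys/net/bridge/bridge-nf-call-ip6tables 1
#   swapon --show   # should print nothing once swap is off

# Demonstration on a scratch file:
f=$(mktemp); echo 1 > "$f"
check_value "$f" 1
```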

5. Install docker [all nodes]

  # Add yum repositories
  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  yum clean all
  yum install -y bash-completion.noarch
  # Install a specific version
  yum -y install docker-ce-18.09.9-3.el7
  # ...or list the available versions first
  yum list docker-ce --showduplicates | sort -r
  # Start docker
  systemctl enable docker
  systemctl start docker
  systemctl status docker

6. docker配置cgroup驱动[所有节点]

  1. rm -f /etc/docker/*
  2. sudo mkdir -p /etc/docker
  3. sudo tee /etc/docker/daemon.json <<-'EOF'
  4. {
  5. "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  6. "exec-opts": ["native.cgroupdriver=systemd"]
  7. }
  8. EOF
  9. sudo systemctl daemon-reload
  10. sudo systemctl restart docker
  11. 拉取flanel镜像:
  12. docker pull lizhenliang/flannel:v0.11.0-amd64
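
A typo in /etc/docker/daemon.json leaves docker unable to start after the restart, so it is worth validating the JSON first. A small sketch, checking a copy written to a temp file (assumption: python3 is available on the host; jq would work just as well):

```shell
# Write the same config to a temp path and validate it before touching docker
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -c 'import json,sys; json.load(open(sys.argv[1]))' "$cfg"; then
    echo "daemon.json syntax OK"
else
    echo "daemon.json is invalid JSON" >&2
fi
```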

7. Registry mirror acceleration [all nodes]

  curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
  systemctl restart docker
  # With too many mirror entries, pulls can start failing; if that happens,
  # remove one (keeping a .bak copy) and retry.
  # Keep: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
  # Aliyun's own accelerator can be configured here:
  # https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors

8. Configure the kubernetes repository [all nodes]

  cat > /etc/yum.repos.d/kubernetes.repo << EOF
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

9. Install kubeadm, kubelet and kubectl [all nodes]

  yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
  systemctl enable kubelet

10. Deploy the Kubernetes master [master, 10.0.0.63]

  kubeadm init \
    --apiserver-advertise-address=10.0.0.63 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
  # After a successful init, set up kubectl access:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
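
The join command only appears once, at the end of the init output, and is easy to lose. A sketch for capturing it automatically (assumption: init output was saved, e.g. `kubeadm init ... | tee kubeadm-init.log`; the sample text below stands in for a real log):

```shell
# Extract the two-line "kubeadm join ..." command from saved init output
# and collapse it onto one line.
extract_join() {
    grep -A1 '^kubeadm join' "$1" | tr -d '\\\n' | tr -s ' '
}

# Sample init output (stand-in for the real kubeadm-init.log):
log=$(mktemp)
cat > "$log" <<'EOF'
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.63:6443 --token 2cdgi6.79j20fhly6xpgfud \
    --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
EOF
extract_join "$log"
```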

The init output contains the join token:

  kubeadm join 10.0.0.63:6443 --token 2cdgi6.79j20fhly6xpgfud \
      --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4

Save this token; it is needed later when joining nodes.

Note the warnings and errors kubeadm init may print:

  W0507 00:43:52.681429 3118 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [init] Using Kubernetes version: v1.18.0
  [preflight] Running pre-flight checks
      [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  error execution phase preflight: [preflight] Some fatal errors occurred:
      [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
  [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
  To see the stack trace of this error execute with --v=5 or higher

10.1 Troubleshooting

Error 1: the docker cgroup driver must be systemd. Add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json (see section 6).

Error 2: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

This means the VM is under-provisioned; give it at least 2 CPUs and 4 GB RAM.

Error 3: joining the cluster fails:

  [root@k8s-master2 yum.repos.d]# kubeadm join 10.0.0.63:6443 --token q8bfij.fipmsxdgv8sgcyq4 \
  >     --discovery-token-ca-cert-hash sha256:26fc15b6e52385074810fdbbd53d1ba23269b39ca2e3ec3bac9376ed807b595c
  W0507 01:20:26.246981 26853 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
  [preflight] Running pre-flight checks
  error execution phase preflight: [preflight] Some fatal errors occurred:
      [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
      [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
      [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher
  (retrying the join fails the same way)
  Fix:
  Run `kubeadm reset` on the node, then join again.

10.2 Configure the kubectl command-line tool [master]:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # Get node status
  [root@k8s-master1 ~]# kubectl get nodes
  NAME          STATUS     ROLES    AGE     VERSION
  k8s-master1   NotReady   master   2m59s   v1.18.0
  k8s-node1     NotReady   <none>   86s     v1.18.0
  k8s-node2     NotReady   <none>   85s     v1.18.0
  # Seeing the status of the other hosts confirms the cluster is formed. A
  # separate k8s-master2 host was deliberately not joined; it is reserved
  # for a future multi-master setup.

10.3 Install the network plugin [master]

  # [Run on the master only] Upload kube-flannel.yaml, then run:
  kubectl apply -f kube-flannel.yaml
  kubectl get pods -n kube-system
  # Download:
  # https://www.chenleilei.net/soft/k8s/kube-flannel.yaml
  # [All pods must reach Running, otherwise something is wrong.]
  [root@k8s-master1 ~]# kubectl get pods -n kube-system
  NAME                                  READY   STATUS    RESTARTS   AGE
  coredns-7ff77c879f-5dq4s              1/1     Running   0          13m
  coredns-7ff77c879f-v68pc              1/1     Running   0          13m
  etcd-k8s-master1                      1/1     Running   0          13m
  kube-apiserver-k8s-master1            1/1     Running   0          13m
  kube-controller-manager-k8s-master1   1/1     Running   0          13m
  kube-flannel-ds-amd64-2ktxw           1/1     Running   0          3m45s
  kube-flannel-ds-amd64-fd2cb           1/1     Running   0          3m45s
  kube-flannel-ds-amd64-hb2zr           1/1     Running   0          3m45s
  kube-proxy-4vt8f                      1/1     Running   0          13m
  kube-proxy-5nv5t                      1/1     Running   0          12m
  kube-proxy-9fgzh                      1/1     Running   0          12m
  kube-scheduler-k8s-master1            1/1     Running   0          13m
  [root@k8s-master1 ~]# kubectl get nodes
  NAME          STATUS   ROLES    AGE   VERSION
  k8s-master1   Ready    master   14m   v1.18.0
  k8s-node1     Ready    <none>   12m   v1.18.0
  k8s-node2     Ready    <none>   12m   v1.18.0

11. Join node1 and node2 to the master

Joining node1 and node2 to the cluster:

  # Run the following on each node to be joined:
  kubeadm join 10.0.0.63:6443 --token fs0uwh.7yuiawec8tov5igh \
      --discovery-token-ca-cert-hash sha256:471442895b5fb77174103553dc13a4b4681203fbff638e055ce244639342701d
  # This command was printed during master init. Note that the CNI network
  # plugin must be configured first.
  # After joining, check from the master:
  [root@k8s-master1 docker]# kubectl get nodes
  NAME          STATUS   ROLES    AGE   VERSION
  k8s-master1   Ready    master   14m   v1.18.0
  k8s-node1     Ready    <none>   12m   v1.18.0
  k8s-node2     Ready    <none>   12m   v1.18.0

12. Creating and listing tokens

  # Tokens are valid for 24 hours by default; once expired they cannot be
  # used. To generate a new one, run on the master:
  kubeadm token create
  kubeadm token list
  # Compute the CA certificate hash:
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  # Result:
  # 3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
  # Join with the new token:
  kubeadm join 10.0.0.63:6443 --discovery-token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924
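
The value passed to --discovery-token-ca-cert-hash is simply the SHA-256 digest of the CA's DER-encoded public key, which is what the openssl pipeline above computes. On a real master you would point it at /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate so the pipeline can be run end to end anywhere:

```shell
# Generate a demo CA cert (stand-in for /etc/kubernetes/pki/ca.crt)
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# Same pipeline as in the section above: pubkey -> DER -> sha256 hex
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```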

13. Install the dashboard

  # After deploying the dashboard manifest (recommended.yaml, revisited in
  # section 15), check the services:
  [root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
  NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
  dashboard-metrics-scraper   ClusterIP   10.1.94.43     <none>        8000/TCP        7m58s
  kubernetes-dashboard        NodePort    10.1.187.162   <none>        443:30001/TCP   7m58s

13.1 Access test

Port 30001 on any cluster node (10.0.0.63, 10.0.0.65 or 10.0.0.66) serves the dashboard page.

13.2 Get a dashboard token: create a service account and bind it to the default cluster-admin cluster role

  kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
  kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
  kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
  # Copy the printed token into the token field of the dashboard login page
  # and choose token login.

14. Verify the cluster works

  Three aspects of cluster health to verify:
  1. Applications can be deployed
  2. The cluster network works
  3. In-cluster DNS resolution works

14.1 Verify application deployment and log queries

  # Create an nginx deployment
  kubectl create deployment k8s-status-checke --image=nginx
  # Expose port 80
  kubectl expose deployment k8s-status-checke --port=80 --target-port=80 --type=NodePort
  # Delete the deployment
  kubectl delete deployment k8s-status-checke
  # Query pod logs:
  [root@k8s-master1 ~]# kubectl logs -f nginx-f89759699-m5k5z

14.2 Verify the cluster network

  # 1. Get an application's pod IP
  [root@k8s-master1 ~]# kubectl get pods -o wide
  NAME        READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED   READINESS
  pod/nginx   1/1     Running   0          25h   10.244.2.18   k8s-node2   <none>      <none>
  # 2. Ping that pod IP from any node
  [root@k8s-node1 ~]# ping 10.244.2.18
  PING 10.244.2.18 (10.244.2.18) 56(84) bytes of data.
  64 bytes from 10.244.2.18: icmp_seq=1 ttl=63 time=2.63 ms
  64 bytes from 10.244.2.18: icmp_seq=2 ttl=63 time=0.515 ms
  # 3. Curl the pod
  [root@k8s-master1 ~]# curl -I 10.244.2.18
  HTTP/1.1 200 OK
  Server: nginx/1.17.10
  Date: Sun, 10 May 2020 13:19:02 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
  Connection: keep-alive
  ETag: "5e95c66e-264"
  Accept-Ranges: bytes
  # 4. Check the access log
  [root@k8s-master1 ~]# kubectl logs -f nginx
  10.244.1.0 - - [10/May/2020:13:14:25 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" "-"

14.3 Verify in-cluster DNS resolution

  # Check DNS:
  [root@k8s-master1 ~]# kubectl get pods -n kube-system
  NAME                                  READY   STATUS    RESTARTS   AGE
  coredns-7ff77c879f-5dq4s              1/1     Running   1          4d   # dns occasionally breaks
  coredns-7ff77c879f-v68pc              1/1     Running   1          4d   # dns occasionally breaks
  etcd-k8s-master1                      1/1     Running   4          4d
  kube-apiserver-k8s-master1            1/1     Running   3          4d
  kube-controller-manager-k8s-master1   1/1     Running   3          4d
  kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
  kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
  kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
  kube-proxy-4vt8f                      1/1     Running   4          4d
  kube-proxy-5nv5t                      1/1     Running   2          4d
  kube-proxy-9fgzh                      1/1     Running   2          4d
  kube-scheduler-k8s-master1            1/1     Running   4          4d
  # When dns breaks, the fix is:
  # 1. Export the coredns deployment to a yaml file
  kubectl get deploy coredns -n kube-system -o yaml >coredns.yaml
  # 2. Delete coredns
  kubectl delete -f coredns.yaml
  # Check:
  [root@k8s-master1 ~]# kubectl get pods -n kube-system
  NAME                                  READY   STATUS    RESTARTS   AGE
  etcd-k8s-master1                      1/1     Running   4          4d
  kube-apiserver-k8s-master1            1/1     Running   3          4d
  kube-controller-manager-k8s-master1   1/1     Running   3          4d
  kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
  kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
  kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
  kube-proxy-4vt8f                      1/1     Running   4          4d
  kube-proxy-5nv5t                      1/1     Running   2          4d
  kube-proxy-9fgzh                      1/1     Running   2          4d
  kube-scheduler-k8s-master1            1/1     Running   4          4d
  # The coredns pods are now gone
  # 3. Recreate coredns
  kubectl apply -f coredns.yaml
  [root@k8s-master1 ~]# kubectl get pods -n kube-system
  NAME                                  READY   STATUS    RESTARTS   AGE
  coredns-7ff77c879f-5mmjg              1/1     Running   0          13s
  coredns-7ff77c879f-t74th              1/1     Running   0          13s
  etcd-k8s-master1                      1/1     Running   4          4d
  kube-apiserver-k8s-master1            1/1     Running   3          4d
  kube-controller-manager-k8s-master1   1/1     Running   3          4d
  kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
  kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
  kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
  kube-proxy-4vt8f                      1/1     Running   4          4d
  kube-proxy-5nv5t                      1/1     Running   2          4d
  kube-proxy-9fgzh                      1/1     Running   2          4d
  kube-scheduler-k8s-master1            1/1     Running   4          4d
  # Recheck the logs:
  # coredns-7ff77c879f-5mmjg:
  [root@k8s-master1 ~]# kubectl logs coredns-7ff77c879f-5mmjg -n kube-system
  .:53
  [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  CoreDNS-1.6.7
  linux/amd64, go1.13.6, da7f65b
  # coredns-7ff77c879f-t74th:
  [root@k8s-master1 ~]# kubectl logs coredns-7ff77c879f-t74th -n kube-system
  .:53
  [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  CoreDNS-1.6.7
  linux/amd64, go1.13.6, da7f65b
  # Start a container in k8s to verify dns
  [root@k8s-master1 ~]# kubectl run -it --rm --image=busybox:1.28.4 sh
  / # nslookup kubernetes
  Server:    10.1.0.10
  Address 1: 10.1.0.10 kube-dns.kube-system.svc.cluster.local
  Name:      kubernetes
  Address 1: 10.1.0.1 kubernetes.default.svc.cluster.local
  # nslookup successfully resolving "kubernetes" shows dns is working

15. Handling cluster certificate issues [for kubeadm-deployed clusters]

  # 1. Delete the default secret and create a new one from a self-signed certificate
  kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
  kubectl create secret generic kubernetes-dashboard-certs \
    --from-file=/etc/kubernetes/pki/apiserver.key --from-file=/etc/kubernetes/pki/apiserver.crt -n kubernetes-dashboard
  # For binary-deployed clusters, adjust the certificate paths to wherever
  # the certs were stored at install time.
  # 2. After the secret is in place, edit the dashboard manifest and redeploy
  wget https://www.chenleilei.net/soft/k8s/recommended.yaml
  vim recommended.yaml
  # Find "kind: Deployment", then search for "args"; you will see these two lines:
  #   - --auto-generate-certificates
  #   - --namespace=kubernetes-dashboard
  # Insert the two certificate lines between them:
  #   - --auto-generate-certificates
  #   - --tls-key-file=apiserver.key
  #   - --tls-cert-file=apiserver.crt
  #   - --namespace=kubernetes-dashboard
  # [A pre-modified copy is available: wget https://www.chenleilei.net/soft/k8s/dashboard.yaml]
  # 3. Re-apply recommended.yaml
  kubectl apply -f recommended.yaml
  # This triggers a rolling update; reopening the browser shows a valid
  # certificate with no security warning.
  [root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
  NAME                                         READY   STATUS    RESTARTS   AGE
  dashboard-metrics-scraper-694557449d-r9h5r   1/1     Running   0          2d1h
  kubernetes-dashboard-5d8766c7cc-trdsv        1/1     Running   0          93s    <-- rolling update

Troubleshooting:

Problem 1: a k8s-node fails to join:

  W0315 22:16:20.123204 5795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
  [preflight] Running pre-flight checks
      [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  error execution phase preflight: [preflight] Some fatal errors occurred:
      [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher
  # Fix:
  echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
  # Then join again:
  kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6 --discovery-token-ca-cert-hash sha256:ca1aa9cb753a26d0185e3df410cad09d8ec4af4d7432d127f503f41bc2b14f2a
  # The token here is generated with kubeadm on the master (see section 12).

Problem 2: the dashboard web page is not reachable:

  # Rebuild the dashboard
  # Delete:
  kubectl delete -f dashboard.yaml
  # Then recreate:
  kubectl create -f dashboard.yaml
  # Create the account:
  kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
  # Look up the token:
  kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
  # Reopen the page and log in

Problem 3: dashboard deployment fails

  Possibly a network issue; switch to a different network (e.g. a VPN) and redeploy.

16. Deploy an nginx in k8s

  [root@k8s-master1 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
  service/nginx exposed
  [root@k8s-master1 ~]# kubectl get pod,svc
  NAME                        READY   STATUS             RESTARTS   AGE
  pod/nginx-f89759699-dnfmg   0/1     ImagePullBackOff   0          3m41s
  # ImagePullBackOff error:
  # check the pod events: kubectl describe pod nginx-f89759699-dnfmg
  # Result:
  Normal   Pulling  3m27s (x4 over 7m45s)  kubelet, k8s-node2  Pulling image "nginx"
  Warning  Failed   2m55s (x2 over 6m6s)   kubelet, k8s-node2  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/library/nginx/manifests/sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422: net/http: TLS handshake timeout
  # The pull is failing at the docker level, so the registry mirror needs fixing
  [root@k8s-master1 ~]# cat /etc/docker/daemon.json
  {
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
  }
  # Try docker pull nginx directly on one of the nodes:
  # ...which reproduces the error:
  [root@k8s-node1 ~]# docker pull nginx
  Using default tag: latest
  latest: Pulling from library/nginx
  54fec2fa59d0: Pulling fs layer
  4ede6f09aefe: Pulling fs layer
  f9dc69acb465: Pulling fs layer
  Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout   # the mirror had not been changed
  # After fixing the mirror:
  [root@k8s-master1 ~]# docker pull nginx
  Using default tag: latest
  latest: Pulling from library/nginx
  54fec2fa59d0: Pull complete
  4ede6f09aefe: Pull complete
  f9dc69acb465: Pull complete
  Digest: sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12
  Status: Downloaded newer image for nginx:latest
  docker.io/library/nginx:latest
  # Run again:
  kubectl delete pod,svc nginx
  kubectl create deployment nginx --image=nginx
  kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
  # Summary of this image-pull troubleshooting process:
  # 1. nginx deployment fails; check with: kubectl get pod,svc
  # 2. Check the pod events: Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
  #    ...net/http: TLS handshake timeout  [this shows the registry mirror was never switched]
  # 3. Point the docker mirror at Aliyun, then restart docker:
  cat /etc/docker/daemon.json
  {
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
  }
  systemctl restart docker.service
  # 4. docker pull nginx now succeeds
  # 5. Remove the downloaded nginx image: docker image rm -f [image name]
  # 6. Delete the failed deployment from k8s: kubectl delete deployment nginx
  # 7. Recreate it: kubectl create deployment nginx --image=nginx
  # 8. Re-expose the application: kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

17. Exposing an application

  # 1. Create the deployment
  kubectl create deployment nginx --image=nginx
  # 2. Expose it
  kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

18. Nice-to-have: kubectl shell completion

  yum install -y bash-completion
  source /usr/share/bash-completion/bash_completion
  source <(kubectl completion bash)
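
The `source` lines above only apply to the current session. A sketch for making the setup permanent with an idempotent append helper, so re-running the setup script never piles up duplicate lines in the rc file (demonstrated on a scratch file; on a real node you would pass $HOME/.bashrc):

```shell
# ensure_line FILE LINE -> append LINE to FILE unless it is already present
ensure_line() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

rc=$(mktemp)
ensure_line "$rc" 'source <(kubectl completion bash)'
ensure_line "$rc" 'source <(kubectl completion bash)'   # second call is a no-op
grep -c 'kubectl completion' "$rc"                      # -> 1
```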

19. Recap of issues in this section:

  # 1. Handling token expiry
  # Tokens expire 24 hours after creation; an expired token blocks logging in
  # to the cluster dashboard, so a new one must be generated:
  kubeadm token create
  kubeadm token list
  # Query the CA certificate hash:
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  # 3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
  # Then join a new server with the fresh token:
  kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6 --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
  # 2. Getting the dashboard login token
  kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
  # 3. Image-pull failure troubleshooting process
  # 1) nginx deployment fails; check with: kubectl get pod,svc
  # 2) Check the pod events: Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
  #    ...net/http: TLS handshake timeout  [this shows the registry mirror was never switched]
  # 3) Point the docker mirror at Aliyun, then restart docker:
  cat /etc/docker/daemon.json
  {
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
  }
  systemctl restart docker.service
  # 4) docker pull nginx now succeeds
  # 5) Remove the downloaded nginx image: docker image rm -f [image name]
  # 6) Delete the failed deployment from k8s: kubectl delete deployment nginx
  # 7) Recreate it: kubectl create deployment nginx --image=nginx
  # 8) Re-expose the application: kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

20. YAML attachments [save with a .yaml suffix]

https://www.chenleilei.net/soft/kubeadm快速部署一个Kubernetes集群yaml.zip
