Upgrading a Kubernetes Cluster

Component upgrade order:
upgrade the primary control-plane node, then the other control-plane nodes, then the worker nodes.

First back up etcd on the primary control-plane node and check the current version. To stay within the supported compatibility skew, do not skip minor releases: kubeadm upgrades one minor version at a time (within the same minor, any patch version is fine).
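The etcd backup can be taken with an etcdctl snapshot. A minimal sketch: the certificate paths below are kubeadm's default locations and the backup file name is an assumption, so adjust both for your environment:

```shell
# Take a point-in-time snapshot of etcd on the control-plane node.
# Requires etcdctl v3; pki paths are kubeadm defaults (assumed here).
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

If the upgrade has to be rolled back, the snapshot can later be restored with etcdctl snapshot restore.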
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
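Given the versions above, the allowed skew can be checked mechanically. A small sketch (the skew_ok helper is hypothetical, not part of any Kubernetes tooling) that accepts a target at most one minor release ahead of the current version:

```shell
# Extract the minor component of a version string, e.g. v1.21.0 -> 21
minor() { echo "$1" | sed 's/^v//' | cut -d. -f2; }

# skew_ok CURRENT TARGET: succeed if TARGET is in the same minor release
# or exactly one minor release ahead (no downgrades, no skipped minors).
skew_ok() {
  local cur tgt
  cur=$(minor "$1")
  tgt=$(minor "$2")
  [ $((tgt - cur)) -ge 0 ] && [ $((tgt - cur)) -le 1 ]
}

skew_ok v1.21.0 v1.21.14 && echo "patch upgrade: ok"
skew_ok v1.21.0 v1.22.0  && echo "one minor ahead: ok"
skew_ok v1.21.0 v1.23.0  || echo "skipping v1.22: not supported"
```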
Look up the available versions:
[root@master ~]# yum list --showduplicates kubeadm
kubeadm.x86_64 1.21.0-0 @kubernetes
Available Packages
kubeadm.x86_64 1.6.0-0 kubernetes
kubeadm.x86_64 1.6.1-0 kubernetes
... (releases 1.6.2 through 1.21.0 omitted for brevity) ...
kubeadm.x86_64 1.21.1-0 kubernetes
kubeadm.x86_64 1.21.2-0 kubernetes
... (releases 1.21.3 through 1.24.2 omitted for brevity) ...
kubeadm.x86_64 1.24.3-0 kubernetes
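With a listing this long, a shell pipeline can pick out the newest patch release of a given series. A sketch (latest_patch is a hypothetical helper, not a yum feature) built on version-aware sorting:

```shell
# latest_patch SERIES: read 'yum list --showduplicates' output on stdin and
# print the newest SERIES.x-y package version (sort -V orders versions numerically).
latest_patch() {
  grep -o "${1}\.[0-9]*-[0-9]*" | sort -V | tail -n 1
}

printf 'kubeadm.x86_64 1.21.0-0 kubernetes
kubeadm.x86_64 1.21.14-0 kubernetes
kubeadm.x86_64 1.21.2-0 kubernetes
' | latest_patch 1.21   # -> 1.21.14-0
```

Against the real repository this would be used as: yum list --showduplicates kubeadm | latest_patch 1.21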
Upgrade kubeadm. The installed version is 1.21.0; we upgrade it to 1.21.1:
[root@master ~]# yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
No package kubeadm- available.
No package 1.21.1-0 available.
Error: Nothing to do
The first attempt fails to resolve the package (judging by the split package name in the error, the command line was mangled as typed). Rerunning the install works:
[root@master ~]# yum install -y kubeadm-1.21.1-0
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be updated
---> Package kubeadm.x86_64 0:1.21.1-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch         Version         Repository        Size
================================================================================
Updating:
 kubeadm       x86_64       1.21.1-0        kubernetes        9.5 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 9.5 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm | 9.5 MB 00:00:32
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.21.1-0.x86_64    1/2
  Cleanup    : kubeadm-1.21.0-0.x86_64    2/2
  Verifying  : kubeadm-1.21.1-0.x86_64    1/2
  Verifying  : kubeadm-1.21.0-0.x86_64    2/2

Updated:
  kubeadm.x86_64 0:1.21.1-0

Complete!
Evict the pods from the node that is about to be upgraded:
[root@master ~]# kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-6jk49, kube-system/kube-proxy-4r78x
evicting pod kube-system/coredns-545d6fc579-m2fc6
evicting pod kube-system/coredns-545d6fc579-c9889
pod/coredns-545d6fc579-c9889 evicted
pod/coredns-545d6fc579-m2fc6 evicted
node/master evicted
Checking the nodes again shows that the master is now unschedulable:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready,SchedulingDisabled control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Check whether the cluster can be upgraded and fetch the available target versions:
[root@master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
I0813 16:46:29.019987 37514 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.21 series: v1.21.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.21.0 v1.21.14

Upgrade to the latest version in the v1.21 series:

COMPONENT CURRENT TARGET
kube-apiserver v1.21.0 v1.21.14
kube-controller-manager v1.21.0 v1.21.14
kube-scheduler v1.21.0 v1.21.14
kube-proxy v1.21.0 v1.21.14
CoreDNS v1.8.0 v1.8.0
etcd 3.4.13-0 3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.21.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
The plan output above prints the command for applying the upgrade. Running it with the suggested target version:
[root@master ~]# kubeadm upgrade apply v1.21.14
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.14"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

	- Specified version to upgrade to "v1.21.14" is higher than the kubeadm version "v1.21.1". Upgrade kubeadm first using the tool you used to install kubeadm

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
This fails because the requested version (v1.21.14) is higher than the installed kubeadm version (v1.21.1): the target version must not exceed the kubeadm version. Retry with v1.21.1:
[root@master ~]# kubeadm upgrade apply v1.21.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.1"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.1"...
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 49a143d1fd6753374a2970b880b3fe9a
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665603911"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: f188f9c5f1c338f870b0b13f31d3d667
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
(line repeated while waiting for the kubelet to restart the component)
Static pod: kube-controller-manager-master hash: 7a467309ea170e9a8ab0a38462a67455
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
(line repeated while waiting for the kubelet to restart the component)
Static pod: kube-scheduler-master hash: f0246658aa97264fd2ea2c6481d65a9d
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The upgrade itself succeeded, but the cluster is not fully healthy: the kube-scheduler.yaml and kube-controller-manager.yaml manifests were regenerated during the upgrade, and because both components now have their insecure ports disabled (--port=0), the scheduler and controller-manager health endpoints are unreachable. Check the component status:
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
Comment out the --port=0 flag in kube-scheduler.yaml and kube-controller-manager.yaml (under /etc/kubernetes/manifests/), then restart the kubelet; the cluster status returns to healthy:
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
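For reference, after the fix the relevant section of each manifest looks roughly like this (a sketch of /etc/kubernetes/manifests/kube-scheduler.yaml; the surrounding flags are typical kubeadm defaults, and only the commented-out line is the actual change):

```yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    # - --port=0    # commented out so the insecure health port is served again
```

The kubelet watches the manifests directory, so once the file is saved (and the kubelet restarted) the static pod comes back up with the new flags.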
Make the master schedulable again:
[root@master ~]# kubectl uncordon master
node/master uncordoned
Next, upgrade kubectl and kubelet:
[root@master ~]# yum install -y kubectl-1.21.1-0 kubelet-1.21.1-0 --disableexcludes=kubernetes
After restarting the kubelet, the node reports the new version:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart kubelet
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.1
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Worker nodes follow the same sequence of steps, except that kubeadm upgrade node is run on each worker instead of kubeadm upgrade apply.
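The per-worker sequence can be sketched as follows (node1 and version 1.21.1 are examples; the drain/uncordon steps run on the control-plane node, the rest on the worker):

```shell
# On the control plane: evict workloads from the worker.
kubectl drain node1 --ignore-daemonsets

# On node1: upgrade kubeadm, then apply the node upgrade
# (workers use 'kubeadm upgrade node', not 'kubeadm upgrade apply').
yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
kubeadm upgrade node

# On node1: upgrade kubelet and kubectl, then restart the kubelet.
yum install -y kubelet-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet

# Back on the control plane: make the worker schedulable again.
kubectl uncordon node1
```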
JAVA虚拟机运行时数据区域 1.程序计数器 1)它可以看做是当前线程执行的字节代码的行指示器,通过改变计数器的值来决定下一步执行的代码 2)它是线程私有的,每个线程都有自己的程序计数器(JAVA ...