Upgrading the Kubernetes Version with Kubeadm
Upgrade kubelet, kubeadm, and kubectl to the latest version (using the Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
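If you need a specific release rather than the latest, yum can install pinned package versions. A minimal sketch, assuming the target release is v1.12.2 (adjust the version to match your upgrade plan):

# Install a specific package version instead of the latest
$ yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2
# Pick up the new kubelet binary
$ systemctl daemon-reload && systemctl restart kubelet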
List the container images required by this kubeadm version
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
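To list the images for a specific Kubernetes release rather than the one this kubeadm binary defaults to, the --kubernetes-version flag can be passed (a sketch; flag behavior may vary by kubeadm release):

$ kubeadm config images list --kubernetes-version v1.12.2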
Query the available package versions
$ yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
Pull the container images
Create a script pull_k8s_images.sh with the following content:

#!/bin/bash
images=(kube-proxy:v1.12.2 kube-scheduler:v1.12.2 kube-controller-manager:v1.12.2
        kube-apiserver:v1.12.2 kubernetes-dashboard-amd64:v1.10.0
        etcd:3.2.24 coredns:1.2.2 pause:3.1)
for imageName in ${images[@]} ; do
    # Pull from the mirror, retag to the k8s.gcr.io name kubeadm expects, then drop the mirror tag
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done

Then run it:

$ sh pull_k8s_images.sh
Or:
#!/bin/bash
# Same approach using the gcrxio mirror, e.g. docker pull gcrxio/kubernetes-dashboard-amd64:v1.10.0
images=(kube-proxy:v1.13.0 kube-scheduler:v1.13.0 kube-controller-manager:v1.13.0
        kube-apiserver:v1.13.0 kubernetes-dashboard-amd64:v1.10.0
        etcd:3.2.24 coredns:1.2.6 pause:3.1)
for imageName in ${images[@]} ; do
    # Pull from the mirror, retag to k8s.gcr.io, then remove the mirror tag
    docker pull gcrxio/$imageName
    docker tag gcrxio/$imageName k8s.gcr.io/$imageName
    docker rmi gcrxio/$imageName
done
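After either script completes, you can verify that the retagged images are present locally (a quick sanity check, not part of the original procedure):

# List the retagged images; every component from the list above should appear
$ docker images | grep k8s.gcr.io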
Other mirror sources:
- https://github.com/mritd/gcrsync
- https://github.com/openthings/kubernetes-tools/blob/master/kubeadm/2-images/
- https://github.com/anjia0532/gcr.io_mirror
- https://www.jianshu.com/p/832bcd89bc07 (registry.cn-hangzhou.aliyuncs.com/google_containers)
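The last mirror can be used the same way as the scripts above. A minimal sketch for one image, assuming the names under registry.cn-hangzhou.aliyuncs.com/google_containers match the upstream k8s.gcr.io names:

# Pull from the Aliyun registry, retag to k8s.gcr.io, then drop the mirror tag
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.2
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
$ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.2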
Check what the upgrade will involve
$ kubeadm upgrade plan
Upgrading worker nodes
Upgrade the kubelet, kubeadm, and kubectl packages to the matching version on each node and pull the images for that version; a sketch of the per-node procedure follows.
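A minimal sketch of upgrading one node, assuming a placeholder node name $NODE and target version v1.12.2 (drain and uncordon are standard practice around node maintenance; kubeadm upgrade node config is the v1.12-era command for refreshing the kubelet configuration):

# On the master: drain the node so workloads move elsewhere
$ kubectl drain $NODE --ignore-daemonsets

# On the node: upgrade the packages
$ yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2

# On the node: refresh the kubelet configuration and restart kubelet
$ kubeadm upgrade node config --kubelet-version v1.12.2
$ systemctl restart kubelet

# On the master: allow scheduling on the node again
$ kubectl uncordon $NODE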
Upgrading the master node
$ kubeadm upgrade apply v1.12.2

A full transcript of 'kubeadm upgrade plan' followed by 'kubeadm upgrade apply' on the master:
[root@kubernetes-master opt]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.12.1
[upgrade/versions] kubeadm version: v1.12.2
[upgrade/versions] Latest stable version: v1.12.2
[upgrade/versions] Latest version in the v1.12 series: v1.12.2

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet              3 x v1.12.1   v1.12.2

Upgrade to the latest version in the v1.12 series:

COMPONENT            CURRENT       AVAILABLE
API Server v1.12.1 v1.12.2
Controller Manager v1.12.1 v1.12.2
Scheduler v1.12.1 v1.12.2
Kube Proxy v1.12.1 v1.12.2
CoreDNS 1.2.2 1.2.2
Etcd                 3.2.24        3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.2

_____________________________________________________________________

[root@kubernetes-master opt]# kubeadm upgrade apply v1.12.2
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.2"
[upgrade/versions] Cluster version: v1.12.1
[upgrade/versions] kubeadm version: v1.12.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.2"...
Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
Static pod: kube-apiserver-021rjsh216048s hash: 37239b196c489a9b62de0bd5d3fa7fab
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
Static pod: kube-controller-manager-021rjsh216048s hash: c30d65a55ad9ffe79d5d36185a605db2
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
Static pod: kube-scheduler-021rjsh216048s hash: ee7b1077c61516320f4273309e9b4690
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "021rjsh216048s" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Check node and pod status
$ systemctl enable kubelet && systemctl restart kubelet   # restart kubelet on each node in turn
$ kubectl get nodes
$ kubectl get pod --all-namespaces -o wide
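To confirm the upgrade took effect, check the reported versions; the VERSION column should read v1.12.2 on every node (a quick sanity check, not part of the original procedure):

# Server/client versions and per-node kubelet versions
$ kubectl version --short
$ kubectl get nodes -o wide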
References:
https://my.oschina.net/u/2306127/blog/2231184
https://github.com/kubernetes/kubeadm/issues/1054