Handling pod errors after a K8s PVC is deleted
The node's kubelet logs the following error over and over:
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.381558 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.581422 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted
(the same message repeats roughly every 200 ms for as long as the pod keeps referencing the terminating PVC)
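Why this happens: the PVC carries a deletionTimestamp, but Kubernetes' storage-object-in-use protection (the kubernetes.io/pvc-protection finalizer) blocks the actual removal while a pod still mounts it, and the kubelet's desired-state-of-world populator refuses to process volumes that reference a terminating PVC, hence the repeating error. A minimal sketch for pulling the namespace and PVC name out of one of the log lines above (the kubectl checks to run with them are shown as comments):

```shell
# The log line is copied verbatim from the kubelet journal above.
line='Jun 19 17:15:18 node1 kubelet[1722]: E0619 17:15:18.381558 1722 desired_state_of_world_populator.go:312] Error processing volume "manual165" for pod "gr8333e7-0_672f06d5992f4b4580ae04289e33dde4(311dd5e7-1ce9-484d-9862-c7e60eeba6e5)": error processing PVC 672f06d5992f4b4580ae04289e33dde4/manual165-gr8333e7-0: PVC is being deleted'

# "error processing PVC <namespace>/<pvc-name>:" -> grab the <ns>/<pvc> pair.
ref=$(printf '%s\n' "$line" | sed -n 's/.*error processing PVC \([^:]*\):.*/\1/p')
ns=${ref%%/*}
pvc=${ref#*/}
echo "namespace=$ns pvc=$pvc"

# With those in hand, confirm the PVC really is stuck terminating:
#   kubectl get pvc "$pvc" -n "$ns" -o jsonpath='{.metadata.deletionTimestamp}'
#   kubectl get pvc "$pvc" -n "$ns" -o jsonpath='{.metadata.finalizers}'
```

A non-empty deletionTimestamp together with the kubernetes.io/pvc-protection finalizer confirms the PVC is stuck in Terminating until every pod that mounts it is gone.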
Resolution
Find the pod that still mounts the PVC, then delete the pod's owning controller. Once no pod references the PVC, the pvc-protection finalizer is released and the pending deletion can complete.
[root@master ~]# kubectl get po -A|grep gr8333e7
672f06d5992f4b4580ae04289e33dde4 gr8333e7-0
[root@master ~]# kubectl describe po gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4
Name: gr8333e7-0
Namespace: 672f06d5992f4b4580ae04289e33dde4
Priority: 0
Node: node1/172.31.200.68
Start Time: Fri, 19 Jun 2020 14:54:34 +0800
Labels: controller-revision-hash=gr8333e7-595ff46986
creater_id=1592549674520418745
Annotations: rainbond.com/tolerate-unready-endpoints: true
Status: Running
IP: 10.244.3.196
IPs:
IP: 10.244.3.196
Controlled By: StatefulSet/gr8333e7
Containers:
f6719b6d0f2adace1d930dc5f48333e7:
Container ID: docker://2258ee0b766f2ce261563de4bda331f8bcb172ec474f0c78a9e0627eb6dbe708
Image: goodrain.me/f6719b6d0f2adace1d930dc5f48333e7:20200619145017
Image ID: docker-pullable://goodrain.me/0b8f5af437254bb55b4d8907a0bbb3ab@sha256:8e4eca55761ebadacc6503acced877fa69689389b27c56a15ba165810e563e31
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 19 Jun 2020 14:54:36 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1280m
memory: 1Gi
Requests:
cpu: 240m
memory: 1Gi
Readiness: tcp-socket :3306 delay=4s timeout=5s period=3s #success=1 #failure=3
Environment:
LOGGER_DRIVER_NAME: streamlog
REVERSE_DEPEND_SERVICE: gr512123:28d93ce6688d13325dc7986169512123,gr58ee27:599b46254ee0690d3ee750b5ab58ee27,gr5efe93:9c69edab427540f0aecc9bd0bb5efe93
DB_HOST: 127.0.0.1
DB_PORT: 3306
TENANT_ID: 672f06d5992f4b4580ae04289e33dde4
SERVICE_ID: f6719b6d0f2adace1d930dc5f48333e7
MEMORY_SIZE: large
SERVICE_NAME: gr8333e7
SERVICE_POD_NUM: 1
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
Mounts:
/var/lib/mysql from manual165 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lctjg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
manual165:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: manual165-gr8333e7-0
ReadOnly: false
default-token-lctjg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lctjg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
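The describe output above already names the owner to delete ("Controlled By: StatefulSet/gr8333e7"). A small sketch of turning that line into the delete command; the parsing is plain shell, and the equivalent jsonpath query on the pod's ownerReferences is shown as a comment:

```shell
# Parse the "Controlled By" value from the describe output above.
describe_line='Controlled By: StatefulSet/gr8333e7'
owner=${describe_line#*: }   # -> "StatefulSet/gr8333e7"
kind=${owner%%/*}            # -> "StatefulSet"
name=${owner##*/}            # -> "gr8333e7"
kind_lc=$(printf '%s' "$kind" | tr 'A-Z' 'a-z')
echo "kubectl delete $kind_lc $name -n 672f06d5992f4b4580ae04289e33dde4"

# The same owner can be read programmatically instead of from describe:
#   kubectl get po gr8333e7-0 -n 672f06d5992f4b4580ae04289e33dde4 \
#     -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
```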
[root@master ~]# grctl service get gr8333e7 -t npwrtv4l
Namespace: 672f06d5992f4b4580ae04289e33dde4
ServiceID: f6719b6d0f2adace1d930dc5f48333e7
ReplicationType: statefulset
ReplicationID: gr8333e7
Status: running
------------Service------------
+---------------------+----------------+------------+
| Name | IP | Port |
+---------------------+----------------+------------+
| gr8333e7 | None | (TCP:3306) |
| service-392-3306 | 10.108.223.74 | (TCP:3306) |
| service-392-3306out | 10.105.125.132 | (TCP:3306) |
+---------------------+----------------+------------+
------------Ingress------------
+------+------+
| Name | Host |
+------+------+
+------+------+
-------------------Pod_1-----------------------
PodName: gr8333e7-0
PodStatus: Initialized : True Ready : True ContainersReady : True PodScheduled : True
PodIP: 10.244.3.196
PodHostIP: 172.31.200.68
PodHostName: node1
PodVolumePath:
PodStratTime: 2020-06-19T14:54:34+08:00
Containers:
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
| ID | Name | Image | State |
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
| 2258ee0b766f | f6719b6d0f2adace1d930dc5f48333e7 | goodrain.me/0b8f5af437254bb55b4d8907a0bbb3ab:20200424215058 | Running(2020-06-19T14:54:36+08:00) |
+--------------+----------------------------------+-------------------------------------------------------------+------------------------------------+
[root@master ~]# kubectl delete sts gr8333e7 -n 672f06d5992f4b4580ae04289e33dde4
statefulset.apps "gr8333e7" deleted
Checking the kubelet log afterwards, the error no longer appears.
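To confirm the recovery, count how often the error still shows up in the kubelet journal; a sketch, assuming the kubelet runs as a systemd unit named kubelet (the grep count itself is demonstrated on a stand-in string, not real journal output):

```shell
# On the node, the live check would be:
#   journalctl -u kubelet --since "2 min ago" | grep -c 'PVC is being deleted'
# A count of 0 means the desired_state_of_world_populator has stopped
# complaining. Demonstrated on a placeholder standing in for post-fix output:
log='placeholder for post-fix kubelet journal excerpt'
count=$(printf '%s\n' "$log" | grep -c 'PVC is being deleted' || true)
echo "remaining errors: $count"
```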