The default health check

Here the Pod's restartPolicy is set to OnFailure; the default is Always.
[machangwei@mcwk8s-master ~]$ cat mcwHealthcheck.yml #The following configuration has a problem
apiVersion: v1
kind: Pod
metadata:
  name: mcw-healthcheck
  labels:
    test: mcw-healthcheck
spec:
  restartPolicy: OnFailure
  selector:
    matchLabels:
      test: mcw-healthcheck
  containers:
  - name: mcw-healthcheck
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10;exit 1
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwH
mcwHealthcheck.yml mcwHttpdService.yml mcwHttpd.yml
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwHealthcheck.yml
error: error validating "mcwHealthcheck.yml": error validating data: ValidationError(Pod.spec): unknown field "selector" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
[machangwei@mcwk8s-master ~]$ #The configuration above is wrong, so remove the selector: the Pod resource type has no selector field. A Pod does not need to match
[machangwei@mcwk8s-master ~]$ #labels and then act on Pods carrying a given label; that is what controllers and Services do.
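For reference, label selectors belong on controller and Service objects rather than on Pods. A minimal sketch of where spec.selector is valid (a hypothetical Deployment, not part of this lab):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-dep            # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:                    # the Deployment selects Pods by label
    matchLabels:
      test: mcw-healthcheck
  template:
    metadata:
      labels:                  # the Pod template only carries labels; it never selects
        test: mcw-healthcheck
    spec:
      containers:
      - name: example
        image: busybox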
[machangwei@mcwk8s-master ~]$ vim mcwHealthcheck.yml
[machangwei@mcwk8s-master ~]$ cat mcwHealthcheck.yml
apiVersion: v1
kind: Pod
metadata:
  name: mcw-healthcheck
  labels:
    test: mcw-healthcheck
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-healthcheck
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10;exit 1
[machangwei@mcwk8s-master ~]#The configuration above is correct. restartPolicy is OnFailure (the other options are Always and Never), so if the container process exits with a non-zero code,
[machangwei@mcwk8s-master ~]#the container is treated as failed and is restarted. The args simulate a failure: the container runs for 10 seconds and then exits with a non-zero code.
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwHealthcheck.yml #Deploy the pod
pod/mcw-healthcheck created
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #Check the pod; it is currently running normally
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 1/1 Running 0 31s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #The container exited with an error
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 0/1 Error 0 45s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #The container was restarted once and is running again
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 1/1 Running 1 (22s ago) 62s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #The container exited again
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 0/1 Error 1 (38s ago) 78s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 0/1 CrashLoopBackOff 1 (24s ago) 90s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #Running again
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 1/1 Running 2 (41s ago) 107s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 1/1 Running 3 (44s ago) 2m33s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #Restarted 3 times
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 0/1 Error 3 (53s ago) 2m42s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck #Restarted 8 times; with CrashLoopBackOff the kubelet waits longer and longer before each restart
NAME READY STATUS RESTARTS AGE
mcw-healthcheck 0/1 CrashLoopBackOff 8 (3m19s ago) 23m
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-healthcheck --show-labels #Show the labels we set
NAME READY STATUS RESTARTS AGE LABELS
mcw-healthcheck 0/1 CrashLoopBackOff 9 (4m37s ago) 30m test=mcw-healthcheck

Liveness probe

That is, the liveness probe.
After the liveness probe fails several consecutive times, the container is killed and restarted.
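Besides the probe command itself, the Probe object supports a few tuning fields. A minimal sketch of the fields used in this chapter, with illustrative values (when a field is omitted the defaults are successThreshold: 1 and failureThreshold: 3, which matches the "#success=1 #failure=3" shown later by kubectl describe):

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 10      # wait 10s after the container starts before the first probe
  periodSeconds: 5             # run the probe every 5 seconds
  timeoutSeconds: 1            # each probe attempt must finish within 1 second
  successThreshold: 1          # one success marks the probe healthy again
  failureThreshold: 3          # three consecutive failures kill and restart the container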

1. The incorrect configuration file

[machangwei@mcwk8s-master ~]$ cat mcwLiveness.yml #The following has a problem; not obvious why at first
apiVersion: v1
kind: Pod
metadata:
  name: mcw-liveness
  labels:
    test: mcw-liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSenconds: 5
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwLiveness.yml
error: error validating "mcwLiveness.yml": error validating data: ValidationError(Pod.spec.containers[0].livenessProbe): unknown field "periodSenconds" in io.k8s.api.core.v1.Probe; if you choose to ignore these errors, turn validation off with --validate=false
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ vim mcwLiveness.yml
[machangwei@mcwk8s-master ~]$ cat mcwLiveness.yml #Changed the probe interval field into a timeout field; this did not give the intended effect
apiVersion: v1
kind: Pod
metadata:
  name: mcw-liveness
  labels:
    test: mcw-liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      timeoutSeconds: 5
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwLiveness.yml
pod/mcw-liveness created
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 32s
mcw-liveness 1/1 Running 0 114s
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ vim mcwLiveness.yml
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ cat mcwLiveness.yml #Changed sleep 600 to sleep 120
apiVersion: v1
kind: Pod
metadata:
  name: mcw-liveness
  labels:
    test: mcw-liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      timeoutSeconds: 5
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwLiveness.yml #Re-applying the Pod is rejected. Is it Pods in general that cannot be updated in place, or only Pods with a liveness probe?
The Pod "mcw-liveness" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds`, `spec.tolerations` (only additions to existing tolerations) or `spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
  core.PodSpec{
   Volumes: {{Name: "kube-api-access-cv2xz", VolumeSource: {Projected: &{Sources: {{ServiceAccountToken: &{ExpirationSeconds: 3607, Path: "token"}}, {ConfigMap: &{LocalObjectReference: {Name: "kube-root-ca.crt"}, Items: {{Key: "ca.crt", Path: "ca.crt"}}}}, {DownwardAPI: &{Items: {{Path: "namespace", FieldRef: &{APIVersion: "v1", FieldPath: "metadata.namespace"}}}}}}, DefaultMode: &420}}}},
   InitContainers: nil,
   Containers: []core.Container{
   {
   Name: "mcw-liveness",
   Image: "busybox",
   Command: nil,
   Args: []string{
   "/bin/sh",
   "-c",
-  "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120",
+  "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600",
   },
   WorkingDir: "",
   Ports: nil,
   ... // 16 identical fields
   },
   },
   EphemeralContainers: nil,
   RestartPolicy: "OnFailure",
   ... // 26 identical fields
  }
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl delete -f mcwLiveness.yml #So delete it and redeploy.
pod "mcw-liveness" deleted
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwLiveness.yml
pod/mcw-liveness created
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness #Didn't see any obvious effect; I should have looked at the events and logs but forgot to
NAME READY STATUS RESTARTS AGE
mcw-liveness 0/1 ContainerCreating 0 15s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 21s
mcw-liveness 1/1 Running 2 (115s ago) 5m15s
[machangwei@mcwk8s-master ~]$

2. The correct configuration file

[machangwei@mcwk8s-master ~]$ ls
mcwHealthcheck.yml mcwhttpd2quanyml mcwHttpdService.yml mcw.httpd.v16.yml mcw.httpd.v17.yml mcw.httpd.v18.yml mcwHttpd.yml mcwLiveness.yml mm.yml
[machangwei@mcwk8s-master ~]$ vim mcwLiveness.yml
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ cat mcwLiveness.yml #Changed back to the interval parameter, periodSeconds: 5. It looks almost identical to the earlier attempt, yet this one works; why?
apiVersion: v1
kind: Pod
metadata:
  name: mcw-liveness
  labels:
    test: mcw-liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
[machangwei@mcwk8s-master ~]$ kubectl delete -f mcwLiveness.yml
pod "mcw-liveness" deleted
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwLiveness.yml
pod/mcw-liveness created
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 23s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness #It keeps showing Running
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 26s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 30s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 31s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 33s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 34s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 36s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 39s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 40s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 42s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 43s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 45s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 47s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 48s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 49s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 51s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 54s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 55s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 57s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 60s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 63s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 65s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 66s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 68s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 70s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 74s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 76s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 78s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 0 81s
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-liveness #Look at its details
Name: mcw-liveness
Namespace: default
Priority: 0
Node: mcwk8s-node2/10.0.0.6
Start Time: Sat, 22 Jan 2022 09:47:53 +0800
Labels: test=mcw-liveness
Annotations: <none>
Status: Running
IP: 10.244.2.20
IPs:
IP: 10.244.2.20
Containers:
mcw-liveness:
Container ID: docker://a2c603fc88bd32bd021c05234a719c11a8fe2fb31a1db1b90433bdfd5c1ebecc
Image: busybox
Image ID: docker-pullable://busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120
State: Running
Started: Sat, 22 Jan 2022 09:48:10 +0800
Ready: True
Restart Count: 0
Liveness: exec [cat /tmp/healthy] delay=10s timeout=1s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhb6w (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-lhb6w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 97s default-scheduler Successfully assigned default/mcw-liveness to mcwk8s-node2
Normal Pulled 81s kubelet Successfully pulled image "busybox" in 15.677393047s
Normal Created 81s kubelet Created container mcw-liveness
Normal Started 81s kubelet Started container mcw-liveness
Warning Unhealthy 37s (x3 over 47s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 37s kubelet Container mcw-liveness failed liveness probe, will be restarted
Normal Pulling 7s (x2 over 97s) kubelet Pulling image "busybox"
[machangwei@mcwk8s-master ~]$ #From the above: the liveness probe failed three times in a row (x3), the container was killed, and the image is being pulled again to recreate the container; this is the second pull of the image (x2)
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness #Checking again below, the pod now shows one restart
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 1 (25s ago) 116s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 1 (49s ago) 2m20s
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-liveness #Describe the pod again
.......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m24s default-scheduler Successfully assigned default/mcw-liveness to mcwk8s-node2
Normal Pulled 2m8s kubelet Successfully pulled image "busybox" in 15.677393047s
Normal Killing 84s kubelet Container mcw-liveness failed liveness probe, will be restarted
Normal Pulling 54s (x2 over 2m24s) kubelet Pulling image "busybox"
Normal Created 39s (x2 over 2m8s) kubelet Created container mcw-liveness
Normal Pulled 39s kubelet Successfully pulled image "busybox" in 15.73643804s
Normal Started 38s (x2 over 2m8s) kubelet Started container mcw-liveness
Warning Unhealthy 4s (x4 over 94s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
[machangwei@mcwk8s-master ~]$ #From the above, a second round of probe failures has started (4 failures in total). The recreated container does have the file at first, but it is deleted again 30 seconds later
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness
NAME READY STATUS RESTARTS AGE
mcw-liveness 1/1 Running 1 (115s ago) 3m26s
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-liveness #Describe the pod again a while later
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m20s default-scheduler Successfully assigned default/mcw-liveness to mcwk8s-node2
Normal Pulled 4m4s kubelet Successfully pulled image "busybox" in 15.677393047s
Normal Pulled 2m35s kubelet Successfully pulled image "busybox" in 15.73643804s
Normal Created 2m35s (x2 over 4m4s) kubelet Created container mcw-liveness
Normal Started 2m34s (x2 over 4m4s) kubelet Started container mcw-liveness
Warning Unhealthy 110s (x6 over 3m30s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 110s (x2 over 3m20s) kubelet Container mcw-liveness failed liveness probe, will be restarted
Normal Pulling 80s (x3 over 4m20s) kubelet Pulling image "busybox"
Warning Failed 11s kubelet Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 223.5.5.5:53: read udp 10.0.0.6:45742->223.5.5.5:53: i/o timeout
Warning Failed 11s kubelet Error: ErrImagePull
Normal BackOff 10s kubelet Back-off pulling image "busybox"
Warning Failed 10s kubelet Error: ImagePullBackOff
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-liveness
......#From the above, the image has now been pulled three times and the third pull failed; below it goes on to a fourth pull (Pulling x4)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m31s default-scheduler Successfully assigned default/mcw-liveness to mcwk8s-node2
Normal Pulled 5m15s kubelet Successfully pulled image "busybox" in 15.677393047s
Normal Pulled 3m46s kubelet Successfully pulled image "busybox" in 15.73643804s
Normal Created 3m46s (x2 over 5m15s) kubelet Created container mcw-liveness
Normal Started 3m45s (x2 over 5m15s) kubelet Started container mcw-liveness
Warning Unhealthy 3m1s (x6 over 4m41s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 3m1s (x2 over 4m31s) kubelet Container mcw-liveness failed liveness probe, will be restarted
Warning Failed 82s kubelet Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 223.5.5.5:53: read udp 10.0.0.6:45742->223.5.5.5:53: i/o timeout
Warning Failed 82s kubelet Error: ErrImagePull
Normal BackOff 81s kubelet Back-off pulling image "busybox"
Warning Failed 81s kubelet Error: ImagePullBackOff
Normal Pulling 67s (x4 over 5m31s) kubelet Pulling image "busybox"
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness #Pod status shows the image pull failure
NAME READY STATUS RESTARTS AGE
mcw-liveness 0/1 ErrImagePull 1 (2m48s ago) 5m49s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-liveness -o wide #This pull succeeded and the container is running again
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mcw-liveness 1/1 Running 2 (3m25s ago) 6m26s 10.244.2.20 mcwk8s-node2 <none> <none>
[machangwei@mcwk8s-master ~]$

3. Overview of the liveness configuration file

[machangwei@mcwk8s-master ~]$ cat mcwLiveness.yml
apiVersion: v1
kind: Pod
metadata:
  name: mcw-liveness
  labels:
    test: mcw-liveness
spec:
  restartPolicy: OnFailure   #restart the container when it fails
  containers:
  - name: mcw-liveness
    image: busybox
    args:
    - /bin/sh   #Simulate a fault: after the container starts it creates the file and deletes it 30 seconds later; the trailing sleep simply keeps the container process alive
    - -c        #The probe below starts 10 seconds after the container starts, so it succeeds before the file is deleted at 30 seconds and only starts failing after the deletion
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120
    livenessProbe:   #Liveness probe settings. The result is judged by the command's exit code: 0 means healthy, non-zero means the probe failed
      exec:          #By default, three consecutive failures kill the container, after which it is recreated
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10   #Start probing 10 seconds after the container starts; choose this based on how long the application needs to start, so probing begins only after the application is up
      periodSeconds: 5          #Probe interval; after three consecutive failed probes the container is killed and rebuilt
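Besides exec, a probe can also use an httpGet or tcpSocket handler; the Scale Up section below uses httpGet. A hedged sketch of the two variants (the path, port, and timings are illustrative, not taken from this cluster):

livenessProbe:
  httpGet:                     # healthy if the HTTP response code is in the 200-399 range
    scheme: HTTP
    path: /healthz             # illustrative path; the application must actually serve it
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5

readinessProbe:
  tcpSocket:                   # healthy if a TCP connection to the port can be opened
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5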

Readiness probe

A leftover pod was stuck in Terminating and could not be deleted normally, so force-delete it:
mcw-readiness 0/1 Terminating 0 17m
[machangwei@mcwk8s-master ~]$ kubectl delete pod mcw-readiness --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "mcw-readiness" force deleted [machangwei@mcwk8s-master ~]$ ls
mcwHealthcheck.yml mcwhttpd2quanyml mcwHttpdService.yml mcw.httpd.v16.yml mcw.httpd.v17.yml mcw.httpd.v18.yml mcwHttpd.yml mcwLiveness.yml mcwReadiness.yml mm.yml
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwReadiness.yml
pod/mcw-readiness created
[machangwei@mcwk8s-master ~]$ cat mcwReadiness.yml #Same as the liveness config, except the probe is changed to readinessProbe
apiVersion: v1
kind: Pod
metadata:
  name: mcw-readiness
  labels:
    test: mcw-readiness
spec:
  restartPolicy: OnFailure
  containers:
  - name: mcw-readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 120
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 0/1 Running 0 24s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness #Running and ready
NAME READY STATUS RESTARTS AGE
mcw-readiness 1/1 Running 0 32s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 1/1 Running 0 44s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 1/1 Running 0 49s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 1/1 Running 0 54s
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness #After three consecutive probe failures, READY becomes 0/1, i.e. the pod is marked not ready
NAME READY STATUS RESTARTS AGE
mcw-readiness 0/1 Running 0 64s
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-readiness #Describe the pod
........
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 90s default-scheduler Successfully assigned default/mcw-readiness to mcwk8s-node2
Normal Pulling 89s kubelet Pulling image "busybox"
Normal Pulled 73s kubelet Successfully pulled image "busybox" in 15.706951792s
Normal Created 73s kubelet Created container mcw-readiness
Normal Started 73s kubelet Started container mcw-readiness
Warning Unhealthy 0s (x10 over 40s) kubelet Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
[machangwei@mcwk8s-master ~]$ #As above, the probe fails because the file no longer exists (it has been deleted), which is exactly what we expect
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 0/1 Running 0 2m35s
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pod mcw-readiness
NAME READY STATUS RESTARTS AGE
mcw-readiness 0/1 Completed 0 24m
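Note the difference from the liveness probe: a failing readiness probe does not restart the container (RESTARTS stays 0 above); it only marks the Pod not ready, so Services stop routing traffic to it. If such a Pod sat behind a Service, the effect could be observed with a command like the following (web-svc is the hypothetical Service name used in the next section):

kubectl get endpoints web-svc   # Pods that fail readiness are removed from the Service's endpoint list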

Using Health Check in Scale Up (httpGet probe)

Gave up on this for now; will come back to it later.

[machangwei@mcwk8s-master ~]$ cat mcwhttpd2quanyml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcw-httpd2
  namespace: kube-public
spec:
  replicas: 3
  selector:
    matchLabels:
      run: mcw-httpd2
  template:
    metadata:
      labels:
        run: mcw-httpd2
    spec:
      containers:
      - name: mcw-httpd2
        image: httpd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpd2-svc
  namespace: kube-public
spec:
  selector:
    run: mcw-httpd2
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ vim mcwRforSaleup.yml
[machangwei@mcwk8s-master ~]$ cat mcwRforSaleup.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcw-web
spec:
  replicas: 3
  selector:
    matchLabels:
      run: mcw-web
  template:
    metadata:
      labels:
        run: mcw-web
    spec:
      containers:
      - name: mcw-web
        image: httpd
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /healthy
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    run: mcw-web
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwRforSaleup.yml
deployment.apps/mcw-web created
service/web-svc created
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide|grep mcw-web
mcw-web-76577f844c-lx9mm 0/1 Running 0 2m56s 10.244.2.24 mcwk8s-node2 <none> <none>
mcw-web-76577f844c-pk9sd 0/1 ContainerCreating 0 2s <none> mcwk8s-node1 <none> <none>
mcw-web-76577f844c-w554g 0/1 Running 0 11m 10.244.2.23 mcwk8s-node2 <none> <none>
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-web-76577f844c-lx9mm
.......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m9s default-scheduler Successfully assigned default/mcw-web-76577f844c-lx9mm to mcwk8s-node2
Normal Pulling 5m8s kubelet Pulling image "httpd"
Normal Pulled 4m52s kubelet Successfully pulled image "httpd" in 15.695204115s
Normal Created 4m52s kubelet Created container mcw-web
Normal Started 4m52s kubelet Started container mcw-web
Warning Unhealthy 4s (x59 over 4m39s) kubelet Readiness probe failed: Get "http://10.244.2.24:8080/healthy": dial tcp 10.244.2.24:8080: connect: connection refused
[machangwei@mcwk8s-master ~]$
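The events above explain why this attempt fails: the readiness probe targets port 8080, but the httpd container only listens on containerPort 80 (the Service's port 8080 is mapped to 80 by targetPort, which does not apply to probes), so the connection is refused; the path /healthy would also have to be something the application actually serves. A hedged sketch of a probe that would more plausibly succeed (the path here is illustrative):

        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /              # illustrative; use a URL the application really serves
            port: 80             # probe the container port, not the Service port
          initialDelaySeconds: 10
          periodSeconds: 5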

Using Health Check in rolling updates

1. A failed rolling update that does not affect the running service

[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 0/2 2 0 26s
[machangwei@mcwk8s-master ~]$ vim mcwApp.v1.yml
[machangwei@mcwk8s-master ~]$ cat mcwApp.v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcw-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: mcw-app
  template:
    metadata:
      labels:
        app: mcw-app
    spec:
      containers:
      - name: mcw-app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 20;touch /tmp/healthy; sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwApp.v1.yml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/mcw-app configured
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 2/10 10 2 59s
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 2/10 10 2 67s
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 10/10 10 10 37m
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 0 37m
mcw-app-5d455c9874-5r4xx 1/1 Running 0 38m
mcw-app-5d455c9874-7gcnw 1/1 Running 0 37m
mcw-app-5d455c9874-87lqk 1/1 Running 0 38m
mcw-app-5d455c9874-ctzps 1/1 Running 0 37m
mcw-app-5d455c9874-gwpz6 1/1 Running 0 37m
mcw-app-5d455c9874-jhvvz 1/1 Running 0 37m
mcw-app-5d455c9874-kmxqs 1/1 Running 0 37m
mcw-app-5d455c9874-ksljt 1/1 Running 0 37m
mcw-app-5d455c9874-skvv6 1/1 Running 0 37m
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ cp mcwApp.v1.yml mcwApp.v2.yml
[machangwei@mcwk8s-master ~]$ vim mcwApp.v2.yml
[machangwei@mcwk8s-master ~]$ cat mcwApp.v2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcw-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: mcw-app
  template:
    metadata:
      labels:
        app: mcw-app
    spec:
      containers:
      - name: mcw-app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwApp.v2.yml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/mcw-app configured
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 5 8 39m
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 5 8 39m
[machangwei@mcwk8s-master ~]$ kubectl get pod #5 pods are new replicas; the old replicas have been scaled down from 10 to 8
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 0 39m
mcw-app-5d455c9874-5r4xx 1/1 Running 0 40m
mcw-app-5d455c9874-7gcnw 1/1 Running 0 39m
mcw-app-5d455c9874-87lqk 1/1 Running 0 40m
mcw-app-5d455c9874-ctzps 1/1 Terminating 0 39m
mcw-app-5d455c9874-gwpz6 1/1 Running 0 39m
mcw-app-5d455c9874-jhvvz 1/1 Terminating 0 39m
mcw-app-5d455c9874-kmxqs 1/1 Running 0 39m
mcw-app-5d455c9874-ksljt 1/1 Running 0 39m
mcw-app-5d455c9874-skvv6 1/1 Running 0 39m
mcw-app-6fc84f4f96-bcqmr 0/1 Running 0 22s
mcw-app-6fc84f4f96-bv2zj 0/1 Running 0 22s
mcw-app-6fc84f4f96-pf6kd 0/1 ContainerCreating 0 22s
mcw-app-6fc84f4f96-qsb8k 0/1 ContainerCreating 0 22s
mcw-app-6fc84f4f96-xxzpq 0/1 ContainerCreating 0 22s
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE #8 old + 5 new replicas; 5 replicas are up-to-date but none of them is ready, because their health check fails
mcw-app 8/10 5 8 40m
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 5 8 42m
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 0 42m
mcw-app-5d455c9874-5r4xx 1/1 Running 0 43m
mcw-app-5d455c9874-7gcnw 1/1 Running 0 42m
mcw-app-5d455c9874-87lqk 1/1 Running 0 43m
mcw-app-5d455c9874-gwpz6 1/1 Running 0 42m
mcw-app-5d455c9874-kmxqs 1/1 Running 0 42m
mcw-app-5d455c9874-ksljt 1/1 Running 0 42m
mcw-app-5d455c9874-skvv6 1/1 Running 0 42m
mcw-app-6fc84f4f96-bcqmr 0/1 Running 0 3m17s
mcw-app-6fc84f4f96-bv2zj 0/1 Running 0 3m17s
mcw-app-6fc84f4f96-pf6kd 0/1 Running 0 3m17s
mcw-app-6fc84f4f96-qsb8k 0/1 Running 0 3m17s
mcw-app-6fc84f4f96-xxzpq 0/1 Running 0 3m17s
[machangwei@mcwk8s-master ~]$ kubectl get deployment mcw-app -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
mcw-app 8/10 5 8 43m mcw-app busybox app=mcw-app
[machangwei@mcwk8s-master ~]$
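The rollout stalls in exactly this state because of the default rolling-update limits (maxSurge and maxUnavailable both default to 25%, discussed in part 3 below). A rough check of the numbers, assuming those defaults apply here:

maxSurge       = 25% of 10 = 2.5, rounded up   -> the total may reach 10 + 3 = 13 replicas
maxUnavailable = 25% of 10 = 2.5, rounded down -> at least 10 - 2 = 8 replicas must stay available
new replicas   = 13 total - 8 old kept = 5

Since none of the 5 new Pods ever becomes ready, the controller can neither delete more old Pods nor create more new ones, so the Deployment stays at 8 old + 5 new and the old replicas keep serving traffic.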

2. Rolling back a failed rolling update

[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 5 8 47h
[machangwei@mcwk8s-master ~]$ kubectl describe deployment mcw-app
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: mcw-app-5d455c9874 (8/8 replicas created)
NewReplicaSet: mcw-app-6fc84f4f96 (5/5 replicas created)
Events: <none>
[machangwei@mcwk8s-master ~]$ kubectl rollout history deployment mcw-app
deployment.apps/mcw-app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=mcwApp.v1.yml --record=true
2 kubectl apply --filename=mcwApp.v2.yml --record=true
[machangwei@mcwk8s-master ~]$ kubectl rollout undo deployment mcw-app --to-revision=1 #Roll back to revision 1
deployment.apps/mcw-app rolled back
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 10 8 47h
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 8/10 10 8 47h
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 10/10 10 10 47h
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (42m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (44m ago) 47h
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (42m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (44m ago) 47h
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (43m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (43m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (42m ago) 47h
mcw-app-5d455c9874-mzgds 1/1 Running 0 55s
mcw-app-5d455c9874-skvv6 1/1 Running 2 (42m ago) 47h
mcw-app-5d455c9874-z5qn2 1/1 Running 0 56s
[machangwei@mcwk8s-master ~]$
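For future runs, instead of repeatedly polling kubectl get deployment, the progress of a rollout or rollback can be watched with a standard kubectl command (shown here only as a usage sketch):

kubectl rollout status deployment mcw-app   # blocks until the rollout or rollback completes or fails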

3. Customizing maxSurge and maxUnavailable for rolling updates

The following needs to be added:

  strategy:
    rollingUpdate:
      maxSurge: 35%
      maxUnavailable: 35%

maxSurge: the upper limit on how far the total number of replicas may exceed the desired count during a rolling update. The example above used the default of 25%, so the maximum total is 10+10*25% = 10+2.5 = 12.5, rounded up, which means at most 13 replicas exist during the rolling update.
maxUnavailable: the maximum number of replicas that may be unavailable relative to the desired count (it can also be set as an absolute integer). With the default of 25%, at least 10-roundDown(10*25%) = 10-roundDown(2.5) = 10-2 = 8 replicas must remain available, so at most 2 replicas may be unavailable during the rolling update.
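Applying the same arithmetic to the 35% values set below (this matches the kubectl describe output later in this section, which reports 14 total and 7 available):

maxSurge       = 35% of 10 = 3.5, rounded up   -> the total may reach 10 + 4 = 14 replicas
maxUnavailable = 35% of 10 = 3.5, rounded down -> at least 10 - 3 = 7 replicas must stay available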

[machangwei@mcwk8s-master ~]$ cp mcwApp.v2.yml  mcwApp.v3.yml
[machangwei@mcwk8s-master ~]$ vim mcwApp.v3.yml
[machangwei@mcwk8s-master ~]$ cat mcwApp.v3.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcw-app
spec:
  strategy:
    rollingUpdate:
      maxSurge: 35%
      maxUnavailable: 35%
  replicas: 10
  selector:
    matchLabels:
      app: mcw-app
  template:
    metadata:
      labels:
        app: mcw-app
    spec:
      containers:
      - name: mcw-app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwApp.v3.yml
deployment.apps/mcw-app configured
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 7/10 7 7 47h
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (57m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (57m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (57m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (57m ago) 47h
mcw-app-5d455c9874-mzgds 1/1 Terminating 0 15m
mcw-app-5d455c9874-skvv6 1/1 Terminating 2 (57m ago) 47h
mcw-app-5d455c9874-z5qn2 1/1 Terminating 0 15m
mcw-app-6fc84f4f96-68qxf 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-9fsgv 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-fnvq9 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-m9ts7 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-pg5hb 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-wsdq8 0/1 ContainerCreating 0 19s
mcw-app-6fc84f4f96-z8rnm 0/1 ContainerCreating 0 19s
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 7/10 7 7 47h
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (57m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (57m ago) 47h
mcw-app-6fc84f4f96-68qxf 0/1 ContainerCreating 0 44s
mcw-app-6fc84f4f96-9fsgv 0/1 Running 0 44s
mcw-app-6fc84f4f96-fnvq9 0/1 Running 0 44s
mcw-app-6fc84f4f96-m9ts7 0/1 ContainerCreating 0 44s
mcw-app-6fc84f4f96-pg5hb 0/1 ContainerCreating 0 44s
mcw-app-6fc84f4f96-wsdq8 0/1 Running 0 44s
mcw-app-6fc84f4f96-z8rnm 0/1 Running 0 44s
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (57m ago) 47h
mcw-app-6fc84f4f96-68qxf 0/1 ContainerCreating 0 56s
mcw-app-6fc84f4f96-9fsgv 0/1 Running 0 56s
mcw-app-6fc84f4f96-fnvq9 0/1 Running 0 56s
mcw-app-6fc84f4f96-m9ts7 0/1 ContainerCreating 0 56s
mcw-app-6fc84f4f96-pg5hb 0/1 Running 0 56s
mcw-app-6fc84f4f96-wsdq8 0/1 Running 0 56s
mcw-app-6fc84f4f96-z8rnm 0/1 Running 0 56s
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (59m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (58m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (58m ago) 47h
mcw-app-6fc84f4f96-68qxf 0/1 Running 0 72s
mcw-app-6fc84f4f96-9fsgv 0/1 Running 0 72s
mcw-app-6fc84f4f96-fnvq9 0/1 Running 0 72s
mcw-app-6fc84f4f96-m9ts7 0/1 Running 0 72s
mcw-app-6fc84f4f96-pg5hb 0/1 Running 0 72s
mcw-app-6fc84f4f96-wsdq8 0/1 Running 0 72s
mcw-app-6fc84f4f96-z8rnm 0/1 Running 0 72s
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl describe deployment mcw-app
Name: mcw-app
Namespace: default
CreationTimestamp: Wed, 26 Jan 2022 00:51:50 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 4
Selector: app=mcw-app
Replicas: 10 desired | 7 updated | 14 total | 7 available | 7 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 35% max unavailable, 35% max surge
Pod Template:
Labels: app=mcw-app
Containers:
mcw-app:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
sleep 3000
Readiness: exec [cat /tmp/healthy] delay=10s timeout=1s period=5s #success=1 #failure=3
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
OldReplicaSets: mcw-app-5d455c9874 (7/7 replicas created)
NewReplicaSet: mcw-app-6fc84f4f96 (7/7 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 17m deployment-controller Scaled down replica set mcw-app-6fc84f4f96 to 0
Normal ScalingReplicaSet 17m deployment-controller Scaled up replica set mcw-app-5d455c9874 to 10
Normal ScalingReplicaSet 98s deployment-controller Scaled up replica set mcw-app-6fc84f4f96 to 4
Normal ScalingReplicaSet 98s deployment-controller Scaled down replica set mcw-app-5d455c9874 to 7
Normal ScalingReplicaSet 98s deployment-controller Scaled up replica set mcw-app-6fc84f4f96 to 7
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl rollout history deployment mcw-app
deployment.apps/mcw-app
REVISION CHANGE-CAUSE
3 kubectl apply --filename=mcwApp.v1.yml --record=true
4 kubectl apply --filename=mcwApp.v2.yml --record=true
[machangwei@mcwk8s-master ~]$ mv mcwApp.v1.yml mcwApp.v1.ymlbak #Even with this file moved away, the rollback still works and is not affected; but the v3 apply was run without --record, so no CHANGE-CAUSE was recorded for it
[machangwei@mcwk8s-master ~]$ kubectl rollout undo deployment mcw-app --to-revision=3 #Roll back
deployment.apps/mcw-app rolled back
[machangwei@mcwk8s-master ~]$ kubectl get deployment #With the two max parameters changed, both counts have changed accordingly; the rollback is now in progress
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 7/10 10 7 47h
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (62m ago) 47h
mcw-app-5d455c9874-78hxb 0/1 ContainerCreating 0 25s
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (62m ago) 47h
mcw-app-5d455c9874-dtk7n 0/1 Running 0 25s
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-npsbq 0/1 Running 0 25s
mcw-app-6fc84f4f96-68qxf 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-9fsgv 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-fnvq9 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-m9ts7 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-pg5hb 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-wsdq8 0/1 Terminating 0 4m3s
mcw-app-6fc84f4f96-z8rnm 0/1 Terminating 0 4m3s
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mcw-app 10/10 10 10 47h
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mcw-app-5d455c9874-2g7zl 1/1 Running 2 (62m ago) 47h
mcw-app-5d455c9874-5r4xx 1/1 Running 2 (63m ago) 47h
mcw-app-5d455c9874-78hxb 1/1 Running 0 62s
mcw-app-5d455c9874-7gcnw 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-87lqk 1/1 Running 2 (63m ago) 47h
mcw-app-5d455c9874-dtk7n 1/1 Running 0 62s
mcw-app-5d455c9874-gwpz6 1/1 Running 2 (62m ago) 47h
mcw-app-5d455c9874-kmxqs 1/1 Running 2 (62m ago) 47h
mcw-app-5d455c9874-ksljt 1/1 Running 2 (61m ago) 47h
mcw-app-5d455c9874-npsbq 1/1 Running 0 62s
[machangwei@mcwk8s-master ~]$
