volume

emptyDir


[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwVolume1.yml # deploy the emptyDir example
pod/producer-consumer created
[machangwei@mcwk8s-master ~]$ cat mcwVolume1.yml
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello world" >/producer_dir/hello ; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello ; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}

######This manifest defines a Pod. Under volumes, a volume named shared-volume is declared; the empty dict after emptyDir marks it as an empty directory. Under spec.containers there are two image entries, i.e. two containers.
#The metadata name becomes the running Pod's name. Since the Pod has two containers, the READY column below should eventually show two containers; for now 0 are ready because they are still being created.
#Each entry under containers configures one container: the image to run and the container's name. volumeMounts specifies which volume this container mounts,
#mountPath is the directory inside the container, and name refers to the volume defined later in the manifest. The args field is the command run after the container starts:
#the producer writes data into a file under its mount directory and then sleeps; the consumer reads (consumes) the file under its mount directory and then sleeps.
#Because both containers mount the same volume, the two containers in this Pod share it. The volume itself is simply a directory on the node, as shown below.
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
producer-consumer 0/2 ContainerCreating 0 84s
[machangwei@mcwk8s-master ~]$ kubectl describe pod producer-consumer
Name: producer-consumer
Namespace: default
Priority: 0
Node: mcwk8s-node1/10.0.0.5
Start Time: Fri, 18 Feb 2022 20:20:38 +0800
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.15
IPs:
IP: 10.244.1.15
Containers:
producer:
Container ID: docker://e9a06c83f73861d15a115cc95665d19d8b6ef024546d339cc049d1571840cad7
Image: busybox
Image ID: docker-pullable://busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
echo "hello world" >/producer_dir/hello ; sleep 30000
State: Running
Started: Fri, 18 Feb 2022 20:21:54 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/producer_dir from shared-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spkgq (ro)
consumer:
Container ID: docker://634dc8fe053cc6ea1ee0f5f818cac6a7babbd5f84804579386382e2d2c6d1b40
Image: busybox
Image ID: docker-pullable://busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
cat /consumer_dir/hello ; sleep 30000
State: Running
Started: Fri, 18 Feb 2022 20:22:11 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/consumer_dir from shared-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spkgq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
shared-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-spkgq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m39s default-scheduler Successfully assigned default/producer-consumer to mcwk8s-node1
Normal Pulling 105s kubelet Pulling image "busybox"
Normal Pulled 89s kubelet Successfully pulled image "busybox" in 16.349119305s
Normal Created 89s kubelet Created container producer
Normal Started 83s kubelet Started container producer
Normal Pulling 83s kubelet Pulling image "busybox"
Normal Pulled 67s kubelet Successfully pulled image "busybox" in 16.122373015s
Normal Created 67s kubelet Created container consumer
Normal Started 66s kubelet Started container consumer
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
producer-consumer 2/2 Running 0 2m55s
[machangwei@mcwk8s-master ~]$ kubectl logs producer-consumer
error: a container name must be specified for pod producer-consumer, choose one of: [producer consumer]
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl logs producer-consumer consumer
hello world
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide # check which node the Pod is on
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
producer-consumer 2/2 Running 0 6m19s 10.244.1.15 mcwk8s-node1 <none> <none>
[root@mcwk8s-node1 ~]$ docker ps|grep producer-consumer # on the node hosting the Pod, list the containers belonging to it; both containers and the pause container are on the same host
634dc8fe053c busybox "/bin/sh -c 'cat /co…" 13 minutes ago Up 13 minutes k8s_consumer_producer-consume_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
e9a06c83f738 busybox "/bin/sh -c 'echo \"h…" 13 minutes ago Up 13 minutes k8s_producer_producer-consume_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
173864534fcb registry.aliyuncs.com/google_containers/pause:3.6 "/pause" 14 minutes ago Up 13 minutes k8s_POD_producer-consumer_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
#Naming convention: each container is named after the container name from the manifest (producer/consumer), or POD for the pause container, and all three end with the Pod name producer-consumer.
#Looking below, the producer and consumer containers use the same source directory, while the destination is the mount path specified for each container. If a container is recreated, the source and
#destination may briefly differ, but once the containers are up they should look similar to the following.
[root@mcwk8s-node1 ~]$ docker inspect e9a06c8|grep -iA 8 "mounts"
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume",
"Destination": "/producer_dir",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
[root@mcwk8s-node1 ~]$ docker inspect 634dc8|grep -iA 8 "mounts"
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume",
"Destination": "/consumer_dir",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
[root@mcwk8s-node1 ~]$

Check where the volume actually lives on the host, and the data written by the two containers

[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume
hello
[root@mcwk8s-node1 ~]$ cat /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume/hello
hello world
[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/
kubernetes.io~empty-dir kubernetes.io~projected

[root@mcwk8s-node1 ~]$ docker inspect 634dc8
[
{
"Id": "634dc8fe053cc6ea1ee0f5f818cac6a7babbd5f84804579386382e2d2c6d1b40",
"Created": "2022-02-18T12:22:10.65373133Z",
"Path": "/bin/sh",
"Args": [
"-c",
"cat /consumer_dir/hello ; sleep 30000"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 91494,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-02-18T12:22:11.920497543Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a",
"ResolvConfPath": "/var/lib/docker/containers/173864534fcb27ce32342a5b5ba9edc4adc3731c6a629f4175e3bedf173b1dc2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/173864534fcb27ce32342a5b5ba9edc4adc3731c6a629f4175e3bedf173b1dc2/hostname",
"HostsPath": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/etc-hosts",
"LogPath": "/var/lib/docker/containers/634dc8fe053cc6ea1ee0f5f818cac6a7babbd5f84804579386382e2d2c6d1b40/634dc8fe053cc6ea1ee0f5f818cac6a7babbd5f84804579386382e2d2c6d1b40-json.log",
"Name": "/k8s_consumer_producer-consumer_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume:/consumer_dir:Z",
"/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~projected/kube-api-access-spkgq:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z",
"/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/etc-hosts:/etc/hosts:Z",
"/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/containers/consumer/61a05b66:/dev/termination-log:Z"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "container:173864534fcb27ce32342a5b5ba9edc4adc3731c6a629f4175e3bedf173b1dc2",
"PortBindings": null,
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "container:173864534fcb27ce32342a5b5ba9edc4adc3731c6a629f4175e3bedf173b1dc2",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 1000,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined"
],
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 2,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "kubepods-besteffort-podd32da9de_8574_458a_97c7_31cf9107f0a1.slice",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 100000,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b45ab6cc1d2f691b9529c5dc413b8bcaa62350a62a43d9b2d45f2f2cdc7fb63d-init/diff:/var/lib/docker/overlay2/5c9e2add80e4eb9b40376cc60407679fdc4b510a0c146356397e8a769c1a307c/diff",
"MergedDir": "/var/lib/docker/overlay2/b45ab6cc1d2f691b9529c5dc413b8bcaa62350a62a43d9b2d45f2f2cdc7fb63d/merged",
"UpperDir": "/var/lib/docker/overlay2/b45ab6cc1d2f691b9529c5dc413b8bcaa62350a62a43d9b2d45f2f2cdc7fb63d/diff",
"WorkDir": "/var/lib/docker/overlay2/b45ab6cc1d2f691b9529c5dc413b8bcaa62350a62a43d9b2d45f2f2cdc7fb63d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume",
"Destination": "/consumer_dir",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~projected/kube-api-access-spkgq",
"Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
"Mode": "ro,Z",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/etc-hosts",
"Destination": "/etc/hosts",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/containers/consumer/61a05b66",
"Destination": "/dev/termination-log",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "producer-consumer",
"Domainname": "",
"User": "0",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
"KUBERNETES_SERVICE_HOST=10.96.0.1",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_SERVICE_PORT_HTTPS=443",
"KUBERNETES_PORT=tcp://10.96.0.1:443",
"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"cat /consumer_dir/hello ; sleep 30000"
],
"Healthcheck": {
"Test": [
"NONE"
]
},
"Image": "busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"annotation.io.kubernetes.container.hash": "2c893f27",
"annotation.io.kubernetes.container.restartCount": "0",
"annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
"annotation.io.kubernetes.container.terminationMessagePolicy": "File",
"annotation.io.kubernetes.pod.terminationGracePeriod": "30",
"io.kubernetes.container.logpath": "/var/log/pods/default_producer-consumer_d32da9de-8574-458a-97c7-31cf9107f0a1/consumer/0.log",
"io.kubernetes.container.name": "consumer",
"io.kubernetes.docker.type": "container",
"io.kubernetes.pod.name": "producer-consumer",
"io.kubernetes.pod.namespace": "default",
"io.kubernetes.pod.uid": "d32da9de-8574-458a-97c7-31cf9107f0a1",
"io.kubernetes.sandbox.id": "173864534fcb27ce32342a5b5ba9edc4adc3731c6a629f4175e3bedf173b1dc2"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
}
]
[root@mcwk8s-node1 ~]$

Inspect the details of a specific container

[root@mcwk8s-node1 ~]$ docker ps|grep producer-consumer
634dc8fe053c busybox "/bin/sh -c 'cat /co…" 3 hours ago Up 3 hours k8s_consumer_producer-consumer_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
e9a06c83f738 busybox "/bin/sh -c 'echo \"h…" 3 hours ago Up 3 hours k8s_producer_producer-consumer_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
173864534fcb registry.aliyuncs.com/google_containers/pause:3.6 "/pause" 3 hours ago Up 3 hours k8s_POD_producer-consumer_default_d32da9de-8574-458a-97c7-31cf9107f0a1_0
[root@mcwk8s-node1 ~]$
[root@mcwk8s-node1 ~]$
[root@mcwk8s-node1 ~]$ docker inspect 634dc8|grep -iA 8 "mounts"
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume",
"Destination": "/consumer_dir",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
},
[root@mcwk8s-node1 ~]$ docker exec -it 634d cat /consumer_dir/hello
hello world
[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume
hello
[root@mcwk8s-node1 ~]$ cat /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume/hello
hello world
[root@mcwk8s-node1 ~]$ echo -e " mcw">>/var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume/hello
[root@mcwk8s-node1 ~]$ docker exec -it 634d cat /consumer_dir/hello # after modifying the file on the host, the container sees the modified content
hello world
mcw
[root@mcwk8s-node1 ~]$ docker exec -it 634d touch /consumer_dir/mcw.txt # a change made inside the container is also visible on the host
[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d32da9de-8574-458a-97c7-31cf9107f0a1/volumes/kubernetes.io~empty-dir/shared-volume
hello mcw.txt
[root@mcwk8s-node1 ~]$
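A note on lifetime: an emptyDir lives and dies with its Pod. Once the Pod is deleted, kubelet removes the backing directory under /var/lib/kubelet/pods/<pod-uid>/ as well, so it is only suitable for temporary scratch space shared between containers. If the scratch space should live in memory (tmpfs) rather than on the node's disk, the volume can be declared as in the sketch below (an assumption based on the Medium field shown in the describe output above; this article does not actually use it):

  volumes:
  - name: shared-volume
    emptyDir:
      medium: Memory      # back the emptyDir with tmpfs instead of the node's disk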

hostPath

[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces|grep apiserver
kube-system kube-apiserver-mcwk8s-master 1/1 Running 15 (110m ago) 28d
[machangwei@mcwk8s-master ~]$ kubectl edit --namespace=kube-system pod kube-apiserver-mcwk8s-master
......
    volumeMounts:                      # viewing the apiserver Pod's configuration:
    - mountPath: /etc/ssl/certs        # three directories are mounted into the container, each referencing a named volume, mounted read-only
      name: ca-certs                   # the volumes used for mounting are defined further below
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  nodeName: mcwk8s-master
  preemptionPolicy: PreemptLowerPriority
  priority: 2000001000
  priorityClassName: system-node-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:                          # a directory on the host is mapped into the container; the volume gets a name and is referenced by that name above
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
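Besides the system Pods above, a regular Pod can use hostPath the same way. A minimal sketch (the names and the /tmp/mcw_data path are only examples, not taken from this article) might look like this; unlike emptyDir, the data survives the Pod being deleted, but it stays on whichever node the Pod happened to run on:

apiVersion: v1
kind: Pod
metadata:
  name: mcw-hostpath-test          # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /mcw_data         # directory inside the container
      name: host-data
  volumes:
  - name: host-data
    hostPath:
      path: /tmp/mcw_data          # directory on the node; created if it does not exist
      type: DirectoryOrCreate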

External Storage Provider

Cloud disk storage

PersistentVolume & PersistentVolumeClaim

persistent adj. lasting; continuing; persevering, tenacious; recurring
claim v. to assert, declare; to demand or request (ownership of something); to take (a life)
      n. an assertion; a claim (to property, compensation, etc.)

PersistentVolume & PersistentVolumeClaim (PV & PVC)

A PV is a persistent volume; a PVC is a request (claim) for a persistent volume.

NFS PersistentVolume

NFS server

Reference: https://www.cnblogs.com/machangwei-8/articles/15487295.html#_label5

[root@mcwk8s-master ~]$ showmount -e 10.0.0.4  # deploy the NFS server first; its exported directory here is /nfsdata
Export list for 10.0.0.4:
/nfsdata *
[root@mcwk8s-master ~]$

To use external storage (NFS mounts in this case), an NFS server is required; it was set up above.
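For completeness, a minimal NFS server setup on CentOS 7 might look like the sketch below (the /nfsdata export path matches this article; package and service names assume CentOS/RHEL, and the export options match the working configuration found later in this article):

# on the NFS server (10.0.0.4)
yum install -y nfs-utils rpcbind
mkdir -p /nfsdata/pv1
echo '/nfsdata *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now rpcbind nfs-server    # "nfs" is an alias for nfs-server on CentOS 7
exportfs -ra                                 # reload the export table
showmount -e localhost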

pv

[machangwei@mcwk8s-master ~]$ vim nfs-pv1.yml
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pv1.yml
persistentvolume/mypv1 created
[machangwei@mcwk8s-master ~]$ cat nfs-pv1.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 10.0.0.4
[machangwei@mcwk8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Available nfs 21s

The PV manifest sets apiVersion and kind PersistentVolume; the metadata name is the volume's name.

spec.capacity sets the storage size, and accessModes sets how the PV may be mounted. ReadWriteOnce means it can be mounted read-write by a single node; ReadWriteMany allows read-write mounting by multiple nodes; ReadOnlyMany allows read-only mounting by multiple nodes.

persistentVolumeReclaimPolicy is the reclaim policy: Retain means an administrator must reclaim the volume manually; Recycle wipes the data in the PV; Delete removes the corresponding resource on the storage provider.

storageClassName names the storage class, here nfs. The nfs section supplies the exported path and the server address.

pvc

The kind is PersistentVolumeClaim; apiVersion and the metadata name are set as usual. The spec sets the access mode (ReadWriteOnce, mountable read-write by a single node), the requested storage size under resources.requests, and the storage class name nfs.

[machangwei@mcwk8s-master ~]$ cat nfs-pvc1.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pvc1.yml
persistentvolumeclaim/mypvc1 created
[machangwei@mcwk8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc1 Bound mypv1 1Gi RWO nfs 8s
[machangwei@mcwk8s-master ~]$ kubectl get pvc -o wide # the PVC is bound to the PV; binding happens right after the claim is created
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
mypvc1 Bound mypv1 1Gi RWO nfs 18s Filesystem
[machangwei@mcwk8s-master ~]$ kubectl get pv -o wide # the PV is now bound by the PVC
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
mypv1 1Gi RWO Recycle Bound default/mypvc1 nfs 2m40s Filesystem

A Pod using the PVC (and an error it hit)

[machangwei@mcwk8s-master ~]$ cat pod1.yml
kind: Pod
apiVersion: v1
metadata:
  name: mypod1
spec:
  containers:
    - name: mypod1
      image: busybox
      args:
        - /bin/sh
        - -c
        - sleep 30000
      volumeMounts:
        - mountPath: "/mydata"
          name: mydata
  volumes:
    - name: mydata
      persistentVolumeClaim:
        claimName: mypvc1

#The Pod uses the PVC created earlier. The kind is Pod, the metadata name is the Pod's name, and the spec describes the container. volumeMounts sits at the same level as the image and gives the mount path inside the container and the name of the volume to use; that
#volume is defined below. The volumes section is at the same level as containers: it names the volume and references the previously created PVC through persistentVolumeClaim.claimName. Writing to that directory inside the container therefore writes to the corresponding
#volume directory on the host, and that directory is itself an NFS mount, so the container indirectly reads and writes the directory on the NFS server.
[machangwei@mcwk8s-master ~]$ kubectl apply -f pod1.yml
pod/mypod1 created
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod1 0/1 ContainerCreating 0 16s <none> mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod1 0/1 ContainerCreating 0 33s <none> mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod1 0/1 ContainerCreating 0 62s <none> mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$ kubectl describe pod mypod1
Name: mypod1
Namespace: default
Priority: 0
Node: mcwk8s-node1/10.0.0.5
Start Time: Sat, 19 Feb 2022 01:21:32 +0800
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mypod1:
Container ID:
Image: busybox
Image ID:
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
sleep 30000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mydata from mydata (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-28s6l (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mydata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mypvc1
ReadOnly: false
kube-api-access-28s6l:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 80s default-scheduler Successfully assigned default/mypod1 to mcwk8s-node1
Warning FailedMount 13s (x8 over 77s) kubelet MountVolume.SetUp failed for volume "mypv1" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 10.0.0.4:/nfsdata/pv1 /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~nfs/mypv1
Output: mount: wrong fs type, bad option, bad superblock on 10.0.0.4:/nfsdata/pv1,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
[machangwei@mcwk8s-master ~]$ # as seen above, mounting the PV just means mounting the network storage onto a local directory on the host

Fixing the Pod's mount problem above

A missing package

The cause: the Pod is scheduled on node1, which must mount the NFS export (served from the master)
locally. A package is missing, so the mount fails and the Pod cannot set up its volume on the node.
Mounting NFS manually on node1 fails the same way:
[root@mcwk8s-node1 ~]$ mount -t nfs 10.0.0.4:/nfsdata/ /root/mcw/
mount: wrong fs type, bad option, bad superblock on 10.0.0.4:/nfsdata/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
Cause: the nfs-utils package is missing; install it.
[root@mcwk8s-node1 ~]$ yum install nfs-utils ^C
[root@mcwk8s-node1 ~]$ mount -t nfs 10.0.0.4:/nfsdata/ /root/mcw/
[root@mcwk8s-node1 ~]$ df -h|tail -1 # the mount now works
10.0.0.4:/nfsdata 19G 3.9G 15G 22% /root/mcw
[root@mcwk8s-node1 ~]$ df -h # a moment later, pv1 has been mounted by kubelet as well
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 19G 2.6G 16G 14% /
devtmpfs 478M 0 478M 0% /dev
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 489M 51M 438M 11% /run
tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 797M 125M 673M 16% /boot
tmpfs 877M 12K 877M 1% /var/lib/kubelet/pods/9b74614a-6d0b-439c-8ea5-cf1583be7a5a/volumes/kubernetes.io~projected/kube-api-access-6hk5l
tmpfs 877M 12K 877M 1% /var/lib/kubelet/pods/57f73623-751b-41a3-883e-62c025c49f92/volumes/kubernetes.io~projected/kube-api-access-wv8cc
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/c3a132cb506fe3db901260f4e8e40ab73c20b6dbb8904c545bd6c12e45af2ce9/merged
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/9c92a061f975e8f573fc9b895664adeeec375d7609c6e00fd5facada9a525a13/merged
shm 64M 0 64M 0% /var/lib/docker/containers/fc176c43633f8e6362d47f6bbec6f2f556685bbe51564fe189cf58cd9bf69fa5/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/3bbd8916f1ae8819e27eb1269afa524eca1c60a5d0b8df56dbbf314217c063a0/mounts/shm
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/914fc6440a30892ef51129fb445349c082d79ffb0fd79d62769384c7920e983a/merged
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/fe20b74227debccec19071237760ba6c37be72eadccc3999490a8aeae432919a/merged
tmpfs 98M 0 98M 0% /run/user/0
tmpfs 877M 12K 877M 1% /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~projected/kube-api-access-28s6l
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~nfs/mypv1
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/b316299270c3541708659a9bbd6c65bb6d1fe4fa713460ad298866e1096adb0a/merged
shm 64M 0 64M 0% /var/lib/docker/containers/b35f15c56b2e6f635dd346a9352fa0cbcb77ec9ba3f9a53b62f7bb53cb249e30/mounts/shm
overlay 19G 2.6G 16G 14% /var/lib/docker/overlay2/bda202921f17c4ec866e22f2517af01a6df6434e4107b05588f4a001d8b5ec49/merged
But when verifying, writes fail with a read-only error:
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 touch /mydata/hello
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
touch: /mydata/hello: Read-only file system
command terminated with exit code 1
On the node, writing from the Pod's container fails as well:
[root@mcwk8s-node1 ~]$ docker exec -it e09 touch /mydata/mcw.txt
touch: /mydata/mcw.txt: Read-only file system

Is this failure to write from the container to NFS a container issue? Below, the export is mounted directly on the node and writing still reports a read-only file system, so the problem is on the NFS server side.

[root@mcwk8s-node1 ~]$ mount -t nfs 10.0.0.4:/nfsdata/pv1 /root/mcw/
[root@mcwk8s-node1 ~]$ df -h|grep mcw
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /root/mcw
[root@mcwk8s-node1 ~]$ touch /root/mcw/test.txt
touch: cannot touch ‘/root/mcw/test.txt’: Read-only file system
[root@mcwk8s-node1 ~]$

Fixing the NFS issue that prevented the container from writing to external storage

Can this failure to write from the container to NFS be a container issue? Below, the export is mounted
directly on the node and writing still reports a read-only file system, so the problem is on the NFS server side.
[root@mcwk8s-master ~]$ cat /etc/exports
/nfsdata *
/nfsdata/pv1 *
[root@mcwk8s-master ~]$ vim /etc/exports
[root@mcwk8s-master ~]$ cat /etc/exports
/nfsdata * (rw,sync,all_squash)
/nfsdata/pv1 * (rw,sync,all_squash)
[root@mcwk8s-master ~]$ systemctl restart nfs
[root@mcwk8s-master ~]$ showmount -e 10.0.0.4
Export list for 10.0.0.4:
/nfsdata/pv1 *
/nfsdata *
Then on node1, after mounting again, files still cannot be written:
[root@mcwk8s-node1 ~]$ mount -t nfs 10.0.0.4:/nfsdata/pv1 /root/mcw/
[root@mcwk8s-node1 ~]$ df -h|grep mcw
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /root/mcw
[root@mcwk8s-node1 ~]$ touch /root/mcw/test.txt
touch: cannot touch ‘/root/mcw/test.txt’: Read-only file system
Modify the NFS config again: * stands for all client IPs, and the options in parentheses must follow it with no space in between; only then do they take effect.
[root@mcwk8s-master ~]$ vim /etc/exports
[root@mcwk8s-master ~]$ cat /etc/exports
/nfsdata *(rw,sync,all_squash)
/nfsdata/pv1 *(rw,sync,all_squash)
[root@mcwk8s-master ~]$ systemctl restart nfs
[root@mcwk8s-master ~]$ showmount -e 10.0.0.4
Export list for 10.0.0.4:
/nfsdata/pv1 *
/nfsdata *
[root@mcwk8s-master ~]$
Back on node1, the file can now be created successfully:
[root@mcwk8s-node1 ~]$ touch /root/mcw/test.txt
[root@mcwk8s-node1 ~]$ ls /root/mcw/
test.txt
[root@mcwk8s-node1 ~]$ df -h|grep mcw
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /root/mcw
The master node can also see the file:
[root@mcwk8s-master ~]$ ls /nfsdata/pv1/
test.txt
Writing a file from the container to the NFS server also works now:
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod1 1/1 Running 0 177m
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 ls /mydata
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
test.txt
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 touch /mydata/mcw.txt
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/
mcw.txt test.txt
On node1, 10.0.0.4:/nfsdata/pv1 is now mounted again; df only shows /root/mcw,
but the files on the NFS server can still be accessed through the earlier mypv1 mount point.
After umounting /root/mcw, the mypv1 mount shows up in df again and keeps working normally.
[root@mcwk8s-node1 ~]$ df -h|grep pv1
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /root/mcw
[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~nfs/mypv1
mcw.txt test.txt
[root@mcwk8s-node1 ~]$ unmout /root/mcw/
-bash: unmout: command not found
[root@mcwk8s-node1 ~]$ unmount /root/mcw/
-bash: unmount: command not found
[root@mcwk8s-node1 ~]$ unmount /root/mcw/
unalias unexpand unicode_stop unix2dos unix_chkpwd unlink unshare unxz
uname unicode_start uniq unix2mac unix_update unset until
[root@mcwk8s-node1 ~]$ umount /root/mcw/
[root@mcwk8s-node1 ~]$ df -h|grep pv1
10.0.0.4:/nfsdata/pv1 19G 3.9G 15G 22% /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~nfs/mypv1
[root@mcwk8s-node1 ~]$ ls /var/lib/kubelet/pods/d1ba5034-28ad-4898-a7aa-3551bc330a4b/volumes/kubernetes.io~nfs/mypv1
mcw.txt test.txt
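A side note (not part of the original troubleshooting): the options actually in effect can be checked on the NFS server with exportfs. With a space before the parentheses, the options no longer apply to the "*" entry, which falls back to the read-only defaults; that is what caused the Read-only file system errors above.

exportfs -v     # lists each export with its effective options; after the fix /nfsdata should show rw,sync,all_squash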

Reclaiming a PV

Reclaiming a PV in a way that deletes the NFS data (Recycle)

[machangwei@mcwk8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc1 Bound mypv1 1Gi RWO nfs 3h31m
[machangwei@mcwk8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Bound default/mypvc1 nfs 3h33m
[machangwei@mcwk8s-master ~]$ kubectl delete pvc mypvc1 # deleting the PVC hangs; its status changes to Terminating instead of Bound
persistentvolumeclaim "mypvc1" deleted ^C
[machangwei@mcwk8s-master ~]$ kubectl delete pvc mypvc1
persistentvolumeclaim "mypvc1" deleted
^C
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pod # after the Pod is deleted the PVC deletion completes; the usual order is: delete the Pod, then the PVC, then the PV
NAME READY STATUS RESTARTS AGE
mypod1 1/1 Running 0 3h28m
[machangwei@mcwk8s-master ~]$ kubectl delete pod mypod1
pod "mypod1" deleted
[machangwei@mcwk8s-master ~]$ kubectl get pvc
No resources found in default namespace.
[machangwei@mcwk8s-master ~]$ kubectl get pod # deleting the PVC created a recycler Pod to scrub the PV
NAME READY STATUS RESTARTS AGE
recycler-for-mypv1 0/1 ContainerCreating 0 33s
[machangwei@mcwk8s-master ~]$ kubectl get pv # the status is now Available, so the PV can be bound by another PVC; while the PVC was being deleted the PV was Released and could not be bound
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Available nfs 3h41m
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/ # after deleting the PVC the PV is Available again, but the data this PV
[machangwei@mcwk8s-master ~]$ # stored on the NFS server has been wiped. That is what the Recycle reclaim policy does, so before
[machangwei@mcwk8s-master ~]$ # deleting a PVC, always check which reclaim policy the PV uses, to avoid deleting data you meant to keep.
[machangwei@mcwk8s-master ~]$ # To delete the PVC without deleting the data on the external NFS storage, use the Retain reclaim policy.
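As an aside, the reclaim policy of an existing PV can also be changed in place with kubectl patch instead of editing and re-applying the YAML (a sketch; the next section does it by editing nfs-pv1.yml):

kubectl patch pv mypv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv mypv1    # the RECLAIM POLICY column should now show Retain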

Reclaiming a PV without deleting the data on the NFS server (Retain)

[machangwei@mcwk8s-master ~]$ cat nfs-pvc1.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[machangwei@mcwk8s-master ~]$ cat nfs-pv1.yml # change the PV's reclaim policy to Retain so that deleting the PVC no longer deletes the data on the NFS server
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 10.0.0.4
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pv1.yml
persistentvolume/mypv1 configured
[machangwei@mcwk8s-master ~]$ kubectl get pv # the reclaim policy has been updated
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Retain Available nfs 3h56m
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pvc1.yml
persistentvolumeclaim/mypvc1 created
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc1 Bound mypv1 1Gi RWO nfs 10s
[machangwei@mcwk8s-master ~]$ cat pod1.yml
kind: Pod
apiVersion: v1
metadata:
  name: mypod1
spec:
  containers:
    - name: mypod1
      image: busybox
      args:
        - /bin/sh
        - -c
        - sleep 30000
      volumeMounts:
        - mountPath: "/mydata"
          name: mydata
  volumes:
    - name: mydata
      persistentVolumeClaim:
        claimName: mypvc1
[machangwei@mcwk8s-master ~]$ kubectl apply -f pod1.yml # deploy the Pod
pod/mypod1 created
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod1 1/1 Running 0 3m16s 10.244.2.16 mcwk8s-node2 <none> <none>
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 touch ls /mydata
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/ # the NFS server side has no data yet
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 touch /mydata/mcw.txt # kubectl exec <pod> followed by a command runs that command inside the container
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/ # the file created inside the container now exists on the NFS server
mcw.txt
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc1 Bound mypv1 1Gi RWO nfs 7m10s
[machangwei@mcwk8s-master ~]$ kubectl delete pvc mypvc1 # deleting the PVC hangs for a long time; open another shell session and delete the Pod that uses this PVC
[machangwei@mcwk8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Retain Bound default/mypvc1 nfs 4h5m
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod1 1/1 Running 0 8m31s
[machangwei@mcwk8s-master ~]$ kubectl delete pod mypod1 # once the Pod is deleted, the PVC deletion completes immediately; the PV is no longer Bound but Released for now
pod "mypod1" deleted
[machangwei@mcwk8s-master ~]$ # after the Pod was deleted, the PVC was deleted as well; it seems a PVC still in use by a Pod cannot finish deleting until the Pod is gone
[machangwei@mcwk8s-master ~]$ kubectl delete pvc mypvc1
persistentvolumeclaim "mypvc1" deleted
[machangwei@mcwk8s-master ~]$ kubectl get pv # after the PVC is deleted the PV stays Released; only an Available PV can be claimed again by a new PVC
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Retain Released default/mypvc1 nfs 4h7m
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/ # even though the PVC was deleted, the data on the external NFS storage still exists
mcw.txt
[machangwei@mcwk8s-master ~]$

Reusing the data in the PV

[machangwei@mcwk8s-master ~]$ # with the Retain reclaim policy, the PV is Released and its PVC no longer exists,
[machangwei@mcwk8s-master ~]$ kubectl get pv # but we still want to reuse the data underneath the PV
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Retain Released default/mypvc1 nfs 4h16m
[machangwei@mcwk8s-master ~]$ kubectl get pvc
No resources found in default namespace.
[machangwei@mcwk8s-master ~]$ kubectl delete pv mypv1 # so the PV has to be deleted and recreated
persistentvolume "mypv1" deleted
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/ # deleting the PV in this case does not delete the external data
mcw.txt
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pv1.yml # then redeploy the PV, PVC, and Pod
persistentvolume/mypv1 created
[machangwei@mcwk8s-master ~]$ kubectl get pv # the PV is recreated and Available, ready to be claimed by a PVC
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Retain Available nfs 12s
[machangwei@mcwk8s-master ~]$ kubectl apply -f nfs-pvc1.yml
persistentvolumeclaim/mypvc1 created
[machangwei@mcwk8s-master ~]$ kubectl apply -f pod1.yml
pod/mypod1 created
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod1 1/1 Running 0 4m26s
[machangwei@mcwk8s-master ~]$ kubectl exec mypod1 ls /mydata # the container in the new Pod sees the old data again, i.e. the data is preserved in this scenario
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
mcw.txt
[machangwei@mcwk8s-master ~]$ ls /nfsdata/pv1/
mcw.txt
[machangwei@mcwk8s-master ~]$
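An alternative worth knowing (a sketch, not something done in this article): instead of deleting and recreating the PV, the stale claim reference on a Released PV can be cleared, which moves it back to Available so a new PVC can bind it while the data stays untouched:

kubectl patch pv mypv1 -p '{"spec":{"claimRef": null}}'
kubectl get pv mypv1    # STATUS should change from Released to Available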

Dynamic PV provisioning

AWS EBS supports dynamic PV provisioning; to be covered later.

https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner
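With dynamic provisioning there is no need to create PVs by hand: a StorageClass describes how to provision them, and any PVC that references the class gets a PV created automatically. A sketch for the in-tree AWS EBS provisioner (the class name is just an example) looks like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-standard               # example name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete

A PVC would then set storageClassName: ebs-standard instead of nfs, and the PV is provisioned when the claim is created.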

MySQL persistent-volume example (NFS)

The deployment manifests

[machangwei@mcwk8s-master ~]$ cat mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 10.0.0.4
[machangwei@mcwk8s-master ~]$ cat mysql-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[machangwei@mcwk8s-master ~]$ cat mysql.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
[machangwei@mcwk8s-master ~]$

The manifests explained

[machangwei@mcwk8s-master ~]$ cat mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain   # with the Retain reclaim policy, the data is not cleared even if the PV and PVC are deleted
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 10.0.0.4
[machangwei@mcwk8s-master ~]$ cat mysql-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[machangwei@mcwk8s-master ~]$ cat mysql.yml
apiVersion: v1
kind: Service                            # a Service resource
metadata:
  name: mysql                            # the metadata name is the Service's name
spec:                                    # the Service spec
  ports:                                 # the port the Service exposes
  - port: 3306
  selector:                              # the selector: the Service targets Pods labeled app: mysql (the label is set in the Pod template below)
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:                              # the Deployment manages Pods whose labels match app: mysql
    matchLabels:
      app: mysql
  template:                              # the Pod template
    metadata:                            # the template's metadata labels the Pod app: mysql; the Pod spec follows
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6                 # which image the container uses
        name: mysql                      # the container's name
        env:
        - name: MYSQL_ROOT_PASSWORD      # container environment variables; this sets the MySQL root password
          value: password
        ports:
        - containerPort: 3306            # the container port and its name
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage # volumeMounts for this container, referencing the volume defined below
          mountPath: /var/lib/mysql      # the mount directory inside the container
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
[machangwei@mcwk8s-master ~]$

Deploying the PV, PVC, Service, and Deployment

[machangwei@mcwk8s-master ~]$ kubectl apply -f  mysql-pv.yml
persistentvolume/mysql-pv unchanged
[machangwei@mcwk8s-master ~]$ kubectl apply -f mysql-pvc.yml
persistentvolumeclaim/mysql-pvc created
[machangwei@mcwk8s-master ~]$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/mysql-pv 1Gi RWO Retain Bound default/mysql-pvc nfs 4m22s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-pvc Bound mysql-pv 1Gi RWO nfs 11s
[machangwei@mcwk8s-master ~]$ kubectl apply -f mysql.yml
service/mysql unchanged
deployment.apps/mysql created
[machangwei@mcwk8s-master ~]$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
mysql ClusterIP 10.97.19.121 <none> 3306/TCP 9m22s
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-dbffc69d-trvvb 0/1 ContainerCreating 0 8m25s
It is stuck: the /nfsdata/mysql-pv directory does not exist on the NFS server, so the volume cannot be mounted (and then cannot be cleanly unmounted either):
Normal Scheduled 8m7s default-scheduler Successfully assigned default/mysql-dbffc69d-trvvb to mcwk8s-node2
Warning FailedMount <invalid> (x11 over <invalid>) kubelet MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 10.0.0.4:/nfsdata/mysql-pv /var/lib/kubelet/pods/6392d0d9-15be-4ba9-a338-61f290e6f999/volumes/kubernetes.io~nfs/mysql-pv
Output: mount.nfs: mounting 10.0.0.4:/nfsdata/mysql-pv failed, reason given by server: No such file or directory
Warning FailedMount <invalid> (x3 over <invalid>) kubelet Unable to attach or mount volumes: unmounted volumes=[mysql-persistent-storage], unattached volumes=[mysql-persistent-storage kube-api-access-v6lfr]: timed out waiting for the condition
Deleting this Pod also hung. I went to the node the Pod was on, manually ran umount on the stuck mount directory, and the Pod then deleted; a new Pod was created afterwards and ran (the /nfsdata/mysql-pv directory also has to exist on the NFS server, e.g. created with mkdir, before the mount can succeed).
[machangwei@mcwk8s-master ~]$ kubectl delete pod mysql-dbffc69d-trvvb
pod "mysql-dbffc69d-trvvb" deleted

Redeploying afterwards

[machangwei@mcwk8s-master ~]$ ls
mysql-pvc.yml mysql-pv.yml mysql.yml
[machangwei@mcwk8s-master ~]$ kubectl apply -f mysql-pv.yml
persistentvolume/mysql-pv created
[machangwei@mcwk8s-master ~]$ kubectl apply -f mysql-pvc.yml
persistentvolumeclaim/mysql-pvc created
[machangwei@mcwk8s-master ~]$ kubectl apply -f mysql.yml
service/mysql created
deployment.apps/mysql created
An error occurs:
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/mysql-dbffc69d-292lp to mcwk8s-node2
Normal Pulling 13m kubelet Pulling image "mysql:5.6"
Normal Pulled 10m kubelet Successfully pulled image "mysql:5.6" in 2m17.834341538s
Normal Created 9m24s (x5 over 10m) kubelet Created container mysql
Normal Started 9m24s (x5 over 10m) kubelet Started container mysql
Normal Pulled 9m24s (x4 over 10m) kubelet Container image "mysql:5.6" already present on machine
Warning BackOff 3m13s (x37 over 10m) kubelet Back-off restarting failed container
[root@mcwk8s-node2 ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
75829eceaae7 dd3b2a5dcb48 "docker-entrypoint.s…" About a minute ago Exited (1) About a minute ago k8s_mysql_mysql-dbffc69d-292lp_default_e9825a68-47b0-471d-8280-b95bd51c4068_6
On the node, the MySQL container log shows the error:
[root@mcwk8s-node2 ~]$ docker logs 758
2022-02-19 11:16:19+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.51-1debian9 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
Fix: add no_root_squash to the NFS export options.
[root@mcwk8s-master ~]$ cat /etc/exports
/nfsdata *(rw,sync)
[root@mcwk8s-master ~]$ vim /etc/exports
[root@mcwk8s-master ~]$ cat /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@mcwk8s-master ~]$ systemctl restart nfs
[root@mcwk8s-master ~]$ showmount -e localhost
Export list for localhost:
/nfsdata *
[root@mcwk8s-master ~]$
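For reference, no_root_squash means that root on the NFS client (here, the MySQL image's entrypoint running as root inside the container) keeps root privileges on the export instead of being squashed to an anonymous user, which is why the chown of /var/lib/mysql can now succeed. Also, after editing /etc/exports the export table can usually be reloaded without restarting the whole service:

exportfs -ra    # re-export everything listed in /etc/exports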

Connect to MySQL and create some data

[machangwei@mcwk8s-master ~]$ kubectl run -it  --rm --image=mysql:5.6 --restart=Never mysql-mcwclient -- mysql -h 10.103.171.207  -P3306 -ppassword
If you don't see a command prompt, try pressing enter.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.03 sec)
mysql> create database mcwtest;
Query OK, 1 row affected (0.08 sec)
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mcwtest |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.03 sec)
mysql> use mcwtest;
Database changed
mysql> create table my_id(id int(4))
-> ;
Query OK, 0 rows affected (0.25 sec)
mysql> insert my_i values(111);
ERROR 1146 (42S02): Table 'mcwtest.my_i' doesn't exist
mysql> insert my_id values(111);
Query OK, 1 row affected (0.05 sec)
mysql> select * from my_id;
+------+
| id |
+------+
| 111 |
+------+
1 row in set (0.00 sec)
mysql> \q
Bye
pod "mysql-mcwclient" deleted
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-dbffc69d-jhvvz 1/1 Running 0 27m 10.244.1.4 mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$
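A small note: instead of the Service's cluster IP, the Service name can normally be used, since cluster DNS resolves mysql (in the default namespace) to that IP, e.g.:

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-mcwclient -- mysql -h mysql -ppassword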

Shut down node1 to simulate a failure

[root@mcwk8s-node1 ~]$ shutdown now

Connection closed by foreign host.

Disconnected from remote host(mcw05) at 20:19:00.

Type `help' to learn how to use Xshell prompt.
[c:\~]$

Verify service failover

[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-dbffc69d-jhvvz 1/1 Running 0 38m 10.244.1.4 mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$ kubectl describe pod mysql-dbffc69d-jhvvz
.....
Warning NodeNotReady 111s node-controller Node is not ready
Warning NodeNotReady 5m29s node-controller Node is not ready
[machangwei@mcwk8s-master ~]$ # once the node has been NotReady for roughly 5 minutes, a new mysql Pod is started and scheduled onto an available node.
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-dbffc69d-ch2vg 1/1 Running 0 2m9s 10.244.2.18 mcwk8s-node2 <none> <none>
mysql-dbffc69d-jhvvz 1/1 Terminating 0 43m 10.244.1.4 mcwk8s-node1 <none> <none>
[machangwei@mcwk8s-master ~]$ # connect with the same command as before; everything after -- is the client command: -h gives the Service's cluster IP, plus the port and password
[machangwei@mcwk8s-master ~]$ kubectl get service # --rm deletes the client Pod after exit
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
mysql ClusterIP 10.103.171.207 <none> 3306/TCP 79m
[machangwei@mcwk8s-master ~]$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-mcwclient -- mysql -h 10.103.171.207 -P3306 -ppassword
If you don't see a command prompt, try pressing enter. # press Enter to reach the MySQL prompt; there is no point just waiting
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mcwtest |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.02 sec)
mysql> use mcwtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+-------------------+
| Tables_in_mcwtest |
+-------------------+
| my_id |
+-------------------+
1 row in set (0.01 sec)
mysql> select * from my_id;
+------+
| id |
+------+
| 111 |
+------+
1 row in set (0.02 sec)
mysql> \q
Bye
pod "mysql-mcwclient" deleted 到root下查看数据库的数据存放地址,也就是nfs服务端上。我们设置了即使删除pv,pvc也是不删除数据的。如上可知,当数据库所在主机down之后,那么过一段时间,会重新从其它节点上运行新的pod,旧的pod还存在,只是状态是终止。
[root@mcwk8s-master ~]$
[root@mcwk8s-master ~]$ ls /nfsdata/mysql-pv/
auto.cnf ibdata1 ib_logfile0 ib_logfile1 mcwtest mysql performance_schema
[root@mcwk8s-master ~]$ ls /nfsdata/mysql-pv/mcwtest/
db.opt my_id.frm my_id.ibd
[root@mcwk8s-master ~]$ ls /nfsdata/mysql-pv/mcwtest/ -lh
total 112K
-rw-rw----. 1 polkitd ssh_keys 65 Feb 19 20:08 db.opt
-rw-rw----. 1 polkitd ssh_keys 8.4K Feb 19 20:09 my_id.frm
-rw-rw----. 1 polkitd ssh_keys 96K Feb 19 20:09 my_id.ibd
[root@mcwk8s-master ~]$ ls /nfsdata/mysql-pv/ -lh
total 109M
-rw-rw----. 1 polkitd ssh_keys 56 Feb 19 19:23 auto.cnf
-rw-rw----. 1 polkitd ssh_keys 12M Feb 19 20:24 ibdata1
-rw-rw----. 1 polkitd ssh_keys 48M Feb 19 20:24 ib_logfile0
-rw-rw----. 1 polkitd ssh_keys 48M Feb 19 19:21 ib_logfile1
drwx------. 2 polkitd ssh_keys 54 Feb 19 20:09 mcwtest
drwx------. 2 polkitd ssh_keys 4.0K Feb 19 19:25 mysql
drwx------. 2 polkitd ssh_keys 4.0K Feb 19 19:21 performance_schema

Reference book: 每天5分钟玩转Kubernetes (5 Minutes a Day to Master Kubernetes) --- cloudman
