1. Introduction to PV and PVC

1.1 Relationship between StorageClass, PV, and PVC

  • Volumes are the most basic storage abstraction. They support many types, including local storage, NFS, FC, and a wide range of cloud storage, and you can also write your own storage plugin to support a particular storage system. A Volume can be consumed directly by a Pod or referenced by a PV. An ordinary Volume is statically bound to its Pod: the storage type is declared with the volumes attribute when the Pod is defined, and the mount point inside the container is declared with volumeMounts.

  • PersistentVolume (PV): unlike an ordinary Volume, a PV is a resource object in Kubernetes. Creating a PV is equivalent to creating a storage resource object, and that resource is consumed by requesting it through a PVC.

    A PV is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but their lifecycle is independent of any individual Pod that uses them. This API object captures the details of the storage implementation, whether NFS, iSCSI, or a cloud-provider-specific storage system.

  • PersistentVolumeClaim (PVC): a user's request for PV storage. Kubernetes looks for a PV in the cluster that satisfies the conditions specified in the PVC and binds the two.

    A PVC is a request for storage by a user. It is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access mode (for example, mounted read/write once or read-only many times).

    A PVC can currently be matched to a PV in three ways: by storageClassName, matchLabels, or matchExpressions (see the sketch after this list).

  • StorageClass: a storage class. Kubernetes currently supports many storage backends, such as Ceph, NFS, and GlusterFS. A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators.
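
To make the matching options above concrete, here is a minimal sketch of a StorageClass and of a PVC that binds by label selector. The class name, the provisioner, and the labels are illustrative assumptions only and are not used anywhere else in this article.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs              # illustrative class name (assumption)
provisioner: example.com/nfs     # assumes an external NFS provisioner is deployed in the cluster
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-by-label
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""           # empty string: skip dynamic provisioning and match static PVs only
  selector:
    matchLabels:
      app: demo                  # binds only to PVs that carry this label
  resources:
    requests:
      storage: 5Gi

A claim that instead set storageClassName: example-nfs would be provisioned dynamically by the class above.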

1.2 Lifecycle

PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks on them. The interaction between a PV and a PVC follows this lifecycle:

Provisioning -> Binding -> Using -> Releasing -> Recycling

  • Provisioning: persistent storage is supplied by a storage system outside the cluster or by a cloud platform. There are two modes:

    • Static: the cluster administrator creates a number of PVs up front. They carry the details of the real storage available to cluster users, exist in the Kubernetes API, and are ready to be consumed.
    • Dynamic: when none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to provision a volume dynamically for the PVC. This provisioning is based on StorageClasses: the PVC must request a class, and the administrator must have created and configured that class for dynamic provisioning to happen. A claim that requests the class "" effectively disables dynamic provisioning for itself.
  • Binding: the user creates a PVC specifying the amount of storage and the access mode required. Until a matching PV is found, the PVC remains unbound.

  • Using: the user consumes the PVC in a Pod just like a volume.

  • Releasing: the user deletes the PVC to release the storage, and the PV moves to the "Released" state. Because it still holds the previous data, that data must be handled according to the reclaim policy before the storage can be used by another PVC.

  • Reclaiming: a PV can carry one of three reclaim policies: Retain, Recycle, or Delete (a sketch showing how the policy is set follows the note below).

    • Retain: the data is left in place to be handled manually.
    • Delete: the PV and the external storage it points to are deleted; requires plugin support.
    • Recycle: the volume is scrubbed and then becomes available to a new PVC; requires plugin support.

Note: currently only NFS and HostPath volumes support the Recycle policy, while AWS EBS, GCE PD, Azure Disk, and Cinder volumes support Delete.
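
The reclaim policy is set per PV through spec.persistentVolumeReclaimPolicy. Below is a minimal sketch on NFS; the server and path are placeholders and are not part of the deployment later in this article. The PVs created in section 1.3 omit the field, so they keep the default for manually created PVs, Retain, which is exactly what the RECLAIM POLICY column shows in section 1.3.3.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-example                 # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # one of Retain | Recycle | Delete
  nfs:
    server: 192.168.2.10                  # placeholder NFS server
    path: /data/nfs_data/example          # placeholder export path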

References:

https://zhuanlan.zhihu.com/p/299242718

https://www.cnblogs.com/along21/p/10342788.html

1.3 Statically provisioned PV/PVC

With static provisioning the PVs must be created in advance, and each PVC below explicitly names the PV it maps to via volumeName; otherwise the claim cannot bind.

The example below uses NFS as the persistent backend and creates static PVs/PVCs for the /data and /datalog directories of a 3-node zookeeper cluster.

1.3.1 Deploy the NFS server

# Install the server
apt install nfs-kernel-server

# Export configuration (typically /etc/exports)
/data/nfs_data *(rw,sync,no_root_squash)
/data/k8s-data *(rw,sync,no_root_squash)

# Start the service and enable it at boot
systemctl restart nfs-kernel-server && systemctl enable nfs-kernel-server

# Check the exported directories
showmount -e
Export list for k8-deploy:
/data/k8s-data *
/data/nfs_data *

# Install the NFS client on the nodes
apt install nfs-common

# Test mounting from a client
mount -t nfs 192.168.2.10:/data/k8s-data /mnt/

1.3.2 pv yaml

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-datadir-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-3

1.3.3 Create the PVs

# Create the directories used by the PVs on the NFS server
mkdir -p /data/k8s-data/zookeeper/datadir-pv-{1..3}
mkdir -p /data/k8s-data/zookeeper/datalog-pv-{1..3}

# Create the PVs in the k8s cluster
root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl apply -f zookeeper-datadir-pv.yml
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created
persistentvolume/zookeeper-datalog-pv-1 created
persistentvolume/zookeeper-datalog-pv-2 created
persistentvolume/zookeeper-datalog-pv-3 created

# Check that the PVs were created
root@k8-deploy:~# kubectl get pv -n zk-ns
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-1                           20s
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-2                           20s
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-3                           20s
zookeeper-datalog-pv-1   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-1                           20s
zookeeper-datalog-pv-2   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-2                           20s
zookeeper-datalog-pv-3   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-3                           20s

1.3.4 pvc yaml

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-datadir-pvc.yml
apiVersion: v1
kind: Namespace
metadata:
  name: zk-ns
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-1
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-2
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-3
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-3
  resources:
    requests:
      storage: 10Gi

1.3.5 Create the PVCs

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl apply -f zookeeper-datadir-pvc.yml
namespace/zk-ns created
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created
persistentvolumeclaim/zookeeper-datalog-pvc-1 created
persistentvolumeclaim/zookeeper-datalog-pvc-2 created
persistentvolumeclaim/zookeeper-datalog-pvc-3 created

# Check that the PVCs were created and bound
root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get pvc -A
NAMESPACE   NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zk-ns       zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           14s
zk-ns       zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           14s
zk-ns       zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-1   Bound    zookeeper-datalog-pv-1   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-2   Bound    zookeeper-datalog-pv-2   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-3   Bound    zookeeper-datalog-pv-3   20Gi       RWO                           14s

2. Build a zookeeper cluster using PV/PVC as its persistent storage

2.1 ZooKeeper overview

ZooKeeper is a classic solution for distributed data consistency. Distributed applications can build features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, distributed locks, and distributed queues on top of it.

One of the most common use cases for ZooKeeper is acting as a registry for service producers and service consumers.

A service producer registers the service it provides with ZooKeeper; when a consumer makes a call, it first looks the service up in ZooKeeper, obtains the producer's details, and then calls the producer directly, as sketched below.
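
As a rough sketch of that register/discover pattern with the stock ZooKeeper command-line client; the znode paths and the address are made up for illustration and have nothing to do with the deployment that follows:

# inside any cluster member: bin/zkCli.sh -server 127.0.0.1:2181
# parent znodes must exist first (create is not recursive in 3.4.x)
create /services ""
create /services/demo ""
# the producer registers itself as an ephemeral znode that disappears if it goes offline
create -e /services/demo/instance-1 "10.0.0.5:8080"
# a consumer lists the available providers, reads the address, then calls the producer directly
ls /services/demo
get /services/demo/instance-1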

2.2 Prepare the zookeeper docker image

# Pull the image (version 3.4.14) from Docker Hub
docker pull zookeeper:3.4.14

# Re-tag it and push it to the local harbor registry
docker tag zookeeper:3.4.14 192.168.1.110/zookeeper/zookeeper:3.4.14
docker push 192.168.1.110/zookeeper/zookeeper:3.4.14

2.3 k8s zookeeper cluster yaml

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-cluster.yml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: zk-ns
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper1
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "1"
            - name: SERVERS
              value: "zookeeper1"
            - name: ZOO_SERVERS
              value: server.1=0.0.0.0:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/data"
              name: zookeeper-datadir-pvc-1
            - mountPath: "/datalog"
              name: zookeeper-datalog-pvc-1
      volumes:
        - name: zookeeper-datadir-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
        - name: zookeeper-datalog-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper2
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "2"
            - name: SERVERS
              value: "zookeeper2"
            - name: ZOO_SERVERS
              value: server.1=zookeeper1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper3:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/data"
              name: zookeeper-datadir-pvc-2
            - mountPath: "/datalog"
              name: zookeeper-datalog-pvc-2
      volumes:
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
        - name: zookeeper-datalog-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper3
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "3"
            - name: SERVERS
              value: "zookeeper3"
            - name: ZOO_SERVERS
              value: server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=0.0.0.0:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/data"
              name: zookeeper-datadir-pvc-3
            - mountPath: "/datalog"
              name: zookeeper-datalog-pvc-3
      volumes:
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3
        - name: zookeeper-datalog-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-3

2.4 Create the zookeeper cluster from the yaml

# kubectl apply -f zookeeper-cluster.yml

2.5 Check that the service, pod, and deployment objects were created

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get svc -n zk-ns
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                        AGE
zookeeper    ClusterIP   10.0.225.221   <none>        2181/TCP                                       21h
zookeeper1   NodePort    10.0.198.188   <none>        2181:42181/TCP,2888:30109/TCP,3888:50986/TCP   21h
zookeeper2   NodePort    10.0.38.33     <none>        2181:42182/TCP,2888:48760/TCP,3888:57228/TCP   21h
zookeeper3   NodePort    10.0.164.42    <none>        2181:42183/TCP,2888:41871/TCP,3888:37373/TCP   21h

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get pod -n zk-ns
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-5668b88966-plks2   1/1     Running   0          21h
zookeeper2-978dfb9cb-bz67x    1/1     Running   0          21h
zookeeper3-69c77fdcc4-djfs6   1/1     Running   0          21h

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get deploy -n zk-ns
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
zookeeper1   1/1     1            1           21h
zookeeper2   1/1     1            1           21h
zookeeper3   1/1     1            1           21h

2.6 Check the zookeeper cluster status

# kubectl exec zookeeper1-5668b88966-plks2 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

# kubectl exec zookeeper2-978dfb9cb-bz67x -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

# kubectl exec zookeeper3-69c77fdcc4-djfs6 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

2.7 Test that leader election works after a node failure

# Delete the pod that currently holds the leader role
root@k8-deploy:~# kubectl delete pod zookeeper2-978dfb9cb-bz67x -n zk-ns
pod "zookeeper2-978dfb9cb-bz67x" deleted

# The pod is recreated automatically
root@k8-deploy:~# kubectl get pod -n zk-ns
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-5668b88966-plks2   1/1     Running   0          21h
zookeeper2-978dfb9cb-jmxr5    1/1     Running   0          21s
zookeeper3-69c77fdcc4-djfs6   1/1     Running   0          21h

# Check the election again: the leader is now zookeeper3
root@k8-deploy:~# kubectl exec zookeeper1-5668b88966-plks2 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

root@k8-deploy:~# kubectl exec zookeeper2-978dfb9cb-jmxr5 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

root@k8-deploy:~# kubectl exec zookeeper3-69c77fdcc4-djfs6 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

2.8 Connect to the zookeeper cluster with the ZooInspector tool

Log in to a Windows machine on the same network segment as the k8s nodes and access the zookeeper cluster through the NodePort ports exposed on the node IPs.
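
Any host that can reach a node IP can run the same kind of check with the stock ZooKeeper client; <node-ip> below is a placeholder for the address of any k8s node, and 42181 is the NodePort mapped to zookeeper1's client port in the Service above.

# connect through the NodePort and list the root znode
bin/zkCli.sh -server <node-ip>:42181
ls /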

2.9 Verify that zookeeper data is stored correctly in the NFS-backed PV directories

# Look at the PV directories on the NFS server
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/data*
/data/k8s-data/zookeeper/datadir-pv-1:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datadir-pv-2:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datadir-pv-3:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datalog-pv-1:
total 4
drwxr-xr-x 2 user user 4096 Oct 18 16:25 version-2

/data/k8s-data/zookeeper/datalog-pv-2:
total 4
drwxr-xr-x 2 user user 4096 Oct 19 14:12 version-2

/data/k8s-data/zookeeper/datalog-pv-3:
total 4
drwxr-xr-x 2 user user 4096 Oct 18 16:25 version-2

# Check the server IDs generated for each cluster member
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-1/myid
1
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-2/myid
2
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-3/myid
3

# Check that the transaction logs are being written to the storage
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-1/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 14:14 /data/k8s-data/zookeeper/datalog-pv-1/version-2/log.200000001
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-2/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 09:04 /data/k8s-data/zookeeper/datalog-pv-2/version-2/log.200000001
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-3/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 14:14 /data/k8s-data/zookeeper/datalog-pv-3/version-2/log.200000001
