Using Ceph RBD as Backend Storage for Kubernetes

Kubernetes offers three main storage abstractions: volumes, persistent volumes, and dynamic volume provisioning.

  • Volume: a volume is mounted directly into a pod; every other storage component in k8s reaches a pod through a volume. A volume has a type attribute that determines what kind of storage gets mounted, e.g. emptyDir, hostPath, nfs, rbd, and the persistentVolumeClaim type discussed below. Unlike a Docker volume, whose lifecycle is tied tightly to the container, the lifecycle here depends on the type: an emptyDir volume behaves like a Docker volume and disappears when the pod dies, while the other types persist. See the Volumes documentation for details.
  • Persistent Volumes: as the name implies, this component supports persistent storage. It abstracts both the backend storage provider (the volume type above) and the consumer (the specific pod that uses it) through two objects. A PersistentVolume (PV) is a piece of storage provided by the backend; in Ceph RBD terms, one PV is one image. A PersistentVolumeClaim (PVC) is a user's request for a PV: the PVC binds to some PV, and a pod then mounts the PVC in its volumes section, thereby mounting the bound PV. For details such as the PV/PVC lifecycle, see the Persistent Volumes documentation.
  • Dynamic Volume Provisioning: with plain Persistent Volumes we must first create a storage block (e.g. a Ceph image) and bind it to a PV before it can be used. That static binding is rigid: every storage request means asking the storage provider for another block. Dynamic Volume Provisioning solves this by introducing the StorageClass concept: a StorageClass abstracts the storage provider, so a PVC only needs to name a StorageClass and a size, and the provider creates the block on demand. We can even designate a default StorageClass, so creating a PVC alone is enough; see the sketch after this list.
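
As a minimal sketch of dynamic provisioning with the in-tree RBD provisioner (the StorageClass name ceph-rbd and the PVC name dynamic-pvc are assumptions for illustration; the monitor, pool, and secret names reuse values from the static setup below):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd                    # assumed name
provisioner: kubernetes.io/rbd      # the in-tree RBD provisioner
parameters:
  monitors: 172.16.143.121:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd_data
  userId: admin
  userSecretName: ceph-secret
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-pvc                 # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd        # on very old clusters use the volume.beta.kubernetes.io/storage-class annotation instead
  resources:
    requests:
      storage: 1Gi

With this in place the provisioner creates the RBD image and the PV automatically when the PVC is created.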

PV access modes

  • ReadWriteOnce: read-write, but the volume can be mounted by only a single node.
  • ReadOnlyMany: read-only; the volume can be mounted read-only by many nodes.
  • ReadWriteMany: read-write; the volume can be shared read-write by many nodes.

PV reclaim policies

  • Retain – manual reclamation
  • Recycle – basic scrub ("rm -rf /thevolume/*")
  • Delete – the associated backend storage volume is deleted as well (backends such as AWS EBS, GCE PD, or OpenStack Cinder)

In the CLI, the access modes are abbreviated as:

RWO – ReadWriteOnce

ROX – ReadOnlyMany

RWX – ReadWriteMany

As currently defined, all three modes apply at the node level. For a given PersistentVolume, RWO means it can be mounted on only one Kubernetes worker node (hereafter "node"); attempting to mount it on another node produces a Multi-Attach error. (Of course, with only one schedulable node, even an RWO volume can be used by several pods at once, but who would run that way outside development and testing?) RWX, by contrast, can be mounted on multiple nodes simultaneously and used by different pods.

The official support matrix is here:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

So with RBD we cannot use the same volume read-write across multiple nodes.

Workarounds:

Option 1: use a NodeSelector label to restrict the pod to a designated node (which makes it effectively single-machine); see the sketch below.

Option 2: skip k8s and map the RBD image directly on a single Linux machine (also single-machine).
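
For option 1, a minimal sketch of the nodeSelector approach (the pod name and the hostname label value k8s-node1 are assumptions; substitute your own node):

apiVersion: v1
kind: Pod
metadata:
  name: rbd-pinned-pod                    # assumed name
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1     # assumed label value; pins the pod to one node
  containers:
  - name: app
    image: 172.16.143.107:5000/php-fpm:v2019120205

Because every pod that uses the RBD-backed PVC lands on the same node, the RWO restriction never triggers.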

Getting started with RBD

Every k8s node must have the Ceph client, ceph-common, installed before it can use Ceph.

[root@k8s-node1 yum.repos.d]# cd /etc/yum.repos.d
[root@k8s-node1 yum.repos.d]# pwd
/etc/yum.repos.d
# The Ceph repo here should match the version the Ceph cluster itself runs, so the client version stays consistent
[root@k8s-node1 yum.repos.d]# vim ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
# Install ceph-common
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common

Note: dependency errors

# If yum reports the following while installing ceph-common, install epel-release first
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common
...
---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
--> Finished Dependency Resolution
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
Requires: libleveldb.so.1()(64bit)
Error: Package: 2:librados2-12.2.12-0.el7.x86_64 (Ceph)
Requires: liblttng-ust.so.0()(64bit)
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
Requires: libbabeltrace-ctf.so.1()(64bit)
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
Requires: libbabeltrace.so.1()(64bit)
Error: Package: 2:librbd1-12.2.12-0.el7.x86_64 (Ceph)
Requires: liblttng-ust.so.0()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
# Install epel-release
[root@k8s-node1 yum.repos.d]# yum install epel-release
# Now ceph-common installs cleanly
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common
[root@k8s-node1 yum.repos.d]# ceph --version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Every k8s node must have the Ceph client installed; otherwise mounting will fail.

Install it on k8s-master as well:

[root@k8s-master yum.repos.d]# cd /etc/yum.repos.d
[root@k8s-master yum.repos.d]# yum -y install ceph-common
[root@k8s-master yum.repos.d]# ceph --version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Ceph configuration (on the ceph-admin node)

# Create the storage pool rbd_data (our test environment already has it, so this is skipped)
[cephfsd@ceph-admin ceph]$ ceph osd pool create rbd_data 64 64
# Create the rbd image (the image format is already set in our earlier configuration, so it isn't specified explicitly here)
[cephfsd@ceph-admin ceph]$ rbd create rbd_data/filecenter_image --size=10G
# Map the rbd image
[cephfsd@ceph-admin ceph]$ sudo rbd map rbd_data/filecenter_image
/dev/rbd4
# Inspect the image
[cephfsd@ceph-admin ceph]$ rbd info rbd_data/filecenter_image
rbd image 'filecenter_image':
size 10GiB in 2560 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.376b6b8b4567
format: 2
features: layering
flags:
create_timestamp: Sat Dec 7 17:37:41 2019
[cephfsd@ceph-admin ceph]$
# Create the secret:
# Since the cluster has cephx authentication enabled, a secret resource must be created before the PV; k8s secrets store the key base64-encoded (encoding, not encryption)
# Extract the key on a ceph monitor:
# Generate the base64-encoded key
[cephfsd@ceph-admin ceph]$ cat ceph.client.admin.keyring
[client.admin]
key = AQBIH+ld1okAJhAAmULVJM4zCCVAK/Vdi3Tz5Q==
[cephfsd@ceph-admin ceph]$ ceph auth get-key client.admin | base64
QVFCSUgrbGQxb2tBSmhBQW1VTFZKTTR6Q0NWQUsvVmRpM1R6NVE9PQ==
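
# Instead of handing k8s the client.admin key, a dedicated least-privilege Ceph user
# could be created for the pool (a sketch; the name client.kube and its capability
# string are assumptions, following the usual RBD pattern):
[cephfsd@ceph-admin ceph]$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd_data'
[cephfsd@ceph-admin ceph]$ ceph auth get-key client.kube | base64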

Create the Ceph secret on k8s-master

# Create a directory for the yaml files; any location works
[root@k8s-master yum.repos.d]# mkdir /root/k8s/nmp/k1/ceph
[root@k8s-master yum.repos.d]# cd /root/k8s/nmp/k1/ceph/
[root@k8s-master ceph]# pwd
/root/k8s/nmp/k1/ceph
# Create the secret
[root@k8s-master ceph]# vim ceph-secret.yaml
[root@k8s-master ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
[root@k8s-master ceph]# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created
[root@k8s-master ceph]# kubectl get secret
NAME TYPE DATA AGE
ceph-secret kubernetes.io/rbd 1 7s
[root@k8s-master ceph]#
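
# Alternatively, the secret could be generated straight from the live key, which
# rules out copy-paste mismatches (a sketch; run ceph auth get-key wherever the
# admin keyring lives — kubectl base64-encodes --from-literal values automatically):
[root@k8s-master ceph]# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key="$(ceph auth get-key client.admin)"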

Create the PV

[root@k8s-master ceph]# vim filecenter-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filecenter-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.16.143.121:6789
    pool: rbd_data
    image: filecenter_image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@k8s-master ceph]# kubectl create -f filecenter-pv.yaml
persistentvolume "filecenter-pv" created
[root@k8s-master ceph]# kubectl get pv -o wide
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
filecenter-pv 1Gi RWO Recycle Available 18m

Create the PVC

[root@k8s-master ceph]# vim filecenter-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: filecenter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" created
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
filecenter-pvc   Pending                                      6s
[root@k8s-master ceph]#

Create a Deployment that mounts the PVC

Here we modify the php-filecenter deployment that is already in use.

[root@k8s-master ceph]# vim ../php/file-center/php-filecenter-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-filecenter-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-filecenter
  template:
    metadata:
      labels:
        app: php-filecenter
    spec:
      containers:
      - name: php-filecenter
        image: 172.16.143.107:5000/php-fpm:v2019120205
        volumeMounts:
        - mountPath: "/mnt"
          name: filedata
      volumes:
      - name: filedata
        persistentVolumeClaim:
          claimName: filecenter-pvc
[root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
deployment "php-filecenter-deployment" configured
[root@k8s-master ceph]#

Error: the PVC did not bind, so pod creation failed

# The PVC never bound, so the pod could not be scheduled
[root@k8s-master ceph]# kubectl exec -it php-filecenter-deployment-3316474311-g1jmg bash
Error from server (BadRequest): pod php-filecenter-deployment-3316474311-g1jmg does not have a host assigned
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
Name: php-filecenter-deployment-3316474311-g1jmg
Namespace: default
Node: /
Labels: app=php-filecenter
pod-template-hash=3316474311
Status: Pending
IP:
Controllers: ReplicaSet/php-filecenter-deployment-3316474311
Containers:
php-filecenter:
Image: 172.16.143.107:5000/php-fpm:v2019120205
Port:
Volume Mounts:
/mnt from filedata (rw)
Environment Variables: <none>
Conditions:
Type Status
PodScheduled False
Volumes:
filedata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: filecenter-pvc
ReadOnly: false
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
8m 1m 29 {default-scheduler } Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
# Check the PV: it is Available
[root@k8s-master ceph]# kubectl get pv -o wide
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
filecenter-pv 1Gi RWO Recycle Available 39m
# Check the PVC: it is stuck in Pending, something is wrong
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
filecenter-pvc Pending 35m
[root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
NAME READY STATUS RESTARTS AGE
php-filecenter-deployment-3316474311-g1jmg 0/1 Pending 0 9m
[root@k8s-master ceph]#
[root@k8s-master ceph]# kubectl describe pv filecenter-pv
Name: filecenter-pv
Labels: <none>
StorageClass:
Status: Available
Claim:
Reclaim Policy: Recycle
Access Modes: RWO
Capacity: 1Gi
Message:
Source:
Type: RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
CephMonitors: [172.16.143.121:6789]
RBDImage: filecenter_image
FSType: xfs
RBDPool: rbd_data
RadosUser: admin
Keyring: /etc/ceph/keyring
SecretRef: &{ceph-secret}
ReadOnly: false
No events.
[root@k8s-master ceph]# kubectl describe pvc filecenter-pvc
Name: filecenter-pvc
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
48m 5s 196 {persistentvolume-controller } Normal FailedBinding no persistent volumes available for this claim and no storage class is set
[root@k8s-master ceph]#
# This symptom means the PV and PVC failed to bind. If no matchLabels or similar selectors are in play, then either the capacities don't match or the access modes don't match.
# Checking here: the PV is defined with 1G while the PVC requests 10G. The capacities don't match, so the PVC cannot bind.
# Edit the PVC definition
[root@k8s-master ceph]# vim filecenter-pvc.yaml
# Re-apply the configuration
[root@k8s-master ceph]# kubectl apply -f filecenter-pvc.yaml
The PersistentVolumeClaim "filecenter-pvc" is invalid: spec: Forbidden: field is immutable after creation
# The spec is immutable, so delete the PVC and recreate it
[root@k8s-master ceph]# kubectl delete -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" deleted
# Double-check that the two definitions now match
[root@k8s-master ceph]# cat filecenter-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: filecenter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s-master ceph]# cat filecenter-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filecenter-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.16.143.121:6789
    pool: rbd_data
    image: filecenter_image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
# Recreate the PVC
[root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" created
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
filecenter-pvc Bound filecenter-pv 1Gi RWO 6s
[root@k8s-master ceph]#
# The PVC is now in Bound state; binding succeeded.

rbd map failed, so the pod could not attach the RBD volume

[root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
service "php-filecenter-service" configured
deployment "php-filecenter-deployment" configured
[root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
NAME READY STATUS RESTARTS AGE
php-filecenter-deployment-3316474311-g1jmg 0/1 ContainerCreating 0 41m
[root@k8s-master ceph]# kubectl logs -f php-filecenter-deployment-3316474311-g1jmg
Error from server (BadRequest): container "php-filecenter" in pod "php-filecenter-deployment-3316474311-g1jmg" is waiting to start: ContainerCreating
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
Name: php-filecenter-deployment-3316474311-g1jmg
Namespace: default
Node: k8s-node1/172.16.143.108
Start Time: Sat, 07 Dec 2019 18:52:30 +0800
Labels: app=php-filecenter
pod-template-hash=3316474311
Status: Pending
IP:
Controllers: ReplicaSet/php-filecenter-deployment-3316474311
Containers:
php-filecenter:
Container ID:
Image: 172.16.143.107:5000/php-fpm:v2019120205
Image ID:
Port:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts:
/mnt from filedata (rw)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
filedata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: filecenter-pvc
ReadOnly: false
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
41m 4m 133 {default-scheduler } Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
3m 3m 1 {default-scheduler } Normal Scheduled Successfully assigned php-filecenter-deployment-3316474311-g1jmg to k8s-node1
1m 1m 1 {kubelet k8s-node1} Warning FailedMount Unable to mount volumes for pod "php-filecenter-deployment-3316474311-g1jmg_default(5d5d48d9-18da-11ea-8c36-000c29fc3a73)": timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
1m 1m 1 {kubelet k8s-node1} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
1m 1m 1 {kubelet k8s-node1} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/rbd/5d5d48d9-18da-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "5d5d48d9-18da-11ea-8c36-000c29fc3a73" (UID: "5d5d48d9-18da-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 2019-12-07 18:54:44.268690 7f2f99d00d40 -1 did not load config file, using default settings.
2019-12-07 18:54:44.271606 7f2f99d00d40 -1 Errors while parsing config file!
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272599 7f2f99d00d40 -1 Errors while parsing config file!
2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272604 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.291155 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
rbd: sysfs write failed
2019-12-07 18:54:44.297026 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-12-07 18:54:44.298627 7f2f99d00d40 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
# The errors above say ceph.conf and ceph.client.admin.keyring are missing, so copy them over from the ceph-admin node
[root@k8s-master ceph]# rz
[root@k8s-master ceph]# ls
ceph.client.admin.keyring ceph.conf rbdmap
# Every k8s node needs these files; copy them to the node as well
[root@k8s-node1 ceph]# rz
[root@k8s-node1 ceph]# ls
ceph.client.admin.keyring ceph.conf rbdmap
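# Instead of uploading with rz on each machine, the same files could be pulled with
# scp (a sketch; assumes the hostname ceph-admin resolves from the k8s nodes):
[root@k8s-master ceph]# scp ceph-admin:/etc/ceph/{ceph.conf,ceph.client.admin.keyring} /etc/ceph/
[root@k8s-node1 ceph]# scp ceph-admin:/etc/ceph/{ceph.conf,ceph.client.admin.keyring} /etc/ceph/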
# Delete the pod so it gets recreated and remounts the volume
[root@k8s-master ceph]# kubectl delete pod php-filecenter-deployment-3316474311-g1jmg
pod "php-filecenter-deployment-3316474311-g1jmg" deleted
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-jr48g
Name: php-filecenter-deployment-3316474311-jr48g
Namespace: default
Node: k8s-master/172.16.143.107
Start Time: Mon, 09 Dec 2019 10:01:29 +0800
Labels: app=php-filecenter
pod-template-hash=3316474311
Status: Pending
IP:
Controllers: ReplicaSet/php-filecenter-deployment-3316474311
Containers:
php-filecenter:
Container ID:
Image: 172.16.143.107:5000/php-fpm:v2019120205
Image ID:
Port:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts:
/mnt from filedata (rw)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
filedata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: filecenter-pvc
ReadOnly: false
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10s 10s 1 {default-scheduler } Normal Scheduled Successfully assigned php-filecenter-deployment-3316474311-jr48g to k8s-master
8s 8s 1 {kubelet k8s-master} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:30.443054 7f96b803fd40 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
6s 6s 1 {kubelet k8s-master} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:32.022514 7fb376cb0d40 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
4s 4s 1 {kubelet k8s-master} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:34.197942 7f0282d5fd40 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
1s 1s 1 {kubelet k8s-master} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:37.602709 7f18facc1d40 0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
[root@k8s-master ceph]#

Note that the error has changed: ceph.conf and the keyring are now found, yet client.admin still fails with "authentication error (1) Operation not permitted". One thing to verify in this situation is that the key stored in ceph-secret is exactly the output of ceph auth get-key client.admin | base64 on the cluster. In the listings above, the base64 value in ceph-secret does not match the key extracted on ceph-admin, and a stale or mismatched secret key produces precisely this error when kubelet runs rbd map. Recreating the secret with the correct key (see the kubectl create secret sketch earlier) and then deleting the pod again is the first thing to try.
