Using Ceph RBD as backend storage for Kubernetes

Kubernetes offers three main approaches to storage: volumes, persistent volumes, and dynamic volume provisioning.

  • Volume: a component mounted directly on a pod; every other storage component in Kubernetes connects to a pod through a volume. A volume has a type attribute, and the type determines what kind of storage is mounted. Common types include emptyDir, hostPath, nfs, rbd, and persistentVolumeClaim, discussed below. Unlike Docker, where a volume's lifecycle is tied tightly to the container, the lifecycle here depends on the type: an emptyDir volume behaves like a Docker volume (when the pod dies, the volume disappears with it), while the other types persist. See the official Volumes documentation for details.
  • Persistent Volumes: as the name suggests, this component provides persistent storage. It abstracts both the backend storage provider (the volume type above) and the consumer (the specific pod using it) through two concepts: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage supplied by the backend; in Ceph RBD terms, an image. A PersistentVolumeClaim (PVC) can be seen as a user's request for a PV: the PVC binds to some PV, a pod then mounts the PVC as a volume, and thereby mounts the underlying PV. For further details, such as the PV and PVC lifecycle, see the Persistent Volumes documentation.
  • Dynamic Volume Provisioning: with plain Persistent Volumes, we must first create a block of storage (for example a Ceph image), then bind that image to a PV before it can be used. This static binding is rigid: every storage request means asking the storage provider for another block. Dynamic Volume Provisioning solves this by introducing the StorageClass, which abstracts the storage provider: simply name a StorageClass in the PVC and state how much storage is needed, and the provider creates the block on demand. We can even designate a default StorageClass, so that creating a PVC is all that is required.
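To illustrate the dynamic path, a StorageClass plus PVC might look like the sketch below. This is a hedged example, not part of the cluster configured in this post: the class name fast-rbd, the secret names, and the admin user are placeholders, and the in-tree kubernetes.io/rbd provisioner additionally needs an admin secret so it can create images in the pool.

```yaml
# Hypothetical sketch only: dynamic provisioning with the in-tree rbd
# provisioner. Names below are illustrative, not from this post's cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.143.121:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  pool: rbd_data
  userId: admin
  userSecretName: ceph-secret
---
# A PVC that names the class; the provisioner creates the rbd image on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  storageClassName: fast-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

With a default StorageClass annotated on the cluster, the storageClassName line could be omitted entirely.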

PV access modes

  • ReadWriteOnce: read-write, but mountable by a single node only.
  • ReadOnlyMany: read-only; can be mounted read-only by many nodes.
  • ReadWriteMany: read-write; can be shared read-write by many nodes.

PV reclaim policies

  • Retain – manual reclamation
  • Recycle – basic scrub ("rm -rf /thevolume/*")
  • Delete – the associated backend storage volume is deleted as well, for backends such as AWS EBS, GCE PD, or OpenStack Cinder

On the CLI, the access modes are abbreviated as:

RWO – ReadWriteOnce

ROX – ReadOnlyMany

RWX – ReadWriteMany

As currently defined, all three modes apply at the node level. That is, if a PersistentVolume is RWO, it can only be mounted on one Kubernetes worker node (hereafter, node); attempting to mount it on a second node produces a Multi-Attach error. (With only one schedulable node, even an RWO volume can be used by several pods at once, but outside development and testing, who would run it that way?) An RWX volume, by contrast, can be mounted on multiple nodes simultaneously and used by different pods.

The officially supported modes per volume plugin are listed here:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

So in our setup, an rbd volume cannot be used across multiple nodes.

Workarounds:

Option 1: use a nodeSelector label to restrict the pod to a specific node (effectively single-machine).

Option 2: skip Kubernetes and mount the rbd image directly on a single Linux host (also single-machine).
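For Option 1, the restriction is a nodeSelector in the pod template. A minimal sketch; the label key and value (disk=rbd) are assumptions for illustration, and the node must be labeled first with `kubectl label nodes k8s-node1 disk=rbd`:

```yaml
# Illustrative fragment of a Deployment pod template: pin pods to one labeled
# node so the RWO rbd volume is only ever mounted there. Label is hypothetical.
spec:
  template:
    spec:
      nodeSelector:
        disk: rbd
```

The scheduler will then only place the pod on nodes carrying that label, which keeps the RWO volume on a single node at the cost of single-machine scheduling.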

Getting started with rbd

Every Kubernetes node needs the Ceph client, ceph-common, installed before it can use Ceph:

    [root@k8s-node1 yum.repos.d]# cd /etc/yum.repos.d
    [root@k8s-node1 yum.repos.d]# pwd
    /etc/yum.repos.d
    # The Ceph repo here should match the one used by the Ceph cluster itself, so the client version stays consistent with the cluster
    [root@k8s-node1 yum.repos.d]# vim ceph.repo
    [Ceph]
    name=Ceph packages for $basearch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
    enabled=1
    gpgcheck=0
    type=rpm-md
    gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    [Ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
    enabled=1
    gpgcheck=0
    type=rpm-md
    gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
    enabled=1
    gpgcheck=0
    type=rpm-md
    gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    # Install ceph-common
    [root@k8s-node1 yum.repos.d]# yum -y install ceph-common

Note: dependency errors

    # If yum reports the errors below while installing ceph-common, install epel-release first
    [root@k8s-node1 yum.repos.d]# yum -y install ceph-common
    ...
    ---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
    --> Finished Dependency Resolution
    Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
               Requires: libleveldb.so.1()(64bit)
    Error: Package: 2:librados2-12.2.12-0.el7.x86_64 (Ceph)
               Requires: liblttng-ust.so.0()(64bit)
    Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
               Requires: libbabeltrace-ctf.so.1()(64bit)
    Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
               Requires: libbabeltrace.so.1()(64bit)
    Error: Package: 2:librbd1-12.2.12-0.el7.x86_64 (Ceph)
               Requires: liblttng-ust.so.0()(64bit)
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    # Install epel-release
    [root@k8s-node1 yum.repos.d]# yum install epel-release
    # Now ceph-common installs cleanly
    [root@k8s-node1 yum.repos.d]# yum -y install ceph-common
    [root@k8s-node1 yum.repos.d]# ceph --version
    ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Every Kubernetes node must have the Ceph client installed; otherwise volumes cannot be mounted.

Install it on k8s-master as well:

    [root@k8s-master yum.repos.d]# cd /etc/yum.repos.d
    [root@k8s-master yum.repos.d]# yum -y install ceph-common
    [root@k8s-master yum.repos.d]# ceph --version
    ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Ceph configuration (on the ceph-admin node)

    # Create the storage pool rbd_data (already created in our test environment, so skipped here)
    [cephfsd@ceph-admin ceph]$ ceph osd pool create rbd_data 64 64
    # Create the rbd image (the image format is already set in our earlier configuration, so it is not specified explicitly here)
    [cephfsd@ceph-admin ceph]$ rbd create rbd_data/filecenter_image --size=10G
    # Map the rbd image
    [cephfsd@ceph-admin ceph]$ sudo rbd map rbd_data/filecenter_image
    /dev/rbd4
    # Inspect the image
    [cephfsd@ceph-admin ceph]$ rbd info rbd_data/filecenter_image
    rbd image 'filecenter_image':
            size 10GiB in 2560 objects
            order 22 (4MiB objects)
            block_name_prefix: rbd_data.376b6b8b4567
            format: 2
            features: layering
            flags:
            create_timestamp: Sat Dec  7 17:37:41 2019
    [cephfsd@ceph-admin ceph]$
    # Create the secret:
    # Ceph has cephx authentication enabled, so a secret resource is needed before creating the PV; Kubernetes secrets store the key base64-encoded
    # Extract the key on a ceph monitor:
    # Generate the base64-encoded key
    [cephfsd@ceph-admin ceph]$ cat ceph.client.admin.keyring
    [client.admin]
            key = AQBIH+ld1okAJhAAmULVJM4zCCVAK/Vdi3Tz5Q==
    [cephfsd@ceph-admin ceph]$ ceph auth get-key client.admin | base64
    QVFCSUgrbGQxb2tBSmhBQW1VTFZKTTR6Q0NWQUsvVmRpM1R6NVE9PQ==
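Before pasting the base64 string into a Secret, it is worth round-tripping it locally: a stale or mistyped key will only surface much later, as a `client.admin authentication error (1) Operation not permitted` when kubelet tries to `rbd map`. A small sketch using the key shown above:

```shell
# Round-trip the cephx key through base64 to be sure the Secret will carry
# exactly what `ceph auth get-key client.admin` returned.
key='AQBIH+ld1okAJhAAmULVJM4zCCVAK/Vdi3Tz5Q=='
encoded=$(printf '%s' "$key" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
if [ "$decoded" = "$key" ]; then
    echo "round-trip ok"
else
    echo "round-trip MISMATCH"
fi
```

Note the `printf '%s'` rather than `echo`: piping `echo` into base64 would encode a trailing newline as well, producing a different string.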

Create the Ceph secret on k8s-master

    # Make a directory to hold the yaml files; any path will do
    [root@k8s-master yum.repos.d]# mkdir /root/k8s/nmp/k1/ceph
    [root@k8s-master yum.repos.d]# cd /root/k8s/nmp/k1/ceph/
    [root@k8s-master ceph]# pwd
    /root/k8s/nmp/k1/ceph
    # Create the secret
    [root@k8s-master ceph]# vim ceph-secret.yaml
    [root@k8s-master ceph]# cat ceph-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
    [root@k8s-master ceph]# kubectl create -f ceph-secret.yaml
    secret "ceph-secret" created
    [root@k8s-master ceph]# kubectl get secret
    NAME          TYPE                DATA      AGE
    ceph-secret   kubernetes.io/rbd   1         7s
    [root@k8s-master ceph]#

Create the PV

    [root@k8s-master ceph]# vim filecenter-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: filecenter-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
          - 172.16.143.121:6789
        pool: rbd_data
        image: filecenter_image
        user: admin
        secretRef:
          name: ceph-secret
        fsType: xfs
        readOnly: false
      persistentVolumeReclaimPolicy: Recycle
    [root@k8s-master ceph]# kubectl create -f filecenter-pv.yaml
    persistentvolume "filecenter-pv" created
    [root@k8s-master ceph]# kubectl get pv -o wide
    NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
    filecenter-pv   1Gi        RWO           Recycle         Available                       18m

Create the PVC

    [root@k8s-master ceph]# vim filecenter-pvc.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: filecenter-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    [root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
    persistentvolumeclaim "filecenter-pvc" created
    [root@k8s-master ceph]# kubectl get pvc -o wide
    NAME             STATUS    VOLUME          CAPACITY   ACCESSMODES   AGE
    filecenter-pvc   Bound     filecenter-pv   1Gi        RWO           6s
    [root@k8s-master ceph]#

Create a deployment that mounts the PVC

Here we modify the php-filecenter.yaml that is already in use:

    [root@k8s-master ceph]# vim ../php/file-center/php-filecenter-deployment.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: php-filecenter-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: php-filecenter
      template:
        metadata:
          labels:
            app: php-filecenter
        spec:
          containers:
          - name: php-filecenter
            image: 172.16.143.107:5000/php-fpm:v2019120205
            volumeMounts:
            - mountPath: "/mnt"
              name: filedata
          volumes:
          - name: filedata
            persistentVolumeClaim:
              claimName: filecenter-pvc
    [root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
    deployment "php-filecenter-deployment" configured
    [root@k8s-master ceph]#

Error: the PVC failed to bind, so the pod could not be created

    # The PVC never bound, so pod creation failed
    [root@k8s-master ceph]# kubectl exec -it php-filecenter-deployment-3316474311-g1jmg bash
    Error from server (BadRequest): pod php-filecenter-deployment-3316474311-g1jmg does not have a host assigned
    [root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
    Name:           php-filecenter-deployment-3316474311-g1jmg
    Namespace:      default
    Node:           /
    Labels:         app=php-filecenter
                    pod-template-hash=3316474311
    Status:         Pending
    IP:
    Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
    Containers:
      php-filecenter:
        Image:      172.16.143.107:5000/php-fpm:v2019120205
        Port:
        Volume Mounts:
          /mnt from filedata (rw)
        Environment Variables:  <none>
    Conditions:
      Type          Status
      PodScheduled  False
    Volumes:
      filedata:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  filecenter-pvc
        ReadOnly:   false
    QoS Class:      BestEffort
    Tolerations:    <none>
    Events:
      FirstSeen  LastSeen  Count  From                  SubObjectPath  Type     Reason            Message
      ---------  --------  -----  ----                  -------------  ----     ------            -------
      8m         1m        29     {default-scheduler}                  Warning  FailedScheduling  [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
    # Check the PV: it is Available
    [root@k8s-master ceph]# kubectl get pv -o wide
    NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
    filecenter-pv   1Gi        RWO           Recycle         Available                       39m
    # Check the PVC: it is stuck in Pending, so something is wrong
    [root@k8s-master ceph]# kubectl get pvc -o wide
    NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
    filecenter-pvc   Pending                                      35m
    [root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
    NAME                                         READY     STATUS    RESTARTS   AGE
    php-filecenter-deployment-3316474311-g1jmg   0/1       Pending   0          9m
    [root@k8s-master ceph]#
    [root@k8s-master ceph]# kubectl describe pv filecenter-pv
    Name:            filecenter-pv
    Labels:          <none>
    StorageClass:
    Status:          Available
    Claim:
    Reclaim Policy:  Recycle
    Access Modes:    RWO
    Capacity:        1Gi
    Message:
    Source:
        Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
        CephMonitors:  [172.16.143.121:6789]
        RBDImage:      filecenter_image
        FSType:        xfs
        RBDPool:       rbd_data
        RadosUser:     admin
        Keyring:       /etc/ceph/keyring
        SecretRef:     &{ceph-secret}
        ReadOnly:      false
    No events.
    [root@k8s-master ceph]# kubectl describe pvc filecenter-pvc
    Name:          filecenter-pvc
    Namespace:     default
    StorageClass:
    Status:        Pending
    Volume:
    Labels:        <none>
    Capacity:
    Access Modes:
    Events:
      FirstSeen  LastSeen  Count  From                            SubObjectPath  Type    Reason         Message
      ---------  --------  -----  ----                            -------------  ----    ------         -------
      48m        5s        196    {persistentvolume-controller}                  Normal  FailedBinding  no persistent volumes available for this claim and no storage class is set
    [root@k8s-master ceph]#
    # This symptom means the PV and PVC did not bind. With no match labels or similar selectors involved, the usual cause is either a capacity mismatch or an access-modes mismatch.
    # Checking here: the PV was defined as 1Gi while the PVC requested 10Gi, so the capacities did not match and the PVC could not bind.
    # Edit the PVC definition
    [root@k8s-master ceph]# vim filecenter-pvc.yaml
    # Try to re-apply it
    [root@k8s-master ceph]# kubectl apply -f filecenter-pvc.yaml
    The PersistentVolumeClaim "filecenter-pvc" is invalid: spec: Forbidden: field is immutable after creation
    # The spec is immutable, so delete the PVC and recreate it
    [root@k8s-master ceph]# kubectl delete -f filecenter-pvc.yaml
    persistentvolumeclaim "filecenter-pvc" deleted
    # Double-check that the two definitions now agree
    [root@k8s-master ceph]# cat filecenter-pvc.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: filecenter-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    [root@k8s-master ceph]# cat filecenter-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: filecenter-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
          - 172.16.143.121:6789
        pool: rbd_data
        image: filecenter_image
        user: admin
        secretRef:
          name: ceph-secret
        fsType: xfs
        readOnly: false
      persistentVolumeReclaimPolicy: Recycle
    # Recreate the PVC
    [root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
    persistentvolumeclaim "filecenter-pvc" created
    [root@k8s-master ceph]# kubectl get pvc -o wide
    NAME             STATUS    VOLUME          CAPACITY   ACCESSMODES   AGE
    filecenter-pvc   Bound     filecenter-pv   1Gi        RWO           6s
    [root@k8s-master ceph]#
    # The PVC is now Bound: binding succeeded.
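The binding rule that tripped us up can be summarized: with no selectors or StorageClass involved, a PVC binds to a PV only if the PV's capacity is at least the requested size and the PV offers the requested access mode. A toy shell sketch of that check, using the numbers from this post (the mistaken 10Gi request against the 1Gi PV):

```shell
# Toy model of the PV/PVC matching check (capacity and access mode only).
pv_capacity_gi=1
pv_access='ReadWriteOnce'

can_bind() {   # usage: can_bind <request_gi> <access_mode>
    if [ "$1" -le "$pv_capacity_gi" ] && [ "$2" = "$pv_access" ]; then
        echo "Bound"
    else
        echo "Pending"
    fi
}

can_bind 10 ReadWriteOnce   # the mistaken 10Gi request: stays Pending
can_bind 1  ReadWriteOnce   # the corrected 1Gi request: binds
```

This is a simplification of the real controller (which also considers labels, selectors, storage classes, and volume modes), but it captures exactly the two mismatches worth checking first when a PVC sits in Pending.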

rbd map failed on the node, so the pod could not attach the rbd image

    [root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
    service "php-filecenter-service" configured
    deployment "php-filecenter-deployment" configured
    [root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
    NAME                                         READY     STATUS              RESTARTS   AGE
    php-filecenter-deployment-3316474311-g1jmg   0/1       ContainerCreating   0          41m
    [root@k8s-master ceph]# kubectl logs -f php-filecenter-deployment-3316474311-g1jmg
    Error from server (BadRequest): container "php-filecenter" in pod "php-filecenter-deployment-3316474311-g1jmg" is waiting to start: ContainerCreating
    [root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
    Name:           php-filecenter-deployment-3316474311-g1jmg
    Namespace:      default
    Node:           k8s-node1/172.16.143.108
    Start Time:     Sat, 07 Dec 2019 18:52:30 +0800
    Labels:         app=php-filecenter
                    pod-template-hash=3316474311
    Status:         Pending
    IP:
    Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
    Containers:
      php-filecenter:
        Container ID:
        Image:          172.16.143.107:5000/php-fpm:v2019120205
        Image ID:
        Port:
        State:          Waiting
          Reason:       ContainerCreating
        Ready:          False
        Restart Count:  0
        Volume Mounts:
          /mnt from filedata (rw)
        Environment Variables:  <none>
    Conditions:
      Type          Status
      Initialized   True
      Ready         False
      PodScheduled  True
    Volumes:
      filedata:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  filecenter-pvc
        ReadOnly:   false
    QoS Class:      BestEffort
    Tolerations:    <none>
    Events:
      FirstSeen  LastSeen  Count  From                 SubObjectPath  Type     Reason            Message
      ---------  --------  -----  ----                 -------------  ----     ------            -------
      41m  4m  133  {default-scheduler}   Warning  FailedScheduling  [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
      3m   3m   1    {default-scheduler}   Normal   Scheduled         Successfully assigned php-filecenter-deployment-3316474311-g1jmg to k8s-node1
      1m   1m   1    {kubelet k8s-node1}   Warning  FailedMount       Unable to mount volumes for pod "php-filecenter-deployment-3316474311-g1jmg_default(5d5d48d9-18da-11ea-8c36-000c29fc3a73)": timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
      1m   1m   1    {kubelet k8s-node1}   Warning  FailedSync        Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
      1m   1m   1    {kubelet k8s-node1}   Warning  FailedMount       MountVolume.SetUp failed for volume "kubernetes.io/rbd/5d5d48d9-18da-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "5d5d48d9-18da-11ea-8c36-000c29fc3a73" (UID: "5d5d48d9-18da-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 2019-12-07 18:54:44.268690 7f2f99d00d40 -1 did not load config file, using default settings.
    2019-12-07 18:54:44.271606 7f2f99d00d40 -1 Errors while parsing config file!
    2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.272599 7f2f99d00d40 -1 Errors while parsing config file!
    2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.272604 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
    2019-12-07 18:54:44.291155 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
    rbd: sysfs write failed
    2019-12-07 18:54:44.297026 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
    2019-12-07 18:54:44.298627 7f2f99d00d40  0 librados: client.admin authentication error (1) Operation not permitted
    rbd: couldn't connect to the cluster!
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (1) Operation not permitted
    # The errors above say ceph.conf and ceph.client.admin.keyring are missing, so copy them over from the ceph-admin node
    [root@k8s-master ceph]# rz

    [root@k8s-master ceph]# ls
    ceph.client.admin.keyring  ceph.conf  rbdmap
    # Every Kubernetes node needs these files; copy them to the worker nodes as well
    [root@k8s-node1 ceph]# rz

    [root@k8s-node1 ceph]# ls
    ceph.client.admin.keyring  ceph.conf  rbdmap
    # Delete the pod so the volume gets mounted again on recreation
    [root@k8s-master ceph]# kubectl delete pod php-filecenter-deployment-3316474311-g1jmg
    pod "php-filecenter-deployment-3316474311-g1jmg" deleted
    [root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-jr48g
    Name:           php-filecenter-deployment-3316474311-jr48g
    Namespace:      default
    Node:           k8s-master/172.16.143.107
    Start Time:     Mon, 09 Dec 2019 10:01:29 +0800
    Labels:         app=php-filecenter
                    pod-template-hash=3316474311
    Status:         Pending
    IP:
    Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
    Containers:
      php-filecenter:
        Container ID:
        Image:          172.16.143.107:5000/php-fpm:v2019120205
        Image ID:
        Port:
        State:          Waiting
          Reason:       ContainerCreating
        Ready:          False
        Restart Count:  0
        Volume Mounts:
          /mnt from filedata (rw)
        Environment Variables:  <none>
    Conditions:
      Type          Status
      Initialized   True
      Ready         False
      PodScheduled  True
    Volumes:
      filedata:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  filecenter-pvc
        ReadOnly:   false
    QoS Class:      BestEffort
    Tolerations:    <none>
    Events:
      FirstSeen  LastSeen  Count  From                 SubObjectPath  Type     Reason       Message
      ---------  --------  -----  ----                 -------------  ----     ------       -------
      10s  10s  1  {default-scheduler}   Normal   Scheduled    Successfully assigned php-filecenter-deployment-3316474311-jr48g to k8s-master
      8s   8s   1  {kubelet k8s-master}  Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
    2019-12-09 10:01:30.443054 7f96b803fd40  0 librados: client.admin authentication error (1) Operation not permitted
    rbd: couldn't connect to the cluster!
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (1) Operation not permitted

      6s   6s   1  {kubelet k8s-master}  Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
    2019-12-09 10:01:32.022514 7fb376cb0d40  0 librados: client.admin authentication error (1) Operation not permitted
    rbd: couldn't connect to the cluster!
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (1) Operation not permitted

      4s   4s   1  {kubelet k8s-master}  Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
    2019-12-09 10:01:34.197942 7f0282d5fd40  0 librados: client.admin authentication error (1) Operation not permitted
    rbd: couldn't connect to the cluster!
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (1) Operation not permitted

      1s   1s   1  {kubelet k8s-master}  Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
    2019-12-09 10:01:37.602709 7f18facc1d40  0 librados: client.admin authentication error (1) Operation not permitted
    rbd: couldn't connect to the cluster!
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (1) Operation not permitted

    [root@k8s-master ceph]#
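Since kubelet shells out to the node's rbd client, the minimum pre-flight on every node is that /etc/ceph/ceph.conf and the admin keyring exist (exactly the files we had to copy over from ceph-admin), and that the key inside them actually matches what the cluster expects; a key mismatch is what keeps producing "Operation not permitted" even after the files are in place. A small sketch of the file check, using the search path from the error messages above:

```shell
# Pre-flight sketch: report whether the files the rbd mount path needs are
# present on this node. Reporting only; it does not validate the key itself.
check_file() {
    if [ -f "$1" ]; then
        echo "present: $1"
    else
        echo "MISSING: $1"
    fi
}

for f in /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring; do
    check_file "$f"
done
```

If both files are present but rbd map still fails with "Operation not permitted", compare `ceph auth get-key client.admin` (run on a monitor) against both the node's keyring and the base64-decoded key stored in the Kubernetes Secret.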
