1. Preparation

  Install the GlusterFS client on all nodes:

    yum install glusterfs glusterfs-fuse -y

  If the GlusterFS server should not run on every node, label only the nodes that will host it:

    [root@k8s-master01 ~]# kubectl label node k8s-node01 storagenode=glusterfs
    node/k8s-node01 labeled
    [root@k8s-master01 ~]# kubectl label node k8s-node02 storagenode=glusterfs
    node/k8s-node02 labeled
    [root@k8s-master01 ~]# kubectl label node k8s-master01 storagenode=glusterfs
    node/k8s-master01 labeled

  Load the required device-mapper kernel modules on all nodes (a persistence sketch follows the commands):

    [root@k8s-master01 ~]# modprobe dm_snapshot
    [root@k8s-master01 ~]# modprobe dm_mirror
    [root@k8s-master01 ~]# modprobe dm_thin_pool
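
  These modprobe commands do not persist across reboots. A minimal persistence sketch, assuming systemd-modules-load is in use (the file name glusterfs.conf is arbitrary):

    # Load the device-mapper modules needed by Heketi/LVM at every boot
    cat <<EOF > /etc/modules-load.d/glusterfs.conf
    dm_snapshot
    dm_mirror
    dm_thin_pool
    EOF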

2. Create the containerized GlusterFS server cluster

  This article deploys GlusterFS in containers; if your organization already runs a GlusterFS cluster, you can use it directly instead.

  GlusterFS is deployed as a DaemonSet, which guarantees that every node meant to host the GlusterFS server runs exactly one GlusterFS pod (see the abridged manifest after the notes below).

  Download the related files:

    wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz

  Create the cluster:

    [root@k8s-master01 kubernetes]# pwd
    /root/heketi-client/share/heketi/kubernetes
    [root@k8s-master01 kubernetes]# kubectl create -f glusterfs-daemonset.json
    daemonset.extensions/glusterfs created

  Note 1: This uses the default mounts; you can point GlusterFS at a different disk as its working directory.

  Note 2: The objects are created in the default namespace; change it as needed.

  Note 3: The gluster/gluster-centos:gluster3u12_centos7 image can also be used.
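
  For orientation, here is a minimal sketch of what glusterfs-daemonset.json expresses, abridged and rendered as YAML (the exact image tag and the full list of hostPath mounts are in the downloaded file; treat the values below as illustrative):

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: glusterfs
    spec:
      template:
        metadata:
          labels:
            glusterfs-node: daemonset   # matches the -l selector used below
        spec:
          nodeSelector:
            storagenode: glusterfs      # only the nodes labeled earlier run a pod
          hostNetwork: true             # gluster peers communicate over the node network
          containers:
          - name: glusterfs
            image: gluster/gluster-centos:latest
            securityContext:
              privileged: true          # required to manage LVM/raw devices on the host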

  Check the pods:

    [root@k8s-master01 kubernetes]# kubectl get pods -l glusterfs-node=daemonset
    NAME              READY   STATUS    RESTARTS   AGE
    glusterfs-5npwn   1/1     Running   0          1m
    glusterfs-bd5dx   1/1     Running   0          1m
    ...

3. Create the Heketi service

  Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes day-to-day GlusterFS administration easier.
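
  As a quick illustration of that REST API (a sketch; HEKETI_HOST stands for whichever address HEKETI_CLI_SERVER points at later in this article):

    # Liveness check and cluster listing against the Heketi REST API
    curl http://$HEKETI_HOST:8080/hello      # plain-text "Hello from Heketi"
    curl http://$HEKETI_HOST:8080/clusters   # JSON list of cluster IDs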

  Create the ServiceAccount for Heketi:

    [root@k8s-master01 kubernetes]# cat heketi-service-account.json
    {
      "apiVersion": "v1",
      "kind": "ServiceAccount",
      "metadata": {
        "name": "heketi-service-account"
      }
    }
    [root@k8s-master01 kubernetes]# kubectl create -f heketi-service-account.json
    serviceaccount/heketi-service-account created
    [root@k8s-master01 kubernetes]# kubectl get sa
    NAME                     SECRETS   AGE
    default                  1         13d
    heketi-service-account   1         <invalid>

  Create the RBAC binding and the config secret for Heketi:

    [root@k8s-master01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
    clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
    [root@k8s-master01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
    secret/heketi-config-secret created
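
  The imperative clusterrolebinding command above is equivalent to this manifest (a reference sketch; the names match the command, the rest is standard RBAC boilerplate):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: heketi-gluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: edit                       # the built-in "edit" ClusterRole
    subjects:
    - kind: ServiceAccount
      name: heketi-service-account
      namespace: default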

  Bootstrap Heketi:

    [root@k8s-master01 kubernetes]# kubectl create -f heketi-bootstrap.json
    secret/heketi-db-backup created
    service/heketi created
    deployment.extensions/heketi created

  

4. Set up the GlusterFS cluster

    [root@k8s-master01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
    [root@k8s-master01 heketi-client]# pwd
    /root/heketi-client

    [root@k8s-master01 heketi-client]# heketi-cli -v
    heketi-cli v7.0.0

  Edit topology-sample.json: manage is the hostname of each node running the GlusterFS server, storage is that node's IP, and devices lists the raw block devices on the node.

    [root@k8s-master01 kubernetes]# cat topology-sample.json
    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "k8s-master01"
                  ],
                  "storage": [
                    "192.168.20.20"
                  ]
                },
                "zone": 1
              },
              "devices": [
                {
                  "name": "/dev/sdc",
                  "destroydata": false
                }
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "k8s-node01"
                  ],
                  "storage": [
                    "192.168.20.30"
                  ]
                },
                "zone": 1
              },
              "devices": [
                {
                  "name": "/dev/sdb",
                  "destroydata": false
                }
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "k8s-node02"
                  ],
                  "storage": [
                    "192.168.20.31"
                  ]
                },
                "zone": 1
              },
              "devices": [
                {
                  "name": "/dev/sdb",
                  "destroydata": false
                }
              ]
            }
          ]
        }
      ]
    }

  Find the ClusterIP of the bootstrap Heketi service:

    [root@k8s-master01 kubernetes]# kubectl get svc | grep heketi
    deploy-heketi   ClusterIP   10.110.217.153   <none>   8080/TCP   26m
    [root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.110.217.153:8080

  Load the topology to create the GlusterFS cluster:

    [root@k8s-master01 kubernetes]# heketi-cli topology load --json=topology-sample.json
    Creating cluster ... ID: a058723afae149618337299c84a1eaed
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master01 ... ID: 929909065ceedb59c1b9c235fc3298ec
    Adding device /dev/sdc ... OK
    Creating node k8s-node01 ... ID: 37409d82b9ef27f73ccc847853eec429
    Adding device /dev/sdb ... OK
    Creating node k8s-node02 ... ID: e3ab676be27945749bba90efb34f2eb9
    Adding device /dev/sdb ... OK

  Create the persistent volume for Heketi's database:

    yum install device-mapper* -y

    [root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
    Saving heketi-storage.json
    [root@k8s-master01 kubernetes]# ls
    glusterfs-daemonset.json   heketi.json                  heketi-storage.json
    heketi-bootstrap.json      heketi-service-account.json  README.md
    heketi-deployment.json     heketi-start.sh              topology-sample.json
    [root@k8s-master01 kubernetes]# kubectl create -f heketi-storage.json
    secret/heketi-storage-secret created
    endpoints/heketi-storage-endpoints created
    service/heketi-storage-endpoints created
    job.batch/heketi-storage-copy-job created

  If you hit the following error:

    [root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
    Error: /usr/sbin/modprobe failed:
    thin: Required device-mapper target(s) not detected in your kernel.
    Run `lvcreate --help' for more information.

  Fix: run modprobe dm_thin_pool on all nodes (see the verification sketch below).
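
  Before retrying, you can confirm the thin-provisioning target is actually available (both commands are standard device-mapper tooling):

    lsmod | grep dm_thin_pool      # is the module loaded?
    dmsetup targets | grep thin    # expect "thin" and "thin-pool" targets registered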

  Delete the bootstrap artifacts:

    [root@k8s-master01 kubernetes]# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
    pod "deploy-heketi-59f8dbc97f-5rf6s" deleted
    service "deploy-heketi" deleted
    service "heketi" deleted
    deployment.apps "deploy-heketi" deleted
    replicaset.apps "deploy-heketi-59f8dbc97f" deleted
    job.batch "heketi-storage-copy-job" deleted
    secret "heketi-storage-secret" deleted

  Deploy the persistent Heketi (other persistence approaches would also work):

    [root@k8s-master01 kubernetes]# kubectl create -f heketi-deployment.json
    service/heketi created
    deployment.extensions/heketi created

  Once the pod is up, the deployment is complete:

    [root@k8s-master01 kubernetes]# kubectl get po
    NAME                      READY   STATUS    RESTARTS   AGE
    glusterfs-5npwn           1/1     Running   0          3h
    glusterfs-8zfzq           1/1     Running   0          3h
    glusterfs-bd5dx           1/1     Running   0          3h
    heketi-5cb5f55d9f-5mtqt   1/1     Running   0          2m

  Look up the Service of the newly deployed persistent Heketi and update HEKETI_CLI_SERVER:

    [root@k8s-master01 kubernetes]# kubectl get svc
    NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    heketi                     ClusterIP   10.111.95.240   <none>        8080/TCP   12h
    heketi-storage-endpoints   ClusterIP   10.99.28.153    <none>        1/TCP      12h
    kubernetes                 ClusterIP   10.96.0.1       <none>        443/TCP    14d
    [root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.111.95.240:8080
    [root@k8s-master01 kubernetes]# curl http://10.111.95.240:8080/hello
    Hello from Heketi

  Inspect the GlusterFS topology:

    [root@k8s-master01 kubernetes]# heketi-cli topology info

    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

        File:  true
        Block: true

        Volumes:

            Name: heketidbstorage
            Size:
            Id: 828dc2dfaa00b7213e831b91c6213ae4
            Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
            Mount: 192.168.20.31:heketidbstorage
            Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
            Durability Type: replicate
            Replica: 3
            Snapshot: Disabled

            Bricks:
                Id: 16b7270d7db1b3cfe9656b64c2a3916c
                Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
                Size (GiB):
                Node: fb181b0cef571e9af7d84d2ecf534585
                Device: 04290ec786dc7752a469b66f5e94458f

                Id: 828da093d9d78a2b1c382b13cc4da4a1
                Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick
                Size (GiB):
                Node: d38819746cab7d567ba5f5f4fea45d91
                Device: 80b61df999fcac26ebca6e28c4da8e61

                Id: e8ef0e68ccc3a0416f73bc111cffee61
                Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick
                Size (GiB):
                Node: 0f00835397868d3591f45432e432ba38
                Device: 82af8e5f2fb2e1396f7c9e9f7698a178

        Nodes:

            Node Id: 0f00835397868d3591f45432e432ba38
            State: online
            Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
            Zone: 1
            Management Hostnames: k8s-node02
            Storage Hostnames: 192.168.20.31
            Devices:
                Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                    Bricks:
                        Id:e8ef0e68ccc3a0416f73bc111cffee61   Size (GiB):   Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick

            Node Id: d38819746cab7d567ba5f5f4fea45d91
            State: online
            Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
            Zone: 1
            Management Hostnames: k8s-node01
            Storage Hostnames: 192.168.20.30
            Devices:
                Id:80b61df999fcac26ebca6e28c4da8e61   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                    Bricks:
                        Id:828da093d9d78a2b1c382b13cc4da4a1   Size (GiB):   Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick

            Node Id: fb181b0cef571e9af7d84d2ecf534585
            State: online
            Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
            Zone: 1
            Management Hostnames: k8s-master01
            Storage Hostnames: 192.168.20.20
            Devices:
                Id:04290ec786dc7752a469b66f5e94458f   Name:/dev/sdc   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                    Bricks:
                        Id:16b7270d7db1b3cfe9656b64c2a3916c   Size (GiB):   Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick

5. Define the StorageClass

    [root@k8s-master01 gfs]# cat storageclass-gfs-heketi.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gluster-heketi
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.111.95.240:8080"
      restauthenabled: "false"
    [root@k8s-master01 gfs]# kubectl create -f storageclass-gfs-heketi.yaml
    storageclass.storage.k8s.io/gluster-heketi created

  The provisioner parameter must be set to "kubernetes.io/glusterfs".

  resturl must be an address of the Heketi service that is reachable from the host where the API Server runs.
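
  If Heketi authentication is enabled (use_auth: true in heketi.json), the StorageClass also needs credentials. A sketch, assuming the admin key from heketi.json is stored in a secret named heketi-admin-secret (the secret name is hypothetical; the parameter names are the documented ones for the kubernetes.io/glusterfs provisioner):

    # kubectl create secret generic heketi-admin-secret \
    #   --type="kubernetes.io/glusterfs" --from-literal=key='My Secret'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gluster-heketi-auth
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.111.95.240:8080"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-admin-secret"
      volumetype: "replicate:3"        # optional: request replica-3 volumes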

6. Define a PVC and a test Pod

    [root@k8s-master01 gfs]# kubectl create -f pod-use-pvc.yaml
    pod/pod-use-pvc created
    persistentvolumeclaim/pvc-gluster-heketi created
    [root@k8s-master01 gfs]# cat pod-use-pvc.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-use-pvc
    spec:
      containers:
      - name: pod-use-pvc
        image: busybox
        command:
        - sleep
        - "3600"    # assumed duration; keeps the pod alive for testing
        volumeMounts:
        - name: gluster-volume
          mountPath: "/pv-data"
          readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: pvc-gluster-heketi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-gluster-heketi
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "gluster-heketi"
      resources:
        requests:
          storage: 1Gi

  As soon as the PVC is created, it triggers Heketi: Heketi creates the bricks on the GlusterFS cluster, then creates and starts the volume (you can watch this with the commands below).
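
  To watch the provisioning happen, standard kubectl commands are enough (a sketch):

    kubectl describe pvc pvc-gluster-heketi    # events should end with ProvisioningSucceeded
    kubectl get events --sort-by=.metadata.creationTimestamp | grep -i provision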

  The resulting PV and PVC:

    [root@k8s-master01 gfs]# kubectl get pv,pvc | grep gluster

    persistentvolume/pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   Delete   Bound   default/pvc-gluster-heketi   gluster-heketi   5m
    persistentvolumeclaim/pvc-gluster-heketi   Bound   pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   gluster-heketi   5m

7. Test the data path

  Exec into the pod and create a directory:

    [root@k8s-master01 /]# kubectl exec -ti pod-use-pvc -- /bin/sh
    / # cd /pv-data/
    /pv-data # mkdir {..}
    /pv-data # ls
    {..}

  Mount test from the host:

    # Inspect the volume
    [root@k8s-master01 /]# heketi-cli topology info

    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

        File:  true
        Block: true

        Volumes:

            Name: vol_56d636b452d31a9d4cb523d752ad0891
            Size:
            Id: 56d636b452d31a9d4cb523d752ad0891
            Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
            Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891
            Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
            Durability Type: replicate
            Replica: 3
            Snapshot: Enabled
            ...

    # Or list the volumes directly
    [root@k8s-master01 mnt]# heketi-cli volume list
    Id:56d636b452d31a9d4cb523d752ad0891 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:vol_56d636b452d31a9d4cb523d752ad0891
    Id:828dc2dfaa00b7213e831b91c6213ae4 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:heketidbstorage

  vol_56d636b452d31a9d4cb523d752ad0891 is the volume name, and the Mount line (192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891) tells you how to mount it (see the sketch below for a more resilient mount).
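
  When mounting from a host it is worth passing along the backup-volfile-servers option that Heketi reports, so the mount still succeeds if 192.168.20.31 is down. A sketch (recent GlusterFS FUSE clients accept a colon-separated server list for this option):

    mount -t glusterfs \
      -o backup-volfile-servers=192.168.20.30:192.168.20.20 \
      192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/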

  Mount it and check the data:

    [root@k8s-master01 /]# mount -t glusterfs 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/
    [root@k8s-master01 /]# cd /mnt/
    [root@k8s-master01 mnt]# ls
    {..}

8. Test with a Deployment

    [root@k8s-master01 gfs]# cat nginx-gluster.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-gfs
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            name: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
            volumeMounts:
            - name: nginx-gfs-html
              mountPath: "/usr/share/nginx/html"
            - name: nginx-gfs-conf
              mountPath: "/etc/nginx/conf.d"
          volumes:
          - name: nginx-gfs-html
            persistentVolumeClaim:
              claimName: glusterfs-nginx-html
          - name: nginx-gfs-conf
            persistentVolumeClaim:
              claimName: glusterfs-nginx-conf
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: glusterfs-nginx-html
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "gluster-heketi"
      resources:
        requests:
          storage: 500Mi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: glusterfs-nginx-conf
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "gluster-heketi"
      resources:
        requests:
          storage: 10Mi
    [root@k8s-master01 gfs]# kubectl get po,pvc,pv | grep nginx
    pod/nginx-gfs-77c758ccc-2hwl6   1/1   Running             0   4m
    pod/nginx-gfs-77c758ccc-kxzfz   0/1   ContainerCreating   0   3m

    persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
    persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m

    persistentvolume/pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-html   gluster-heketi   4m
    persistentvolume/pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-conf   gluster-heketi   4m

  Check the mounts inside the pod. Note that although the PVCs request 500Mi and 10Mi, the bound volumes are 1Gi each: Heketi rounds volumes up to its minimum volume size of 1GiB.

    [root@k8s-master01 gfs]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- df -Th
    Filesystem Type Size Used Avail Use% Mounted on
    overlay overlay 86G .6G 80G % /
    tmpfs tmpfs .8G .8G % /dev
    tmpfs tmpfs .8G .8G % /sys/fs/cgroup
    /dev/mapper/centos-root xfs 86G .6G 80G % /etc/hosts
    shm tmpfs 64M 64M % /dev/shm
    192.168.20.20:vol_b9c68075c6f20438b46db892d15ed45a fuse.glusterfs 1014M 43M 972M % /etc/nginx/conf.d
    192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 fuse.glusterfs 1014M 43M 972M % /usr/share/nginx/html
    tmpfs tmpfs .8G 12K .8G % /run/secrets/kubernetes.io/serviceaccount

  Mount the html volume on the host and create index.html:

    [root@k8s-master01 gfs]# mount -t glusterfs 192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 /mnt/
    [root@k8s-master01 gfs]# cd /mnt/
    [root@k8s-master01 mnt]# ls
    [root@k8s-master01 mnt]# echo "test" > index.html
    [root@k8s-master01 mnt]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- cat /usr/share/nginx/html/index.html
    test

  Scale out nginx:

    [root@k8s-master01 ~]# kubectl get deploy
    NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    heketi      1         1         1            1           14h
    nginx-gfs   2         2         2            2           23m
    [root@k8s-master01 ~]# kubectl scale deploy nginx-gfs --replicas=3
    deployment.extensions/nginx-gfs scaled
    [root@k8s-master01 ~]# kubectl get po
    NAME                        READY   STATUS    RESTARTS   AGE
    glusterfs-5npwn             1/1     Running   0          18h
    glusterfs-8zfzq             1/1     Running   0          17h
    glusterfs-bd5dx             1/1     Running   0          18h
    heketi-5cb5f55d9f-5mtqt     1/1     Running   0          14h
    nginx-gfs-77c758ccc-2hwl6   1/1     Running   0          11m
    nginx-gfs-77c758ccc-6fphl   1/1     Running   0          8m
    nginx-gfs-77c758ccc-kxzfz   1/1     Running   0          10m

  Verify the file content from the new pod:

  [root@k8s-master01 ~]# kubectl exec -ti nginx-gfs-77c758ccc-6fphl -- cat /usr/share/nginx/html/index.html
  test

9. Expand GlusterFS

9.1 Add a disk to an existing node

  Building on the cluster above, suppose a new disk is added to k8s-node02.

  Find the pod name and IP of the GlusterFS pod on k8s-node02:

    [root@k8s-master01 ~]# kubectl get po -o wide -l glusterfs-node
    NAME              READY   STATUS    RESTARTS   AGE   IP              NODE
    glusterfs-5npwn   1/1     Running   0          20h   192.168.20.31   k8s-node02
    glusterfs-8zfzq   1/1     Running   0          20h   192.168.20.20   k8s-master01
    glusterfs-bd5dx   1/1     Running   0          20h   192.168.20.30   k8s-node01

  Confirm the new device on k8s-node02 (see also the lsblk sketch below):

    Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
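
  An alternative way to list candidate disks (plain lsblk; a brand-new device should show no partitions and no mountpoint):

    lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT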

  Use heketi-cli to find the cluster ID and all node IDs:

    [root@k8s-master01 ~]# heketi-cli cluster info
    Error: Cluster id missing
    [root@k8s-master01 ~]# heketi-cli cluster list
    Clusters:
    Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
    [root@k8s-master01 ~]# heketi-cli cluster info 5dec5676c731498c2bdf996e110a3e5e
    Cluster id: 5dec5676c731498c2bdf996e110a3e5e
    Nodes:
    0f00835397868d3591f45432e432ba38
    d38819746cab7d567ba5f5f4fea45d91
    fb181b0cef571e9af7d84d2ecf534585
    Volumes:
    32146a51be9f980c14bc86c34f67ebd5
    56d636b452d31a9d4cb523d752ad0891
    828dc2dfaa00b7213e831b91c6213ae4
    b9c68075c6f20438b46db892d15ed45a
    Block: true

    File: true

  Identify the node ID of k8s-node02:

    [root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
    Node Id: 0f00835397868d3591f45432e432ba38
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostname: k8s-node02
    Storage Hostname: 192.168.20.31
    Devices:
    Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):   Bricks:

  Add the disk to k8s-node02 in the GlusterFS cluster:

    [root@k8s-master01 ~]# heketi-cli device add --name=/dev/sdc --node=0f00835397868d3591f45432e432ba38
    Device added successfully

  Check the result:

    [root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
    Node Id: 0f00835397868d3591f45432e432ba38
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostname: k8s-node02
    Storage Hostname: 192.168.20.31
    Devices:
    Id:5539e74bc2955e7c70b3a20e72c04615   Name:/dev/sdc   State:online   Size (GiB):   Used (GiB):   Free (GiB):   Bricks:
    Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):   Bricks:

9.2 Add a new node

  Suppose k8s-master03 (IP 192.168.20.22) should join the GlusterFS cluster and contribute its /dev/sdc.

  Label the node; the DaemonSet then creates a pod on it automatically:

    [root@k8s-master01 kubernetes]# kubectl label node k8s-master03 storagenode=glusterfs
    node/k8s-master03 labeled
    [root@k8s-master01 kubernetes]# kubectl get pod -owide -l glusterfs-node
    NAME              READY   STATUS              RESTARTS   AGE   IP              NODE
    glusterfs-5npwn   1/1     Running             0          21h   192.168.20.31   k8s-node02
    glusterfs-8zfzq   1/1     Running             0          21h   192.168.20.20   k8s-master01
    glusterfs-96w74   0/1     ContainerCreating   0          2m    192.168.20.22   k8s-master03
    glusterfs-bd5dx   1/1     Running             0          21h   192.168.20.30   k8s-node01

  Run a peer probe from any existing GlusterFS pod:

    [root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer probe 192.168.20.22
    peer probe: success.
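
  You can verify the peering took effect with the standard gluster CLI (a sketch):

    [root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer status
    # expect 192.168.20.22 listed with "State: Peer in Cluster (Connected)"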

  Register the new node with Heketi:

    [root@k8s-master01 kubernetes]# heketi-cli cluster list
    Clusters:
    Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
    [root@k8s-master01 kubernetes]# heketi-cli node add --zone=1 --cluster=5dec5676c731498c2bdf996e110a3e5e --management-host-name=k8s-master03 --storage-host-name=192.168.20.22
    Node information:
    Id: 150bc8c458a70310c6137e840619758c
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostname k8s-master03
    Storage Hostname 192.168.20.22

  Add the new node's disk to the cluster:

    [root@k8s-master01 kubernetes]# heketi-cli device add --name=/dev/sdc --node=150bc8c458a70310c6137e840619758c
    Device added successfully

  Verify:

    [root@k8s-master01 kubernetes]# heketi-cli node list
    Id:0f00835397868d3591f45432e432ba38 Cluster:5dec5676c731498c2bdf996e110a3e5e
    Id:150bc8c458a70310c6137e840619758c Cluster:5dec5676c731498c2bdf996e110a3e5e
    Id:d38819746cab7d567ba5f5f4fea45d91 Cluster:5dec5676c731498c2bdf996e110a3e5e
    Id:fb181b0cef571e9af7d84d2ecf534585 Cluster:5dec5676c731498c2bdf996e110a3e5e
    [root@k8s-master01 kubernetes]# heketi-cli node info 150bc8c458a70310c6137e840619758c
    Node Id: 150bc8c458a70310c6137e840619758c
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostname: k8s-master03
    Storage Hostname: 192.168.20.22
    Devices:
    Id:2d5210c19858fb7ea3f805e6f582ecce   Name:/dev/sdc   State:online   Size (GiB):   Used (GiB):   Free (GiB):   Bricks:

  PS: an existing volume can be grown with heketi-cli volume expand --volume=<volume-id> --expand-size=10 (see the sketch below).
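
  A sketch of the full expansion workflow (the volume ID is the one provisioned for the test PVC earlier; --expand-size is in GiB):

    heketi-cli volume list                                    # find the volume ID
    heketi-cli volume expand --volume=56d636b452d31a9d4cb523d752ad0891 --expand-size=10
    heketi-cli volume info 56d636b452d31a9d4cb523d752ad0891   # size should have grown by 10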

10. Fixing a Heketi restart error

  The error looks like this:

    [heketi] ERROR <timestamp> heketi/apps/glusterfs/app.go:<line>: glusterfs.NewApp: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.

  Fix:

  Edit heketi.json and add "brick_min_size_gb": 1,

    [root@k8s-master01 kubernetes]# cat heketi.json
    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",

      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      "use_auth": false,

      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "My Secret"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "My Secret"
        }
      },

      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
        "executor": "kubernetes",

        "_db_comment": "Database file name",
        "db": "/var/lib/heketi/heketi.db",
        "brick_min_size_gb": 1,

        "kubeexec": {
          "rebalance_on_expansion": true
        },

        "sshexec": {
          "rebalance_on_expansion": true,
          "keyfile": "/etc/heketi/private_key",
          "fstab": "/etc/fstab",
          "port": "22",
          "user": "root",
          "sudo": false
        }
      },

      "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
      "backup_db_to_kube_secret": false
    }

  Delete and recreate the secret:

    [root@k8s-master01 kubernetes]# kubectl delete secret heketi-config-secret
    [root@k8s-master01 ~]# kubectl create secret generic heketi-config-secret --from-file heketi.json

  Update the heketi Deployment:

    # Add the following variable to the container's env section
    - name: HEKETI_IGNORE_STALE_OPERATIONS
      value: "true"
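
  Instead of editing the manifest by hand, the same change can be applied in place (kubectl set env triggers a rolling restart of the Deployment):

    kubectl set env deployment/heketi HEKETI_IGNORE_STALE_OPERATIONS=true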

 

11. GlusterFS container fails to start

    glusterd.service - GlusterFS, a clustered file-system server
       Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)

  Fix (only for a freshly built cluster with no data):

    rm -rf /var/lib/heketi/
    rm -rf /var/lib/glusterd
    rm -rf /etc/glusterfs/
    yum remove glusterfs
    yum install glusterfs glusterfs-fuse -y
