Preface

Key points

  • Difficulty: beginner
  • Expand disk capacity via the Heketi topology file
  • Expand disk capacity via the Heketi CLI

Lab server configuration (the architecture is a 1:1 replica of a small production environment; the specs differ slightly)

Hostname      IP             CPU   Memory (GB)   System Disk (GB)   Data Disk (GB)   Purpose
ks-master-0   192.168.9.91   2     4             50                 100              KubeSphere/k8s-master
ks-master-1   192.168.9.92   2     4             50                 100              KubeSphere/k8s-master
ks-master-2   192.168.9.93   2     4             50                 100              KubeSphere/k8s-master
ks-worker-0   192.168.9.95   2     4             50                 100              k8s-worker/CI
ks-worker-1   192.168.9.96   2     4             50                 100              k8s-worker
ks-worker-2   192.168.9.97   2     4             50                 100              k8s-worker
storage-0     192.168.9.81   2     4             50                 100+50+50        ElasticSearch/GlusterFS/Ceph/Longhorn/NFS
storage-1     192.168.9.82   2     4             50                 100+50+50        ElasticSearch/GlusterFS/Ceph/Longhorn
storage-2     192.168.9.83   2     4             50                 100+50+50        ElasticSearch/GlusterFS/Ceph/Longhorn
registry      192.168.9.80   2     4             50                 200              Sonatype Nexus 3
Total (10 hosts)             20    40            500                1100+

Software versions used in this walkthrough

  • OS: openEuler 22.03 LTS SP2 x86_64
  • KubeSphere: 3.3.2
  • Kubernetes: v1.24.12
  • Containerd: 1.6.4
  • KubeKey: v3.0.8
  • GlusterFS: 10.0-8
  • Heketi: v10.4.0

Introduction

In a previous hands-on tutorial we learned how to install GlusterFS and Heketi on openEuler 22.03 LTS SP2 and how to connect Kubernetes to GlusterFS as the cluster's backend storage through the in-tree storage driver.

Today we simulate a scenario that every production environment eventually runs into: the business has been online for a while, the GlusterFS data disks are full, and you need to expand them. What do you do?

For GlusterFS volumes managed by Heketi there are two expansion approaches:

  • Adjust the existing topology file and reload it
  • Expand directly with the Heketi CLI (simpler; recommended)

Prerequisites for the simulation:

  • On top of the existing 100 GB GlusterFS data disk, two extra 50 GB disks were added to each storage node, one for each expansion approach.

  • To make the simulation realistic, 95 GB of the existing 100 GB was consumed in advance.

The procedure itself is OS-independent; all operations apply equally to Heketi + GlusterFS storage clusters deployed on other operating systems.

Simulating an out-of-space failure

Create a new PVC

  • Create the PVC manifest: vi pvc-test-95g.yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-data-95g
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: glusterfs
    resources:
      requests:
        storage: 95Gi
  • Apply the manifest
  kubectl apply -f pvc-test-95g.yaml
  # The command itself does not error out, but the PVC will sit in Pending state (see the quick check below)
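
To confirm the symptom from the Kubernetes side, check the PVC status and its events. A minimal check, assuming default kubectl access from a control-plane node:

  # The PVC stays Pending; its events show the provisioning failure reported by Heketi
  kubectl get pvc test-data-95g
  kubectl describe pvc test-data-95g | tail -n 10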

Check the error messages

  • Check the Heketi service log (a filtered alternative is shown after the output)
  1. # Command to run (Heketi has no dedicated log file; it writes to /var/log/messages)
  2. tail -f /var/log/messages
  3. # Output below (only one complete pass is shown; the same errors keep repeating afterwards)
  4. [root@ks-storage-0 heketi]# tail -f /var/log/messages
  5. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
  6. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
  7. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
  8. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
  9. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
  10. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
  11. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #2
  12. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #3
  13. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/volume_entry_allocate.go:37:glusterfs.(*VolumeEntry).allocBricksInCluster: Minimum brick size limit reached. Out of space.
  14. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/operations_manage.go:220:glusterfs.AsyncHttpOperation: Create Volume Build Failed: No space
  15. Aug 16 15:29:32 ks-storage-0 heketi[34102]: [negroni] 2023-08-16T15:29:32+08:00 | 500 | #011 4.508081ms | 192.168.9.81:18080 | POST /volumes
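
Instead of tailing the whole syslog, you can filter just the Heketi errors. A small sketch; the journalctl line assumes Heketi runs as a systemd unit named heketi, which may differ in your setup:

  # Pull only the Heketi error lines out of the syslog
  grep 'heketi\[' /var/log/messages | grep -E 'ERROR|No space' | tail -n 20
  # If Heketi runs under systemd (unit name assumed to be "heketi"), journalctl works as well
  journalctl -u heketi --no-pager -n 50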

The simulation above shows how to tell that the data volumes are out of space when GlusterFS is used as the backend storage of a Kubernetes cluster:

  • The newly created PVC stays in Pending state
  • The Heketi error log contains the keyword Create Volume Build Failed: No space

When the GlusterFS storage cluster has allocated all of its disk space and can no longer create volumes, it is up to us as operators to add new disks and expand the cluster.
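
Before adding disks it is worth confirming that every device really is out of allocatable space. A crude but effective check against the default text output of heketi-cli (here every device has only 4 GiB left, far less than the 95 GiB request):

  # Prints one "Free (GiB)" line per device
  heketi-cli topology info | grep 'Free (GiB)'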

Expanding GlusterFS storage with Heketi

Note: to show the expansion process end to end, this article records the full output of every command. That makes the article somewhat long, so feel free to skim the output sections.

Check the current topology

  1. # Command to run
  2. heketi-cli topology info
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli topology info
  5. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  6. File: true
  7. Block: true
  8. Volumes:
  9. Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
  10. Size: 95
  11. Id: 75c90b8463d73a7fd9187a8ca22ff91f
  12. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  13. Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
  14. Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
  15. Durability Type: replicate
  16. Replica: 3
  17. Snapshot: Enabled
  18. Snapshot Factor: 1.00
  19. Bricks:
  20. Id: 37006636e1fe713a395755e8d34f6f20
  21. Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  22. Size (GiB): 95
  23. Node: 5e99fe0cd727b8066f200bad5524c544
  24. Device: 8fd529a668d5c19dfc37450b755230cd
  25. Id: 3dca27f98e1c20aa092c159226ddbe4d
  26. Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
  27. Size (GiB): 95
  28. Node: 7bb26eb30c1c61456b5ae8d805c01cf1
  29. Device: 51ad0981f8fed73002f5a7f2dd0d65c5
  30. Id: 7ac64e137d803cccd4b9fcaaed4be8ad
  31. Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  32. Size (GiB): 95
  33. Node: 0108350a9d13578febbfd0502f8077ff
  34. Device: 9af38756fe916fced666fcd3de786c19
  35. Nodes:
  36. Node Id: 0108350a9d13578febbfd0502f8077ff
  37. State: online
  38. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  39. Zone: 1
  40. Management Hostnames: 192.168.9.81
  41. Storage Hostnames: 192.168.9.81
  42. Devices:
  43. Id:9af38756fe916fced666fcd3de786c19 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  44. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  45. Bricks:
  46. Id:7ac64e137d803cccd4b9fcaaed4be8ad Size (GiB):95 Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  47. Node Id: 5e99fe0cd727b8066f200bad5524c544
  48. State: online
  49. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  50. Zone: 1
  51. Management Hostnames: 192.168.9.82
  52. Storage Hostnames: 192.168.9.82
  53. Devices:
  54. Id:8fd529a668d5c19dfc37450b755230cd State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  55. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
  56. Bricks:
  57. Id:37006636e1fe713a395755e8d34f6f20 Size (GiB):95 Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  58. Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
  59. State: online
  60. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  61. Zone: 1
  62. Management Hostnames: 192.168.9.83
  63. Storage Hostnames: 192.168.9.83
  64. Devices:
  65. Id:51ad0981f8fed73002f5a7f2dd0d65c5 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  66. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  67. Bricks:
  68. Id:3dca27f98e1c20aa092c159226ddbe4d Size (GiB):95 Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick

Check the current node information

  • List the nodes
  1. # Command to run
  2. heketi-cli node list
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli node list
  5. Id:0108350a9d13578febbfd0502f8077ff Cluster:9ad37206ce6575b5133179ba7c6e0935
  6. Id:5e99fe0cd727b8066f200bad5524c544 Cluster:9ad37206ce6575b5133179ba7c6e0935
  7. Id:7bb26eb30c1c61456b5ae8d805c01cf1 Cluster:9ad37206ce6575b5133179ba7c6e0935
  • View node details

Taking the storage-0 node as an example, view its details (a per-device drill-down follows after the output).

  1. # Command to run (replace xxxxxx with the node ID)
  2. heketi-cli node info xxxxxx
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
  5. Node Id: 0108350a9d13578febbfd0502f8077ff
  6. State: online
  7. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  8. Zone: 1
  9. Management Hostname: 192.168.9.81
  10. Storage Hostname: 192.168.9.81
  11. Devices:
  12. Id:9af38756fe916fced666fcd3de786c19 Name:/dev/sdb State:online Size (GiB):99 Used (GiB):95 Free (GiB):4 Bricks:1
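
To drill into a single device, heketi-cli can also report per-device details, including the bricks it hosts and its remaining free space. The device ID below is the one shown for /dev/sdb on storage-0:

  heketi-cli device info 9af38756fe916fced666fcd3de786c19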

Check the current VG information

Taking storage-0 as an example, view the VGs that Heketi has allocated (the system VG has been removed from the output).

  1. # Quick view
  2. [root@ks-storage-0 heketi]# vgs
  3. VG #PV #LV #SN Attr VSize VFree
  4. vg_9af38756fe916fced666fcd3de786c19 1 2 0 wz--n- 99.87g <3.92g
  5. # Detailed view
  6. [root@ks-storage-0 heketi]# vgdisplay vg_9af38756fe916fced666fcd3de786c19
  7. --- Volume group ---
  8. VG Name vg_9af38756fe916fced666fcd3de786c19
  9. System ID
  10. Format lvm2
  11. Metadata Areas 1
  12. Metadata Sequence No 187
  13. VG Access read/write
  14. VG Status resizable
  15. MAX LV 0
  16. Cur LV 2
  17. Open LV 1
  18. Max PV 0
  19. Cur PV 1
  20. Act PV 1
  21. VG Size 99.87 GiB
  22. PE Size 4.00 MiB
  23. Total PE 25567
  24. Alloc PE / Size 24564 / 95.95 GiB
  25. Free PE / Size 1003 / <3.92 GiB
  26. VG UUID jrxfIv-Fnjq-IYF8-aubc-t2y0-zwUp-YxjkDC

Check the current LV information

Taking storage-0 as an example, view the LVs that Heketi has allocated (the system LVs have been removed from the output).

  1. # Quick view
  2. [root@ks-storage-0 heketi]# lvs
  3. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  4. brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz-- 95.00g tp_3c68ad0d0752d41ede13afdc3db9637b 0.05
  5. tp_3c68ad0d0752d41ede13afdc3db9637b vg_9af38756fe916fced666fcd3de786c19 twi-aotz-- 95.00g 0.05 3.31
  6. # Detailed view
  7. [root@ks-storage-0 heketi]# lvdisplay
  8. --- Logical volume ---
  9. LV Name tp_3c68ad0d0752d41ede13afdc3db9637b
  10. VG Name vg_9af38756fe916fced666fcd3de786c19
  11. LV UUID Aho32F-tBTa-VTTp-VfwY-qRbm-WUxu-puj4kv
  12. LV Write Access read/write (activated read only)
  13. LV Creation host, time ks-storage-0, 2023-08-16 15:21:06 +0800
  14. LV Pool metadata tp_3c68ad0d0752d41ede13afdc3db9637b_tmeta
  15. LV Pool data tp_3c68ad0d0752d41ede13afdc3db9637b_tdata
  16. LV Status available
  17. # open 0
  18. LV Size 95.00 GiB
  19. Allocated pool data 0.05%
  20. Allocated metadata 3.31%
  21. Current LE 24320
  22. Segments 1
  23. Allocation inherit
  24. Read ahead sectors auto
  25. - currently set to 8192
  26. Block device 253:5
  27. --- Logical volume ---
  28. LV Path /dev/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad
  29. LV Name brick_7ac64e137d803cccd4b9fcaaed4be8ad
  30. VG Name vg_9af38756fe916fced666fcd3de786c19
  31. LV UUID VGTOMk-d07E-XWhw-Omzz-Pc1t-WwEH-Wh0EuY
  32. LV Write Access read/write
  33. LV Creation host, time ks-storage-0, 2023-08-16 15:21:10 +0800
  34. LV Pool name tp_3c68ad0d0752d41ede13afdc3db9637b
  35. LV Status available
  36. # open 1
  37. LV Size 95.00 GiB
  38. Mapped size 0.05%
  39. Current LE 24320
  40. Segments 1
  41. Allocation inherit
  42. Read ahead sectors auto
  43. - currently set to 8192

Note: Heketi creates its LVs on top of LVM thin pools, which is why the output shows two LVs per brick. The LV whose name starts with brick_ is the one that actually holds the data; the tp_ LV is the thin pool backing it.
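
The relationship can be checked directly with lvs. A minimal sketch using standard lvs output fields and the VG name from the output above:

  # Show each brick LV together with the thin pool it lives in
  lvs -o lv_name,pool_lv,lv_size,data_percent vg_9af38756fe916fced666fcd3de786c19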

Expansion approach 1: adjust the topology file

Prerequisites

  • Device to add: /dev/sdc (make sure it is visible on every storage node first; see the check below)
  • Added capacity: 50 GB
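
Before handing a disk to Heketi it should be attached, visible, and free of old signatures on every node. A quick, non-destructive check (wipefs -n only reports existing signatures, it does not erase anything):

  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdc
  wipefs -n /dev/sdc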

View the current topology file

  • cat /etc/heketi/topology.json
  {
    "clusters": [
      {
        "nodes": [
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.81"
                ],
                "storage": [
                  "192.168.9.81"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.82"
                ],
                "storage": [
                  "192.168.9.82"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.83"
                ],
                "storage": [
                  "192.168.9.83"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb"
            ]
          }
        ]
      }
    ]
  }

Modify the topology file

Edit the existing topology.json: vi /etc/heketi/topology.json

Add /dev/sdc under the devices list of every node, and mind the comma that now has to follow "/dev/sdb".

The modified topology.json looks like this:

  {
    "clusters": [
      {
        "nodes": [
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.81"
                ],
                "storage": [
                  "192.168.9.81"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb",
              "/dev/sdc"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.82"
                ],
                "storage": [
                  "192.168.9.82"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb",
              "/dev/sdc"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.9.83"
                ],
                "storage": [
                  "192.168.9.83"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb",
              "/dev/sdc"
            ]
          }
        ]
      }
    ]
  }

Reload the topology

  1. # Command to run
  2. heketi-cli topology load --json=/etc/heketi/topology.json
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
  5. Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
  6. Found device /dev/sdb
  7. Adding device /dev/sdc ... OK
  8. Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
  9. Found device /dev/sdb
  10. Adding device /dev/sdc ... OK
  11. Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
  12. Found device /dev/sdb
  13. Adding device /dev/sdc ... OK

Check the updated topology

  1. # Command to run
  2. heketi-cli topology info
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli topology info
  5. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  6. File: true
  7. Block: true
  8. Volumes:
  9. Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
  10. Size: 95
  11. Id: 75c90b8463d73a7fd9187a8ca22ff91f
  12. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  13. Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
  14. Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
  15. Durability Type: replicate
  16. Replica: 3
  17. Snapshot: Enabled
  18. Snapshot Factor: 1.00
  19. Bricks:
  20. Id: 37006636e1fe713a395755e8d34f6f20
  21. Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  22. Size (GiB): 95
  23. Node: 5e99fe0cd727b8066f200bad5524c544
  24. Device: 8fd529a668d5c19dfc37450b755230cd
  25. Id: 3dca27f98e1c20aa092c159226ddbe4d
  26. Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
  27. Size (GiB): 95
  28. Node: 7bb26eb30c1c61456b5ae8d805c01cf1
  29. Device: 51ad0981f8fed73002f5a7f2dd0d65c5
  30. Id: 7ac64e137d803cccd4b9fcaaed4be8ad
  31. Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  32. Size (GiB): 95
  33. Node: 0108350a9d13578febbfd0502f8077ff
  34. Device: 9af38756fe916fced666fcd3de786c19
  35. Nodes:
  36. Node Id: 0108350a9d13578febbfd0502f8077ff
  37. State: online
  38. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  39. Zone: 1
  40. Management Hostnames: 192.168.9.81
  41. Storage Hostnames: 192.168.9.81
  42. Devices:
  43. Id:9af38756fe916fced666fcd3de786c19 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  44. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  45. Bricks:
  46. Id:7ac64e137d803cccd4b9fcaaed4be8ad Size (GiB):95 Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  47. Id:ab5f766ddc779449db2bf45bb165fbff State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  48. Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
  49. Bricks:
  50. Node Id: 5e99fe0cd727b8066f200bad5524c544
  51. State: online
  52. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  53. Zone: 1
  54. Management Hostnames: 192.168.9.82
  55. Storage Hostnames: 192.168.9.82
  56. Devices:
  57. Id:8fd529a668d5c19dfc37450b755230cd State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  58. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
  59. Bricks:
  60. Id:37006636e1fe713a395755e8d34f6f20 Size (GiB):95 Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  61. Id:b648c995486b0e785f78a8b674d8b590 State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  62. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc
  63. Bricks:
  64. Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
  65. State: online
  66. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  67. Zone: 1
  68. Management Hostnames: 192.168.9.83
  69. Storage Hostnames: 192.168.9.83
  70. Devices:
  71. Id:51ad0981f8fed73002f5a7f2dd0d65c5 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  72. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  73. Bricks:
  74. Id:3dca27f98e1c20aa092c159226ddbe4d Size (GiB):95 Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
  75. Id:9b39c4e288d4a1783d204d2033444c00 State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  76. Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
  77. Bricks:

Check the updated node information

Taking storage-0 as an example, view the updated node details (focus on the Devices section).

  1. # Command to run (replace xxxxxx with the node ID)
  2. heketi-cli node info xxxxxx
  3. # Expected output
  4. [root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
  5. Node Id: 0108350a9d13578febbfd0502f8077ff
  6. State: online
  7. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  8. Zone: 1
  9. Management Hostname: 192.168.9.81
  10. Storage Hostname: 192.168.9.81
  11. Devices:
  12. Id:9af38756fe916fced666fcd3de786c19 Name:/dev/sdb State:online Size (GiB):99 Used (GiB):95 Free (GiB):4 Bricks:1
  13. Id:ab5f766ddc779449db2bf45bb165fbff Name:/dev/sdc State:online Size (GiB):49 Used (GiB):0 Free (GiB):49 Bricks:0

Check the updated VG information

Taking storage-0 as an example, view the updated VGs (the system VG has been removed from the output); a cross-check with pvs follows after the output.

  1. [root@ks-storage-0 heketi]# vgs
  2. VG #PV #LV #SN Attr VSize VFree
  3. vg_9af38756fe916fced666fcd3de786c19 1 2 0 wz--n- 99.87g <3.92g
  4. vg_ab5f766ddc779449db2bf45bb165fbff 1 0 0 wz--n- 49.87g 49.87g
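
To see which physical disk backs each Heketi-managed VG, pvs gives a compact mapping. A minimal sketch with standard pvs output fields:

  # /dev/sdc should now appear as the sole PV of the new vg_ab5f... volume group
  pvs -o pv_name,vg_name,pv_size,pv_free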

Create a test PVC

Run the following commands on the ks-master-0 node.

  • Create the PVC manifest: vi pvc-test-45g.yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-data-45g
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: glusterfs
    resources:
      requests:
        storage: 45Gi
  • Apply the manifest
  kubectl apply -f pvc-test-45g.yaml
  • Check the result (e.g. with kubectl get pvc -o wide)
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
  test-data-45g Bound pvc-19343e73-6b14-40ca-b65b-356d38d16bb0 45Gi RWO glusterfs 17s Filesystem
  test-data-95g Bound pvc-2461f639-1634-4085-af2f-b526a3800217 95Gi RWO glusterfs 42h Filesystem
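
The new PVC is Bound. To trace it back to the underlying GlusterFS volume, the PV object created by the in-tree provisioner records the Gluster volume name (the PV name below is taken from the output above):

  kubectl get pv pvc-19343e73-6b14-40ca-b65b-356d38d16bb0 -o jsonpath='{.spec.glusterfs.path}{"\n"}'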

Check the newly created volume

  • List the volumes
  1. [root@ks-storage-0 heketi]# heketi-cli volume list
  2. Id:75c90b8463d73a7fd9187a8ca22ff91f Cluster:9ad37206ce6575b5133179ba7c6e0935 Name:vol_75c90b8463d73a7fd9187a8ca22ff91f
  3. Id:ebd76f343b04f89ed4166c8f1ece0361 Cluster:9ad37206ce6575b5133179ba7c6e0935 Name:vol_ebd76f343b04f89ed4166c8f1ece0361
  • View the new volume's details
  1. [root@ks-storage-0 heketi]# heketi-cli volume info ebd76f343b04f89ed4166c8f1ece0361
  2. Name: vol_ebd76f343b04f89ed4166c8f1ece0361
  3. Size: 45
  4. Volume Id: ebd76f343b04f89ed4166c8f1ece0361
  5. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  6. Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
  7. Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
  8. Block: false
  9. Free Size: 0
  10. Reserved Size: 0
  11. Block Hosting Restriction: (none)
  12. Block Volumes: []
  13. Durability Type: replicate
  14. Distribute Count: 1
  15. Replica Count: 3
  16. Snapshot Factor: 1.00
  • View the newly created LVs

Taking storage-0 as an example, view the newly allocated LVs (the system LVs have been removed from the output); a df cross-check of the brick mounts follows after the output.

  1. [root@ks-storage-0 heketi]# lvs
  2. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  3. brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz-- 95.00g tp_3c68ad0d0752d41ede13afdc3db9637b 0.05
  4. tp_3c68ad0d0752d41ede13afdc3db9637b vg_9af38756fe916fced666fcd3de786c19 twi-aotz-- 95.00g 0.05 3.31
  5. brick_27e193590ccdb5fba287fb66d5473074 vg_ab5f766ddc779449db2bf45bb165fbff Vwi-aotz-- 45.00g tp_7bdcf1e2c3aab06cb25906f017ae1b08 0.06
  6. tp_7bdcf1e2c3aab06cb25906f017ae1b08 vg_ab5f766ddc779449db2bf45bb165fbff twi-aotz-- 45.00g 0.06 6.94
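
Each brick LV is also mounted under /var/lib/heketi/mounts on the node; a quick df confirms that the new 45 GB brick on /dev/sdc is mounted alongside the original 95 GB brick:

  df -h | grep '/var/lib/heketi/mounts'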

This completes the hands-on demonstration of expanding the cluster through the Heketi topology file and verifying the result.

Expansion approach 2: expand directly with the Heketi CLI

Prerequisites

  • Device to add: /dev/sdd
  • Added capacity: 50 GB

Check the node information

  • List the nodes to get their node IDs
  1. [root@ks-storage-0 heketi]# heketi-cli node list
  2. Id:0108350a9d13578febbfd0502f8077ff Cluster:9ad37206ce6575b5133179ba7c6e0935
  3. Id:5e99fe0cd727b8066f200bad5524c544 Cluster:9ad37206ce6575b5133179ba7c6e0935
  4. Id:7bb26eb30c1c61456b5ae8d805c01cf1 Cluster:9ad37206ce6575b5133179ba7c6e0935
  • View node details and the existing devices (taking storage-0 as an example).
  1. [root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
  2. Node Id: 0108350a9d13578febbfd0502f8077ff
  3. State: online
  4. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  5. Zone: 1
  6. Management Hostname: 192.168.9.81
  7. Storage Hostname: 192.168.9.81
  8. Devices:
  9. Id:9af38756fe916fced666fcd3de786c19 Name:/dev/sdb State:online Size (GiB):99 Used (GiB):95 Free (GiB):4 Bricks:1
  10. Id:ab5f766ddc779449db2bf45bb165fbff Name:/dev/sdc State:online Size (GiB):49 Used (GiB):45 Free (GiB):4 Bricks:1

Add the new device

The newly attached disk shows up as /dev/sdd, and the device-add command has to be run once for every node (a loop version is sketched after the output).

  1. # Command to run (replace xxxxxx with the node ID)
  2. heketi-cli device add --name /dev/sdd --node xxxxxx
  3. # Actual output
  4. [root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 0108350a9d13578febbfd0502f8077ff
  5. Device added successfully
  6. [root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 5e99fe0cd727b8066f200bad5524c544
  7. Device added successfully
  8. [root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 7bb26eb30c1c61456b5ae8d805c01cf1
  9. Device added successfully
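
The three calls above can also be issued in one pass. A small convenience sketch; it assumes the new disk is named /dev/sdd on all three nodes and pulls the node IDs from heketi-cli node list:

  for node in $(heketi-cli node list | awk -F'[: ]' '{print $2}'); do
    heketi-cli device add --name /dev/sdd --node "$node"
  done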

Check the updated node information

Taking storage-0 as an example, view the updated node information (focus on the Devices section).

  1. [root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
  2. Node Id: 0108350a9d13578febbfd0502f8077ff
  3. State: online
  4. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  5. Zone: 1
  6. Management Hostname: 192.168.9.81
  7. Storage Hostname: 192.168.9.81
  8. Devices:
  9. Id:9af38756fe916fced666fcd3de786c19 Name:/dev/sdb State:online Size (GiB):99 Used (GiB):95 Free (GiB):4 Bricks:1
  10. Id:ab5f766ddc779449db2bf45bb165fbff Name:/dev/sdc State:online Size (GiB):49 Used (GiB):45 Free (GiB):4 Bricks:1
  11. Id:c189451c573814e05ebd83d46ab9a0af Name:/dev/sdd State:online Size (GiB):49 Used (GiB):0 Free (GiB):49 Bricks:0

Check the updated topology

  1. [root@ks-storage-0 heketi]# heketi-cli topology info
  2. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  3. File: true
  4. Block: true
  5. Volumes:
  6. Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
  7. Size: 95
  8. Id: 75c90b8463d73a7fd9187a8ca22ff91f
  9. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  10. Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
  11. Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
  12. Durability Type: replicate
  13. Replica: 3
  14. Snapshot: Enabled
  15. Snapshot Factor: 1.00
  16. Bricks:
  17. Id: 37006636e1fe713a395755e8d34f6f20
  18. Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  19. Size (GiB): 95
  20. Node: 5e99fe0cd727b8066f200bad5524c544
  21. Device: 8fd529a668d5c19dfc37450b755230cd
  22. Id: 3dca27f98e1c20aa092c159226ddbe4d
  23. Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
  24. Size (GiB): 95
  25. Node: 7bb26eb30c1c61456b5ae8d805c01cf1
  26. Device: 51ad0981f8fed73002f5a7f2dd0d65c5
  27. Id: 7ac64e137d803cccd4b9fcaaed4be8ad
  28. Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  29. Size (GiB): 95
  30. Node: 0108350a9d13578febbfd0502f8077ff
  31. Device: 9af38756fe916fced666fcd3de786c19
  32. Name: vol_ebd76f343b04f89ed4166c8f1ece0361
  33. Size: 45
  34. Id: ebd76f343b04f89ed4166c8f1ece0361
  35. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  36. Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
  37. Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
  38. Durability Type: replicate
  39. Replica: 3
  40. Snapshot: Enabled
  41. Snapshot Factor: 1.00
  42. Bricks:
  43. Id: 27e193590ccdb5fba287fb66d5473074
  44. Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
  45. Size (GiB): 45
  46. Node: 0108350a9d13578febbfd0502f8077ff
  47. Device: ab5f766ddc779449db2bf45bb165fbff
  48. Id: 4fab639b551e573c61141508d75bf605
  49. Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick
  50. Size (GiB): 45
  51. Node: 7bb26eb30c1c61456b5ae8d805c01cf1
  52. Device: 9b39c4e288d4a1783d204d2033444c00
  53. Id: 8eba3fb2253452999a1ec60f647dcf03
  54. Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick
  55. Size (GiB): 45
  56. Node: 5e99fe0cd727b8066f200bad5524c544
  57. Device: b648c995486b0e785f78a8b674d8b590
  58. Nodes:
  59. Node Id: 0108350a9d13578febbfd0502f8077ff
  60. State: online
  61. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  62. Zone: 1
  63. Management Hostnames: 192.168.9.81
  64. Storage Hostnames: 192.168.9.81
  65. Devices:
  66. Id:9af38756fe916fced666fcd3de786c19 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  67. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  68. Bricks:
  69. Id:7ac64e137d803cccd4b9fcaaed4be8ad Size (GiB):95 Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
  70. Id:ab5f766ddc779449db2bf45bb165fbff State:online Size (GiB):49 Used (GiB):45 Free (GiB):4
  71. Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
  72. Bricks:
  73. Id:27e193590ccdb5fba287fb66d5473074 Size (GiB):45 Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
  74. Id:c189451c573814e05ebd83d46ab9a0af State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  75. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
  76. Bricks:
  77. Node Id: 5e99fe0cd727b8066f200bad5524c544
  78. State: online
  79. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  80. Zone: 1
  81. Management Hostnames: 192.168.9.82
  82. Storage Hostnames: 192.168.9.82
  83. Devices:
  84. Id:5cd245e9826c0bfa46bef0c0d41ed0ed State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  85. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
  86. Bricks:
  87. Id:8fd529a668d5c19dfc37450b755230cd State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  88. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb
  89. Bricks:
  90. Id:37006636e1fe713a395755e8d34f6f20 Size (GiB):95 Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
  91. Id:b648c995486b0e785f78a8b674d8b590 State:online Size (GiB):49 Used (GiB):45 Free (GiB):4
  92. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc
  93. Bricks:
  94. Id:8eba3fb2253452999a1ec60f647dcf03 Size (GiB):45 Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick
  95. Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
  96. State: online
  97. Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
  98. Zone: 1
  99. Management Hostnames: 192.168.9.83
  100. Storage Hostnames: 192.168.9.83
  101. Devices:
  102. Id:51ad0981f8fed73002f5a7f2dd0d65c5 State:online Size (GiB):99 Used (GiB):95 Free (GiB):4
  103. Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb
  104. Bricks:
  105. Id:3dca27f98e1c20aa092c159226ddbe4d Size (GiB):95 Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
  106. Id:6656246eafefffaea49399444989eab1 State:online Size (GiB):49 Used (GiB):0 Free (GiB):49
  107. Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd
  108. Bricks:
  109. Id:9b39c4e288d4a1783d204d2033444c00 State:online Size (GiB):49 Used (GiB):45 Free (GiB):4
  110. Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc
  111. Bricks:
  112. Id:4fab639b551e573c61141508d75bf605 Size (GiB):45 Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick

Note: focus on the Devices sections of each node.

Check the updated VG information

Taking storage-0 as an example, view the updated VG information (this time the output also includes the system VG openeuler).

  1. [root@ks-storage-0 heketi]# vgs
  2. VG #PV #LV #SN Attr VSize VFree
  3. openeuler 1 2 0 wz--n- <19.00g 0
  4. vg_9af38756fe916fced666fcd3de786c19 1 2 0 wz--n- 99.87g <3.92g
  5. vg_ab5f766ddc779449db2bf45bb165fbff 1 2 0 wz--n- 49.87g <4.42g
  6. vg_c189451c573814e05ebd83d46ab9a0af 1 0 0 wz--n- 49.87g 49.87g

To keep the article from getting even longer, the PVC creation and verification steps are omitted here; you can repeat the earlier steps to verify on your own (a minimal sketch follows below).
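
A minimal re-run of the earlier verification against the newly added /dev/sdd devices; the PVC name and size are illustrative, and 45Gi fits into the new 50 GB disks just as it did before:

  # pvc-test-45g-sdd.yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-data-45g-sdd
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: glusterfs
    resources:
      requests:
        storage: 45Gi

  kubectl apply -f pvc-test-45g-sdd.yaml
  kubectl get pvc test-data-45g-sdd   # should reach Bound within a few seconds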

This completes the hands-on demonstration of expanding the cluster directly with the Heketi CLI.

Common issues

Issue 1

  • Error message
  1. [root@ks-master-0 k8s-yaml]# kubectl apply -f pvc-test-10g.yaml
  2. The PersistentVolumeClaim "test-data-10G" is invalid: metadata.name: Invalid value: "test-data-10G": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  • Solution

The metadata.name defined in the PVC manifest used uppercase letters (test-data-10G); renaming it to lowercase (test-data-10g) fixes the error.

Issue 2

  • Error message
  1. The PersistentVolumeClaim "test-data-10g" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
  • Solution

This one was an operator mistake: a PVC named test-data-10g already existed, and the same manifest was re-applied with a smaller storage value. Kubernetes rejects this because a PVC's requested size can only grow, never shrink.

Issue 3

  • Error message
  1. [root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
  2. Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
  3. Found device /dev/sdb
  4. Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
  5. Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
  6. Found device /dev/sdb
  7. Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
  8. Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
  9. Found device /dev/sdb
  10. Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?): No device found for /dev/sdc.
  • Solution

Another operator mistake: the topology reload was run before the /dev/sdc disk had actually been attached to the nodes. Attach the disk (and confirm it with lsblk) before reloading the topology.

Summary

This article described in detail two ways for operators to add new physical disks to an existing Heketi-managed GlusterFS storage cluster when its data disks are fully allocated and no new volumes can be created:

  • Expansion approach 1: adjust the topology file and reload it

  • Expansion approach 2: expand directly with the Heketi CLI

The article is based on a real production case and every operation has been verified in practice. Still, data is priceless and expansion carries risk, so proceed with caution.

Published via the multi-channel blogging platform OpenWrite.
