cluster map

    [ceph: root@clienta /]# ceph mon dump
    epoch 4
    fsid 2ae6d05a-229a-11ec-925e-52540000fa0c
    last_changed 2021-10-01T09:33:53.880442+0000
    created 2021-10-01T09:30:30.146231+0000
    min_mon_release 16 (pacific)
    election_strategy: 1
    0: [v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0] mon.serverc.lab.example.com
    1: [v2:172.25.250.10:3300/0,v1:172.25.250.10:6789/0] mon.clienta
    2: [v2:172.25.250.13:3300/0,v1:172.25.250.13:6789/0] mon.serverd
    3: [v2:172.25.250.14:3300/0,v1:172.25.250.14:6789/0] mon.servere
    dumped monmap epoch 4    # the epoch number makes it easy to check that maps are in sync
    [ceph: root@clienta /]#
    [ceph: root@clienta /]# ceph osd dump
    epoch 401
    fsid 2ae6d05a-229a-11ec-925e-52540000fa0c
    created 2021-10-01T09:30:32.028240+0000
    modified 2022-08-20T14:56:19.230208+0000
    flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
    crush_version 77
    full_ratio 0.95
    backfillfull_ratio 0.9
    nearfull_ratio 0.85
    require_min_compat_client luminous
    min_compat_client jewel
    require_osd_release pacific
    stretch_mode_enabled false
    pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 374 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
    pool 2 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 48 flags hashpspool stripe_width 0 application rgw
    pool 3 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 50 flags hashpspool stripe_width 0 application rgw
    pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 52 flags hashpspool stripe_width 0 application rgw
    pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 184 lfor 0/184/182 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 application rgw
    pool 10 'pool1' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 266 flags hashpspool stripe_width 0
    pool 11 'ssdpool' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 338 flags hashpspool stripe_width 0
    pool 12 'myecpool' erasure profile myprofile1 size 4 min_size 3 crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 345 flags hashpspool stripe_width 8192
    pool 13 'myecpool2' erasure profile myprofile2 size 4 min_size 3 crush_rule 4 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 350 flags hashpspool stripe_width 8192
    max_osd 9
    osd.0 up in weight 1 up_from 360 up_thru 398 down_at 354 last_clean_interval [243,350) [v2:172.25.250.12:6800/2528022353,v1:172.25.250.12:6801/2528022353] [v2:172.25.249.12:6802/2528022353,v1:172.25.249.12:6803/2528022353] exists,up 5be66be9-8262-4c4b-9654-ed549f6280f7
    osd.1 up in weight 1 up_from 359 up_thru 397 down_at 354 last_clean_interval [244,350) [v2:172.25.250.12:6808/3093181835,v1:172.25.250.12:6809/3093181835] [v2:172.25.249.12:6810/3093181835,v1:172.25.249.12:6811/3093181835] exists,up 3f751363-a03c-4b76-af92-8114e38bfa09
    osd.2 up in weight 1 up_from 363 up_thru 378 down_at 354 last_clean_interval [242,350) [v2:172.25.250.12:6816/1645468882,v1:172.25.250.12:6817/1645468882] [v2:172.25.249.12:6818/1645468882,v1:172.25.249.12:6819/1645468882] exists,up 68d72b66-4c99-4d54-a7e4-f1cb8f8e5054
    osd.3 up in weight 1 up_from 363 up_thru 390 down_at 354 last_clean_interval [236,350) [v2:172.25.250.13:6816/2535000344,v1:172.25.250.13:6817/2535000344] [v2:172.25.249.13:6818/2535000344,v1:172.25.249.13:6819/2535000344] exists,up 21a9ebe9-908d-4026-8a57-8fbee935033e
    osd.4 up in weight 1 up_from 354 up_thru 400 down_at 353 last_clean_interval [237,350) [v2:172.25.250.14:6800/408153468,v1:172.25.250.14:6801/408153468] [v2:172.25.249.14:6802/408153468,v1:172.25.249.14:6803/408153468] exists,up 85202210-9298-4443-9140-027792ddc891
    osd.5 up in weight 1 up_from 363 up_thru 399 down_at 354 last_clean_interval [235,350) [v2:172.25.250.13:6802/1745131990,v1:172.25.250.13:6803/1745131990] [v2:172.25.249.13:6804/1745131990,v1:172.25.249.13:6805/1745131990] exists,up 252d1668-c4c2-42ca-85fe-87c7419557d6
    osd.6 up in weight 1 up_from 353 up_thru 381 down_at 352 last_clean_interval [237,350) [v2:172.25.250.14:6804/1927667266,v1:172.25.250.14:6806/1927667266] [v2:172.25.249.14:6807/1927667266,v1:172.25.249.14:6811/1927667266] exists,up 2d753bfc-32f6-4663-9411-16067f366977
    osd.7 up in weight 1 up_from 363 up_thru 378 down_at 354 last_clean_interval [236,350) [v2:172.25.250.13:6800/4217605284,v1:172.25.250.13:6801/4217605284] [v2:172.25.249.13:6806/4217605284,v1:172.25.249.13:6808/4217605284] exists,up fccc62ed-9b04-456a-95c3-5c3cb27e56d4
    osd.8 up in weight 1 up_from 357 up_thru 399 down_at 356 last_clean_interval [237,350) [v2:172.25.250.14:6816/3368063169,v1:172.25.250.14:6817/3368063169] [v2:172.25.249.14:6818/3368063169,v1:172.25.249.14:6819/3368063169] exists,up 8b0789f2-f40e-4d63-ac52-343b8e11f24c
    blocklist 172.25.250.14:6825/1595923670 expires 2022-08-21T14:55:16.971863+0000
    blocklist 172.25.250.14:6824/1595923670 expires 2022-08-21T14:55:16.971863+0000
    blocklist 172.25.250.14:0/3491691321 expires 2022-08-21T14:55:16.971863+0000
    blocklist 172.25.250.14:0/2738777763 expires 2022-08-21T14:55:16.971863+0000
    blocklist 172.25.250.12:0/1239900377 expires 2022-08-20T16:19:27.333673+0000
    blocklist 172.25.250.12:6825/3912612299 expires 2022-08-20T16:19:27.333673+0000
    blocklist 172.25.250.12:6824/3912612299 expires 2022-08-20T16:19:27.333673+0000
    blocklist 172.25.250.12:0/2171541544 expires 2022-08-20T16:19:27.333673+0000
    blocklist 172.25.250.12:0/1139201862 expires 2022-08-20T16:19:27.333673+0000
    blocklist 172.25.250.12:0/2525786376 expires 2022-08-21T08:52:54.506446+0000
    blocklist 172.25.250.14:0/3949782568 expires 2022-08-21T14:55:16.971863+0000
    blocklist 172.25.250.12:0/1486113939 expires 2022-08-21T08:52:54.506446+0000
    blocklist 172.25.250.12:6825/2537331399 expires 2022-08-21T08:52:54.506446+0000
    blocklist 172.25.250.12:0/2290094124 expires 2022-08-21T08:52:54.506446+0000
    blocklist 172.25.250.12:6824/2537331399 expires 2022-08-21T08:52:54.506446+0000
    [ceph: root@clienta /]#
    [ceph: root@clienta /]# ceph pg dump
    # output omitted here, it is very long
    # rule of thumb: each OSD should carry at most about 100-200 PGs
    [ceph: root@clienta /]# ceph mgr dump | grep "dashboard"
    "config_dashboard": {
    "name": "config_dashboard",
    "default_value": "registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest",
    "name": "dashboard",
    "default_value": "osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps",
    "config_dashboard": {
    "name": "config_dashboard",
    "default_value": "registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest",
    "name": "dashboard",
    "default_value": "osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps",
    "config_dashboard": {
    "name": "config_dashboard",
    "default_value": "registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest",
    "name": "dashboard",
    "default_value": "osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps",
    "dashboard",
    "config_dashboard": {
    "name": "config_dashboard",
    "default_value": "registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:latest",
    "name": "dashboard",
    "default_value": "osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps",
    "dashboard": "https://172.25.250.14:8443/",
    [ceph: root@clienta /]#

Basic cluster map queries:

    ceph mon dump
    ceph osd dump
    ceph osd crush dump
    ceph pg dump all
    ceph fs dump
    ceph mgr dump
    ceph service dump
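
All of these dump subcommands also accept a machine-readable output format, which is convenient for scripting; for example:

    [ceph: root@clienta /]# ceph mon dump --format json-pretty          # full monmap as JSON
    [ceph: root@clienta /]# ceph osd dump --format json-pretty | head   # first lines of the JSON osdmap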

The MONs form a small cluster, typically deployed on three nodes, and they store all of the cluster maps.
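
To see the current quorum at a glance, the standard monitor status commands can be used (same lab cluster assumed):

    [ceph: root@clienta /]# ceph mon stat                              # one-line quorum summary
    [ceph: root@clienta /]# ceph quorum_status --format json-pretty    # detailed quorum and election info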

    [root@serverc 2ae6d05a-229a-11ec-925e-52540000fa0c]# pwd
    /var/lib/ceph/2ae6d05a-229a-11ec-925e-52540000fa0c
    [root@serverc 2ae6d05a-229a-11ec-925e-52540000fa0c]# ll
    total 292
    drwx------. 3 root root 149 Oct 1 2021 alertmanager.serverc
    -rw-r--r--. 1 root root 295991 Oct 1 2021 cephadm.d7a73386d1e46cffff151775b8e1d098069c88b89aea56cab15b079c1a1f555f
    drwx------. 3 167 167 20 Oct 1 2021 crash
    drwx------. 2 167 167 167 Oct 1 2021 crash.serverc
    drwx------. 4 472 472 161 Oct 1 2021 grafana.serverc
    drwx------. 2 167 167 167 Oct 1 2021 mgr.serverc.lab.example.com.aiqepd
    drwx------. 3 167 167 224 Oct 1 2021 mon.serverc.lab.example.com
    drwx------. 2 nobody nobody 138 Oct 1 2021 node-exporter.serverc
    drwx------. 2 167 167 275 Aug 20 10:54 osd.0
    drwx------. 2 167 167 275 Aug 20 10:54 osd.1
    drwx------. 2 167 167 275 Aug 20 10:54 osd.2
    drwx------. 4 root root 161 Oct 1 2021 prometheus.serverc
    drwx------. 2 167 167 167 Oct 29 2021 rgw.realm.zone.serverc.bqwjcv
    drwxr-xr-x. 2 root root 6 Oct 1 2021 selinux
    [root@serverc 2ae6d05a-229a-11ec-925e-52540000fa0c]#

Each subdirectory holds the data for one daemon (role) running on this host.



Deploying an odd number of MONs works better, since it keeps quorum decisions unambiguous.

OSDs exchange heartbeat messages with each other; when an OSD stops answering heartbeats, its peers report it to the MONs.
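
The relevant timers can be inspected through the config subsystem; a minimal sketch using option names from the upstream documentation:

    [ceph: root@clienta /]# ceph config get osd osd_heartbeat_interval     # seconds between peer heartbeats
    [ceph: root@clienta /]# ceph config get osd osd_heartbeat_grace        # how long before a peer is reported down
    [ceph: root@clienta /]# ceph config get mon mon_osd_down_out_interval  # how long a down OSD waits before being marked out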

Recovery: re-creating lost replicas after copies of an object have been lost.

Backfill: data movement when a new OSD joins the cluster (rebalancing).

OSD fullness is evaluated as a usage ratio.

Keep OSD utilization to roughly 70% at most; beyond that there is little headroom left for recovery.
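
Per-OSD utilization can be checked at any time:

    [ceph: root@clienta /]# ceph osd df    # per-OSD size, use %, and PG count
    [ceph: root@clienta /]# ceph df        # per-pool and cluster-wide usage

For planned maintenance, the noout flag shown below keeps down OSDs from being marked out, so no rebalancing starts while they are briefly offline.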

    [ceph: root@clienta /]# ceph osd set noout
    noout is set
    [ceph: root@clienta /]# ceph osd unset noout
    noout is unset
    [ceph: root@clienta /]#

nearfull_ratio 0.85: warns that the cluster is getting close to full; the cluster goes HEALTH_WARN (time to add capacity).

backfillfull_ratio 0.9: once an OSD reaches 90% utilization, backfill to it is refused, but recovery is still allowed and normal client reads and writes continue.

full_ratio 0.95: once an OSD reaches 95% utilization, writes are blocked; reads and recovery are still possible.

    [ceph: root@clienta /]# ceph osd set-full-ratio 0.95
    osd set-full-ratio 0.95
    [ceph: root@clienta /]# ceph osd set-nearfull-ratio 0.85
    osd set-nearfull-ratio 0.85
    [ceph: root@clienta /]# ceph osd dump
    epoch 426
    fsid 2ae6d05a-229a-11ec-925e-52540000fa0c
    created 2021-10-01T09:30:32.028240+0000
    modified 2022-08-20T17:56:35.571847+0000
    flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
    crush_version 82
    full_ratio 0.95
    backfillfull_ratio 0.9
    nearfull_ratio 0.85

https://docs.ceph.com/en/quincy/?rtd_search=mon_osd_down_out_interval+

You can look up these parameters in the documentation at the link above.
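
Once you have found a parameter, it can be read and changed at runtime through the MON configuration database; a hedged example using mon_osd_down_out_interval (the value 900 is only illustrative):

    [ceph: root@clienta /]# ceph config get mon mon_osd_down_out_interval      # read the current value
    [ceph: root@clienta /]# ceph config set mon mon_osd_down_out_interval 900  # example value only
    [ceph: root@clienta /]# ceph config get mon mon_osd_down_out_interval      # confirm the change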



Setting OSD weights

A weight of 0 means data will, as far as possible, not be placed on that OSD; when removing an OSD, set its weight to 0 first (see the sketch below).
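
A hedged sketch of lowering an OSD's weight before removal (osd.0 is just an example; lowering the weight lets PGs remap and the data drain off gradually):

    [ceph: root@clienta /]# ceph osd crush reweight osd.0 0   # set the CRUSH weight of osd.0 to 0
    [ceph: root@clienta /]# ceph osd reweight 0 0             # or: override reweight for OSD id 0
    [ceph: root@clienta /]# ceph osd df                       # watch data drain away from osd.0

Note that primary-affinity, used in the session below, is a different knob: it only influences how often the OSD is chosen as the primary of its PGs, not where the data is placed.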

    [ceph: root@clienta /]# ceph osd primary-affinity osd.0 0
    # lower osd.0's primary affinity to 0
    [ceph: root@clienta /]# ceph pg dump pgs_brief
    PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
    4.8 active+clean [4,3,0] 4 [4,3,0] 4
    3.f active+clean [7,4,0] 7 [7,4,0] 7
    2.e active+clean [2,4,3] 2 [2,4,3] 2
    4.b active+clean [7,0,4] 7 [7,0,4] 7
    3.c active+clean [5,0,6] 5 [5,0,6] 5
    2.d active+clean [4,3,2] 4 [4,3,2] 4
    4.a active+clean [5,1,4] 5 [5,1,4] 5
    3.d active+clean [7,6,2] 7 [7,6,2] 7
    2.c active+clean [6,0,5] 6 [6,0,5] 6
    3.a active+clean [3,1,8] 3 [3,1,8] 3

osd.0 no longer acts as a primary: it still appears in the acting sets above, but never in the ACTING_PRIMARY column.

    [ceph: root@clienta /]# ceph pg dump pgs_brief | grep "\[6"
    dumped pgs_brief
    2.c active+clean [6,0,5] 6 [6,0,5] 6
    2.a active+clean [6,1,3] 6 [6,1,3] 6
    4.3 active+clean [6,7,1] 6 [6,7,1] 6

When filtering output that contains special characters, escape them (here the `[` in the grep pattern).

Parameters

The default values mentioned above can be checked on the official site; they may change between releases.

Ceph tuning

Ceph is throughput-heavy and needs a lot of memory, so NUMA architectures are generally not a good fit for it.

If your program does not use much memory and you want faster run times, you should instead restrict it to accessing memory on its local NUMA node.
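
As an illustration of such pinning with numactl (node 0 is just an example, and the command is a placeholder):

    numactl --hardware                              # inspect the host's NUMA topology
    numactl --cpunodebind=0 --membind=0 <command>   # run bound to node 0, allocating memory only from node 0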

Ceph deployment best practices

MON performance is critical to overall cluster performance; MONs should run on dedicated nodes, and to ensure a proper quorum their count should be odd.

On OSD nodes, the operating system, OSD data, and OSD journals should live on separate disks to ensure satisfactory throughput.

After the cluster is installed it still needs monitoring, troubleshooting, and maintenance, even though Ceph is self-healing. If performance problems occur, start troubleshooting at the disk, network, and hardware level, then work upward toward RADOS Block Device and the Ceph Object Gateway.
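
For the disk-level starting point, Ceph ships a couple of built-in probes; a quick sketch (osd.0 is just an example target):

    [ceph: root@clienta /]# ceph osd perf         # per-OSD commit/apply latencies
    [ceph: root@clienta /]# ceph tell osd.0 bench # simple write benchmark against a single OSD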

RBD recommendations

Workloads on block devices are usually I/O intensive, for example databases running in virtual machines on OpenStack.

For RBD, OSD journals should sit on SSD or NVMe devices.

For the backing store, different classes of storage device can be used to offer different levels of service.

Recommended OSD hardware

Use one RAID 1 disk pair for the operating system.

Use one disk per OSD, with SSD or NVMe devices for the journal.

Use multiple 10 Gb NICs, with one dual-link bond per network.

Reserve one CPU per OSD, figuring 1 GHz per logical core.

Provision 16 GB of RAM for the node, plus 2 GB per OSD.
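
A worked example of that memory rule (a node with 12 OSDs is an assumption, not from the text):

    # 16 GB base + 2 GB per OSD
    OSDS=12
    echo "$((16 + 2 * OSDS)) GB RAM recommended"   # -> 40 GB RAM recommended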



Nowadays Ceph can work out PG counts automatically (note the autoscale_mode on flag on the pools shown earlier).

cephpgc: worth a look, Red Hat's online PG calculator is quite interesting.
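
For reference, the classic rule of thumb behind such calculators is roughly (OSDs x 100) / replica count, rounded to a power of two; a small sketch (9 OSDs and 3 replicas match this lab, the rest is illustration):

    OSDS=9; REPLICAS=3
    echo "$((OSDS * 100 / REPLICAS)) PGs total"   # -> 300 PGs as a rough budget across all pools
    # round to a nearby power of two, e.g. 256, and split it among the pools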

Ceph networking

Use 10 Gb network bandwidth wherever possible.

Use separate cluster and public networks wherever possible.
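
A hedged sketch of declaring the two networks (the subnets match the 172.25.250.0/24 public and 172.25.249.0/24 cluster addresses visible in the osd dump above; adjust for your environment, and note that daemons may need a restart to pick up the change):

    [ceph: root@clienta /]# ceph config set global public_network 172.25.250.0/24
    [ceph: root@clienta /]# ceph config set global cluster_network 172.25.249.0/24
    [ceph: root@clienta /]# ceph config get osd cluster_network    # verify what the OSDs will use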

Monitor the network.

Other performance testing tools

    # dd: raw sequential throughput of a single OSD data path
    echo 3 > /proc/sys/vm/drop_caches      # drop the page cache first
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/test.img bs=4M count=1024 oflag=direct   # write test
    dd if=/var/lib/ceph/osd/ceph-0/test.img of=/dev/null bs=4M count=1024 iflag=direct   # read test
    # fio: flexible I/O tester, see for example
    https://help.aliyun.com/document_detail/95501.html?spm=a2c4g.11174283.6.640.6e904da23dhdcG
    [ceph: root@clienta /]# ceph osd pool create pool1
    pool 'pool1' created
    [ceph: root@clienta /]# rados bench -p pool1 10 write --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
    Object prefix: benchmark_data_clienta.lab.example.com_565
    sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
    0 0 0 0 0 0 - 0
    1 16 16 0 0 0 - 0
    2 16 17 1 1.99911 2 1.77034 1.77034
    3 16 19 3 3.99849 8 2.89986 2.41562
    4 16 20 4 3.99855 4 3.87552 2.78059
    5 16 26 10 7.99736 24 4.87003 3.66784
    6 16 29 13 8.65267 12 5.8558 3.89705
    7 16 36 20 11.4123 28 2.25577 4.20837
    8 16 39 23 11.4849 12 3.18817 4.17326
    9 16 49 33 14.6481 40 1.93119 3.67961
    10 16 54 38 15.1205 20 4.61332 3.71135
    11 15 54 39 14.1054 4 4.50752 3.73177
    12 14 54 40 13.262 4 3.58412 3.72808
    13 11 54 43 13.1608 12 3.9051 3.71667
    Total time run: 13.7161
    Total writes made: 54
    Write size: 4194304
    Object size: 4194304
    Bandwidth (MB/sec): 15.7479
    Stddev Bandwidth: 11.8495
    Max bandwidth (MB/sec): 40
    Min bandwidth (MB/sec): 0
    Average IOPS: 3
    Stddev IOPS: 3.00427
    Max IOPS: 10
    Min IOPS: 0
    Average Latency(s): 3.86659
    Stddev Latency(s): 1.48435
    Max latency(s): 7.45216
    Min latency(s): 1.17718
    [ceph: root@clienta /]#
    [ceph: root@clienta /]# rados bench -p pool1 10 seq
    hints = 1
    sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
    0 0 0 0 0 0 - 0
    1 16 24 8 31.8681 32 0.539518 0.460968
    2 16 45 29 57.6669 84 0.657187 0.773738
    3 5 54 49 65.0267 80 0.595555 0.685997
    4 2 54 52 51.6497 12 2.35873 0.808986
    Total time run: 4.26827
    Total reads made: 54
    Read size: 4194304
    Object size: 4194304
    Bandwidth (MB/sec): 50.606
    Average IOPS: 12
    Stddev IOPS: 8.90693
    Max IOPS: 21
    Min IOPS: 3
    Average Latency(s): 0.856345
    Max latency(s): 3.07995
    Min latency(s): 0.0897737
    [ceph: root@clienta /]#
    [ceph: root@clienta /]# rados bench -p pool1 10 rand
    hints = 1
    sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
    0 0 0 0 0 0 - 0
    1 16 26 10 39.8594 40 0.450443 0.523675
    2 16 45 29 57.894 76 1.81343 0.569421
    3 16 54 38 50.5224 36 2.38602 0.792168
    4 16 79 63 62.8348 100 0.0543633 0.813247
    5 16 94 78 62.2342 60 2.35538 0.832442
    6 16 127 111 73.8291 132 0.141455 0.779658
    7 16 158 142 80.881 124 1.5348 0.742651
    8 16 188 172 85.4177 120 0.431023 0.71256
    9 16 208 192 84.786 80 0.657024 0.690867
    10 16 213 197 78.2818 20 0.30201 0.702446
    11 11 213 202 72.9987 20 2.83034 0.737541
    Total time run: 11.4804
    Total reads made: 213
    Read size: 4194304
    Object size: 4194304
    Bandwidth (MB/sec): 74.2134
    Average IOPS: 18
    Stddev IOPS: 10.4045
    Max IOPS: 33
    Min IOPS: 5
    Average Latency(s): 0.829176
    Max latency(s): 3.0047
    Min latency(s): 0.0343662
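
The fio entry above only points at external documentation; a minimal invocation against an RBD image might look like the following (this assumes fio was built with the rbd ioengine and that an image such as pool1/image1 from the RBD section below exists; all numbers are illustrative):

    fio --name=rbd-write --ioengine=rbd --clientname=admin \
        --pool=pool1 --rbdname=image1 \
        --rw=write --bs=4M --iodepth=16 --size=1G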

Before re-running a benchmark, you still need to clear the cache and remove the leftover benchmark objects:

    [ceph: root@clienta /]# rados -p pool1 cleanup
    Removed 54 objects
    [root@clienta ~]# sysctl vm.drop_caches=3

This deployment runs on virtual machines rather than physical hosts, and the difference is obvious: on physical hardware the same test reaches roughly Bandwidth (MB/sec): 1000, versus Bandwidth (MB/sec): 74.2134 on these VMs.

    [ceph: root@clienta /]# rbd pool init pool1
    [ceph: root@clienta /]# rbd create --size 1G pool1/image1
    [ceph: root@clienta /]# rbd info pool1/image1
    rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 197ad26b4bdeb
    block_name_prefix: rbd_data.197ad26b4bdeb
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Sun Aug 21 10:29:30 2022
    access_timestamp: Sun Aug 21 10:29:30 2022
    modify_timestamp: Sun Aug 21 10:29:30 2022
    [ceph: root@clienta /]# rbd bench --io-type write image1 --pool=pool1
    bench type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
    SEC OPS OPS/SEC BYTES/SEC
    1 6288 6174.26 24 MiB/s
    2 6800 3117.98 12 MiB/s
    3 7232 2402.35 9.4 MiB/s
    4 7856 1891.83 7.4 MiB/s
    5 8336 1666.05 6.5 MiB/s
    6 9040 552.049 2.2 MiB/s
    7 14160 1514.69 5.9 MiB/s
    8 17472 2018.1 7.9 MiB/s
    9 23056 3039.35 12 MiB/s
    10 26000 3539.12 14 MiB/s
    11 28416 3876.7 15 MiB/s

    [ceph: root@clienta /]# rbd bench --io-type read image1 --pool=pool1
    bench type read io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
    SEC OPS OPS/SEC BYTES/SEC
    1 400 452.168 1.8 MiB/s
    2 816 430.636 1.7 MiB/s
    3 1248 431.099 1.7 MiB/s
    4 1712 441.599 1.7 MiB/s
    5 2144 438.929 1.7 MiB/s
    6 2560 429.247 1.7 MiB/s
    7 2896 414.255 1.6 MiB/s
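
As the output above shows, rbd bench defaults to 4 KiB sequential I/O; larger blocks and a random pattern can be requested explicitly. A hedged example (the sizes are illustrative):

    [ceph: root@clienta /]# rbd bench --io-type write --io-size 4M --io-threads 16 \
          --io-total 1G --io-pattern rand pool1/image1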
