Common commands

ceph -w
ceph df
ceph features
ceph fs ls
ceph fs status
ceph fsid
ceph health
ceph -s
ceph status
ceph mgr module ls
ceph mgr module enable dashboard
ceph mgr services
ceph mon feature ls
ceph node ls
ceph osd crush rule ls
ceph osd crush rule dump
ceph osd df tree
ceph osd lspools
ceph osd perf
watch ceph osd perf
ceph osd pool get kycrbd all
ceph osd pool ls
ceph osd pool ls detail
ceph osd pool stats
ceph osd status
ceph osd tree
ceph osd utilization

pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
ceph pg dump all
ceph pg dump summary
ceph pg dump sum
ceph pg dump delta
ceph pg dump pools
ceph pg dump osds
ceph pg dump pgs
ceph pg dump pgs_brief
ceph pg ls
ceph pg ls-by-osd osd.0
ceph pg ls-by-pool kycfs_metadata
ceph pg ls-by-primary
ceph pg map 7.1e8
ceph report

ceph time-sync-status
ceph version
ceph versions
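For day-to-day triage, a handful of these commands answer most questions. A minimal health-check sequence, as a sketch (every command appears in the list above; the pool names come from the transcripts below):

    ceph health                          # one-line summary: HEALTH_OK / HEALTH_WARN / HEALTH_ERR
    ceph -s                              # services (mon/mgr/mds/osd), capacity, PG states, client io
    ceph df                              # global and per-pool capacity
    ceph osd df tree                     # per-OSD usage laid out along the CRUSH tree
    ceph osd pool stats                  # per-pool client io rates
    ceph pg dump pgs_brief | head -20    # spot-check PG states and their acting OSDs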
root@cu-pve04:~# ceph fs get kycfs
Filesystem 'kycfs' ()
fs_name kycfs
epoch
flags c
created -- ::48.957941
modified -- ::33.599472
tableserver
root
session_timeout
session_autoclose
max_file_size
last_failure
last_failure_osd_epoch
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds
in
up {=}
failed
damaged
stopped
data_pools []
metadata_pool
inline_data disabled
balancer
standby_count_wanted
: 192.168.7.205:/ 'cu-pve05' mds.0.12 up:active seq (standby for rank - 'pve')

root@cu-pve04:~# ceph fs ls
name: kycfs, metadata pool: kycfs_metadata, data pools: [kycfs_data ]

root@cu-pve04:~# ceph fs status
kycfs - clients
=====
+------+--------+----------+---------------+-------+-------+
| Rank | State  |   MDS    |   Activity    |  dns  |  inos |
+------+--------+----------+---------------+-------+-------+
|      | active | cu-pve05 | Reqs:     /s  |       |       |
+------+--------+----------+---------------+-------+-------+
+----------------+----------+-------+-------+
|      Pool      |   type   |  used | avail |
+----------------+----------+-------+-------+
| kycfs_metadata | metadata | 89.7M |  .3T  |
|   kycfs_data   |   data   |  .0G  |  .3T  |
+----------------+----------+-------+-------+

+-------------+
| Standby MDS |
+-------------+
|   cu-pve04  |
|   cu-pve06  |
+-------------+
MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)

root@cu-pve04:~# ceph fsid
b5fd132b-9ff4-470a-9a14-172eb48dc973
root@cu-pve04:~# ceph health
HEALTH_OK
root@cu-pve04:~# ceph -s
  cluster:
    id:     b5fd132b-9ff4-470a-9a14-172eb48dc973
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cu-pve04,cu-pve05,cu-pve06
    mgr: cu-pve04(active), standbys: cu-pve05, cu-pve06
    mds: kycfs-1/1/1 up {0=cu-pve05=up:active}, 2 up:standby
    osd: 24 osds: 24 up, 24 in

  data:
    pools:   3 pools, pgs
    objects: .35k objects, 176GiB
    usage:   550GiB used, .9TiB / .4TiB avail
    pgs:     active+clean

  io:
    client: 0B/s rd, .5KiB/s wr, 0op/s rd, 6op/s wr

root@cu-pve04:~# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}

root@cu-pve04:~# ceph mgr module enable dashboard

root@cu-pve04:~# ceph mgr services
{
    "dashboard": "http://cu-pve04.ka1che.com:7000/"
}

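The URL published by ceph mgr services comes from the dashboard module's defaults. On Luminous the dashboard reads its bind address and port from the config-key store, so they can be changed without touching ceph.conf; a sketch (the address and port values here are placeholders, not taken from this cluster):

    ceph config-key set mgr/dashboard/server_addr 0.0.0.0    # bind address (placeholder)
    ceph config-key set mgr/dashboard/server_port 7000       # 7000 is the Luminous default
    ceph mgr module disable dashboard                        # bounce the module so the
    ceph mgr module enable dashboard                         # new settings take effect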
root@cu-pve04:~# ceph -v
ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)

root@cu-pve04:~# ceph mds versions
{
    "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
}
root@cu-pve04:~# ceph mgr versions
{
    "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
}
root@cu-pve04:~# ceph mon versions
{
    "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
}
root@cu-pve04:~# ceph osd versions
{
    "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 24
}

root@cu-pve04:~# ceph mon feature ls
all features
        supported: [kraken,luminous]
        persistent: [kraken,luminous]
on current monmap (epoch )
        persistent: [kraken,luminous]
        required: [kraken,luminous]

root@cu-pve04:~# ceph mds stat
kycfs-1/1/1 up {0=cu-pve05=up:active}, 2 up:standby

root@cu-pve04:~# ceph mon stat
e3: 3 mons at {cu-pve04=192.168.7.204:/,cu-pve05=192.168.7.205:/,cu-pve06=192.168.7.206:/}, election epoch , leader 0 cu-pve04, quorum 0,1,2 cu-pve04,cu-pve05,cu-pve06

root@cu-pve04:~# ceph osd stat
24 osds: 24 up, 24 in

root@cu-pve04:~# ceph pg stat
pgs: active+clean; 176GiB data, 550GiB used, .9TiB / .4TiB avail; 673B/s rd, 197KiB/s wr, 23op/s

root@cu-pve04:~# ceph node ls
{
    "mon": {
        "cu-pve04": [ ],
        "cu-pve05": [ ],
        "cu-pve06": [ ]
    },
    "osd": {
        "cu-pve04": [ , , , , , , , ],
        "cu-pve05": [ , , , , , , , ],
        "cu-pve06": [ , , , , , , , ]
    },
    "mds": {
        "cu-pve04": [ - ],
        "cu-pve05": [ ],
        "cu-pve06": [ - ]
    }
}

root@cu-pve04:~# ceph osd crush rule ls
replicated_rule
root@cu-pve04:~# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
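Only the stock replicated_rule exists here: take the default root, choose one leaf of type host per replica, emit. To add a rule with a different failure domain or device class, Luminous provides create-replicated; a sketch (the rule name and the hdd class restriction are illustrative):

    ceph osd crush rule create-replicated hdd-by-host default host hdd   # name, root, failure domain, device class
    ceph osd pool set kycrbd crush_rule hdd-by-host                      # point an existing pool at the new rule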

root@cu-pve04:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
- 52.39417 - .4TiB 550GiB .9TiB 1.03 1.00 - root default
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve04
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.04 1.01 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 0.90 0.88 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.12 1.10 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.21 1.18 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 0.85 0.83 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.12 1.09 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 1.04 1.01 osd.
hdd 2.18309 1.00000 .18TiB .6GiB .16TiB 0.92 0.90 osd.
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve05
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.21 1.18 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.07 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.06 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 0.99 0.97 osd.
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 1.02 1.00 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 1.00 0.97 osd.
hdd 2.18309 1.00000 .18TiB .3GiB .16TiB 0.91 0.89 osd.
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 0.89 0.87 osd.
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve06
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 1.03 1.00 osd.
hdd 2.18309 1.00000 .18TiB .3GiB .16TiB 1.04 1.02 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.16 1.13 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 0.94 0.92 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.14 1.11 osd.
hdd 2.18309 1.00000 .18TiB .8GiB .16TiB 0.84 0.82 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.06 osd.
hdd 2.18309 1.00000 .18TiB .5GiB .16TiB 0.96 0.94 osd.
TOTAL .4TiB 550GiB .9TiB 1.03
MIN/MAX VAR: 0.82/1.18 STDDEV: 0.11

root@cu-pve04:~# ceph osd lspools
kycfs_data, kycfs_metadata, kycrbd,

root@cu-pve04:~# ceph osd perf
osd commit_latency(ms) apply_latency(ms)

root@cu-pve04:~# ceph osd pool get kycrbd all
size:
min_size:
crash_replay_interval:
pg_num:
pgp_num:
crush_rule: replicated_rule
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset:
auid:
fast_read:

[root@ceph1 ceph]# ceph osd pool create cfs_data
pool 'cfs_data' created
[root@ceph1 ceph]# ceph osd pool create cfs_meta
pool 'cfs_meta' created
[root@ceph1 ceph]# ceph fs new cefs cfs_meta cfs_data
new fs with metadata pool and data pool

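The prompts above lost their placement-group counts: on Luminous, ceph osd pool create takes a pg_num argument (and optionally pgp_num). A sketch with illustrative values sized for a small test cluster:

    ceph osd pool create cfs_data 64 64    # data pool; pg_num/pgp_num of 64 are illustrative
    ceph osd pool create cfs_meta 32 32    # metadata pool is usually smaller
    ceph fs new cefs cfs_meta cfs_data     # note the order: metadata pool first, then data pool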
root@cu-pve04:~# ceph osd pool ls
kycfs_data
kycfs_metadata
kycrbd

root@cu-pve04:~# ceph osd pool ls detail
pool 'kycfs_data' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application cephfs
pool 'kycfs_metadata' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application cephfs
pool 'kycrbd' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application rbd
        removed_snaps [~]

root@cu-pve04:~# ceph osd pool stats
pool kycfs_data id
  client io .42KiB/s wr, 0op/s rd, 0op/s wr

pool kycfs_metadata id
  client io .08KiB/s wr, 0op/s rd, 0op/s wr

pool kycrbd id
  client io 0B/s rd, 357KiB/s wr, 0op/s rd, 25op/s wr

root@cu-pve04:~# ceph osd status
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
| id | host     | used  | avail | wr ops | wr data | rd ops | rd data | state     |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
|    | cu-pve04 | .1G   | 2212G |        |         |        |         | exists,up |
|    | cu-pve04 | .1G   | 2215G |        | .8k     |        |         | exists,up |
|    | cu-pve04 | .1G   | 2210G |        |         |        |         | exists,up |
|    | cu-pve04 | .0G   | 2208G |        | .2k     |        |         | exists,up |
|    | cu-pve04 | .0G   | 2216G |        |         |        |         | exists,up |
|    | cu-pve04 | .0G   | 2210G |        | .5k     |        |         | exists,up |
|    | cu-pve04 | .2G   | 2212G |        | .0k     |        |         | exists,up |
|    | cu-pve04 | .5G   | 2214G |        | .0k     |        |         | exists,up |
|    | cu-pve05 | .0G   | 2208G |        | .2k     |        |         | exists,up |
|    | cu-pve05 | .4G   | 2211G |        |         |        |         | exists,up |
|    | cu-pve05 | .3G   | 2211G |        | .4k     |        |         | exists,up |
|    | cu-pve05 | .2G   | 2213G |        | .8k     |        |         | exists,up |
|    | cu-pve05 | .8G   | 2212G |        |         |        |         | exists,up |
|    | cu-pve05 | .2G   | 2213G |        | .1k     |        |         | exists,up |
|    | cu-pve05 | .3G   | 2215G |        | .8k     |        |         | exists,up |
|    | cu-pve05 | .8G   | 2215G |        |         |        |         | exists,up |
|    | cu-pve06 | .9G   | 2212G |        | .4k     |        |         | exists,up |
|    | cu-pve06 | .3G   | 2212G |        | .6k     |        |         | exists,up |
|    | cu-pve06 | .9G   | 2209G |        |         |        |         | exists,up |
|    | cu-pve06 | .0G   | 2214G |        |         |        |         | exists,up |
|    | cu-pve06 | .4G   | 2210G |        | .2k     |        |         | exists,up |
|    | cu-pve06 | .8G   | 2216G |        |         |        |         | exists,up |
|    | cu-pve06 | .3G   | 2211G |        | .9k     |        |         | exists,up |
|    | cu-pve06 | .4G   | 2214G |        |         |        |         | exists,up |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+

root@cu-pve04:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
- 52.39417 root default
- 17.46472     host cu-pve04
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
- 17.46472     host cu-pve05
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
- 17.46472     host cu-pve06
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000
hdd 2.18309         osd. up 1.00000 1.00000

root@cu-pve04:~# ceph osd utilization
avg
stddev 9.49561 (expected baseline 11.7473)
min osd. with pgs (0.875 * mean)
max osd. with pgs (1.13889 * mean)

root@cu-pve04:~# ceph pg dump sum
dumped sum
version
stamp -- ::45.513442
last_osdmap_epoch
last_pg_scan
full_ratio
nearfull_ratio
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG
sum
OSD_STAT USED AVAIL TOTAL
sum 550GiB .9TiB .4TiB
root@cu-pve04:~# ceph pg dump pools
dumped pools
POOLID OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG

root@cu-pve04:~# ceph pg dump osds
dumped osds
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
.5GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.6GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.3GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,,,]
.3GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.8GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
sum 550GiB .9TiB .4TiB

root@cu-pve04:~# ceph pg map 7.1e8
osdmap e190 pg 7.1e8 (7.1e8) -> up [,,] acting [,,]

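pg map answers "which OSDs hold this PG"; combined with the pg ls variants from the command list it gives a quick trace from pool down to disks. A sketch (the pgid 7.1e8 and the pool name are taken from this transcript):

    ceph pg ls-by-pool kycfs_metadata | head   # PGs of one pool with state and acting set
    ceph pg map 7.1e8                          # up/acting OSD set for a single PG
    ceph pg 7.1e8 query | head                 # detailed state straight from the primary OSD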
root@cu-pve04:~# ceph status
  cluster:
    id:     b5fd132b-9ff4-470a-9a14-172eb48dc973
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cu-pve04,cu-pve05,cu-pve06
    mgr: cu-pve04(active), standbys: cu-pve05, cu-pve06
    mds: kycfs-1/1/1 up {0=cu-pve05=up:active}, 2 up:standby
    osd: 24 osds: 24 up, 24 in

  data:
    pools:   3 pools, pgs
    objects: .35k objects, 176GiB
    usage:   550GiB used, .9TiB / .4TiB avail
    pgs:     active+clean

  io:
    client: 0B/s rd, 290KiB/s wr, 0op/s rd, 15op/s wr

root@cu-pve04:~# ceph time-sync-status
{
    "time_skew_status": {
        "cu-pve04": {
            "skew": 0.000000,
            "latency": 0.000000,
            "health": "HEALTH_OK"
        },
        "cu-pve05": {
            "skew": 0.002848,
            "latency": 0.001070,
            "health": "HEALTH_OK"
        },
        "cu-pve06": {
            "skew": 0.002570,
            "latency": 0.001064,
            "health": "HEALTH_OK"
        }
    },
    "timechecks": {
        "epoch": ,
        "round": ,
        "round_status": "finished"
    }
}

root@cu-pve04:~# ceph versions
{
    "mon": {
        "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
    },
    "osd": {
        "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 24
    },
    "mds": {
        "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3
    },
    "overall": {
        "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 33
    }
}
=========================================

[sceph@ceph1 ~]$ ceph-authtool ceph.mon.keyring -l
[mon.]
        key = AQBYF5JcAAAAABAAZageA/U12ulwiTj1qy9jKw==
        caps mon = "allow *"
[sceph@ceph1 ~]$ ceph-authtool ceph.client.admin.keyring -l
[client.admin]
        key = AQBaPZNcCalvLRAAt4iyva3DHfb8NbOX4MxBAw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

=========================================
[sceph@ceph1 ~]$ sudo ceph auth ls
installed auth entries:

mds.ceph1
        key: AQBUmpRc/KdcGhAAx3uWwlKVGu296HWFL3YhCw==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
mds.ceph2
        key: AQCelpRcyn1WJBAAeXJ2e2ykDEHq7BYEFD57Tw==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
osd.0
        key: AQDrWpNcAextBRAA7usr2GT7OiEmnH5+Ya7iGg==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQBGXJNc2fVyGhAAvNLbJSssGM6W9Om9gvGH/Q==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQBcXJNcqPGOJxAA+U57mkFuRrNUjzEaR6EjIA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQBaPZNcCalvLRAAt4iyva3DHfb8NbOX4MxBAw==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQBaPZNcqO1vLRAANqPF730wvwPJWBbCqeW12w==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQBaPZNcCCBwLRAAMGaeplDux+rd0jbTQVLNVw==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQBaPZNcVE5wLRAA61JRSlzl72n65Dp5ZLpa/A==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQBaPZNcpn5wLRAAps+/Xoxs7JoPHqO19KKQOA==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
        key: AQBaPZNcEqtwLRAA/aW2qqnW+1uC4HAj1deONg==
        caps: [mon] allow profile bootstrap-rgw
client.rgw.ceph1
        key: AQDCl5RcUlRJEBAA25xPrLTfwnAwD+uSzc2T4Q==
        caps: [mon] allow rw
        caps: [osd] allow rwx
mgr.ceph2
        key: AQDeWJNcqqItORAAPwDv8I4BcudMqzuzZFaY6w==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
[sceph@ceph1 ~]$
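Listing entries is half the job; creating and adjusting them is mostly done with ceph auth get-or-create and ceph auth caps. A sketch for a client confined to a single pool (the client name and pool name are illustrative):

    ceph auth get-or-create client.rbduser mon 'allow r' osd 'allow rwx pool=rbdpool' \
        -o /etc/ceph/ceph.client.rbduser.keyring   # write the keyring out for distribution
    ceph auth get client.rbduser                   # show the stored key and caps
    ceph auth caps client.rbduser mon 'allow r' osd 'allow rwx pool=rbdpool'   # rewrite caps later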

===========================================
admin socket

root@cu-pve04:~# ceph daemon mon.cu-pve04 help
root@cu-pve04:~# ceph daemon mon.cu-pve04 sessions
[root@ceph1 ceph]# ceph daemon osd.0 config show

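ceph daemon resolves the target by name on the local host; the same commands can also be sent to the socket file directly, which still works when the monitors are unreachable. A sketch (default socket paths; adjust for a non-default cluster name):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status       # same as: ceph daemon osd.0 status
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump    # raw perf counters as JSON
    ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status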
[root@ceph1 rbdpool]# ceph daemon osd.0 help
{
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_pgstate_history": "show recent state history",
    "dump_reservations": "show recovery reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub",
    "trigger_scrub": "Trigger a scheduled scrub",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
[root@ceph1 rbdpool]#

[root@ceph1 rbdpool]# ceph daemon mon.ceph1 sessions
[
    "MonSession(mon.0 192.168.7.151:6789/0 is open allow *, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(osd.0 192.168.7.151:6800/1988823 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(osd.1 192.168.7.152:6801/1821392 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(mds.? 192.168.7.152:6805/1783208616 is open allow profile mds, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(mds.? 192.168.7.151:6804/3007499436 is open allow profile mds, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(client.? 192.168.7.151:0/2871664294 is open allow rw, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(osd.2 192.168.7.153:6800/6408 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(unknown.0 192.168.7.161:0/2782938665 is open allow *, features 0x27018fb86aa42ada (jewel))",
    "MonSession(mgr.4729 192.168.7.152:0/2358460 is open allow profile mgr, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(client.? 192.168.7.152:0/1860240871 is open allow profile mgr, features 0x3ffddff8ffacfffb (luminous))",
    "MonSession(unknown.0 192.168.7.151:0/819943570 is open allow *, features 0x27018fb86aa42ada (jewel))"
]
