Ceph command screen captures
Common commands
ceph -w
ceph df
ceph features
ceph fs ls
ceph fs status
ceph fsid
ceph health
ceph -s
ceph status
ceph mgr module ls
ceph mgr module enable dashboard
ceph mgr services
ceph mon feature ls
ceph node ls
ceph osd crush rule ls
ceph osd crush rule dump
ceph osd df tree
ceph osd lspools
ceph osd perf
watch ceph osd perf
ceph osd pool get kycrbd all
ceph osd pool ls
ceph osd pool ls detail
ceph osd pool stats
ceph osd status
ceph osd tree
ceph osd utilization
ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
ceph pg dump all
ceph pg dump summary
ceph pg dump sum
ceph pg dump delta
ceph pg dump pools
ceph pg dump osds
ceph pg dump pgs
ceph pg dump pgs_brief
ceph pg ls
ceph pg ls-by-osd osd.0
ceph pg ls-by-pool kycfs_metadata
ceph pg ls-by-primary osd.0
ceph pg map 7.1e8
ceph report
ceph time-sync-status
ceph version
ceph versions
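Most of the commands above also accept a --format json flag, which makes their output scriptable. A minimal health-check sketch, assuming the jq utility is installed and Luminous-era JSON field names:

# print just the overall cluster health status
ceph -s --format json | jq -r '.health.status'
# list pool names, one per line
ceph osd pool ls --format json | jq -r '.[]'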
root@cu-pve04:~# ceph fs get kycfs
Filesystem 'kycfs' ()
fs_name kycfs
epoch
flags c
created -- ::48.957941
modified -- ::33.599472
tableserver
root
session_timeout
session_autoclose
max_file_size
last_failure
last_failure_osd_epoch
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds
in
up {=}
failed
damaged
stopped
data_pools []
metadata_pool
inline_data disabled
balancer
standby_count_wanted
: 192.168.7.205:/ 'cu-pve05' mds.0.12 up:active seq (standby for rank - 'pve')
root@cu-pve04:~# ceph fs ls
name: kycfs, metadata pool: kycfs_metadata, data pools: [kycfs_data ]
root@cu-pve04:~# ceph fs status
kycfs - clients
=====
+------+--------+----------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+----------+---------------+-------+-------+
| | active | cu-pve05 | Reqs: /s | | |
+------+--------+----------+---------------+-------+-------+
+----------------+----------+-------+-------+
| Pool | type | used | avail |
+----------------+----------+-------+-------+
| kycfs_metadata | metadata | 89.7M | .3T |
| kycfs_data | data | .0G | .3T |
+----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| cu-pve04 |
| cu-pve06 |
+-------------+
MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)
root@cu-pve04:~# ceph fsid
b5fd132b-9ff4-470a-9a14-172eb48dc973
root@cu-pve04:~# ceph health
HEALTH_OK
root@cu-pve04:~# ceph -s
cluster:
id: b5fd132b-9ff4-470a-9a14-172eb48dc973
health: HEALTH_OK

services:
mon: daemons, quorum cu-pve04,cu-pve05,cu-pve06
mgr: cu-pve04(active), standbys: cu-pve05, cu-pve06
mds: kycfs-// up {=cu-pve05=up:active}, up:standby
osd: osds: up, in

data:
pools: pools, pgs
objects: .35k objects, 176GiB
usage: 550GiB used, .9TiB / .4TiB avail
pgs: active+clean

io:
client: 0B/s rd, .5KiB/s wr, 0op/s rd, 6op/s wr
root@cu-pve04:~# ceph mgr module ls
{
"enabled_modules": [
"balancer",
"dashboard",
"restful",
"status"
],
"disabled_modules": [
"influx",
"localpool",
"prometheus",
"selftest",
"zabbix"
]
}
root@cu-pve04:~# ceph mgr module enable dashboard
root@cu-pve04:~# ceph mgr services
{
"dashboard": "http://cu-pve04.ka1che.com:7000/"
}
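The dashboard URL above shows the Luminous default port of 7000. A hedged sketch of pinning the bind address and port via config-key (the 0.0.0.0 address is an illustrative assumption, not from the capture):

# bind address/port values here are assumptions
ceph config-key set mgr/dashboard/server_addr 0.0.0.0
ceph config-key set mgr/dashboard/server_port 7000
# reload the module so the new settings take effect
ceph mgr module disable dashboard
ceph mgr module enable dashboard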
root@cu-pve04:~# ceph -v
ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)
root@cu-pve04:~# ceph mds versions
{
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
}
root@cu-pve04:~# ceph mgr versions
{
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
}
root@cu-pve04:~# ceph mon versions
{
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
}
root@cu-pve04:~# ceph osd versions
{
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
}
root@cu-pve04:~# ceph mon feature ls
all features
supported: [kraken,luminous]
persistent: [kraken,luminous]
on current monmap (epoch )
persistent: [kraken,luminous]
required: [kraken,luminous]
root@cu-pve04:~# ceph mds stat
kycfs-// up {=cu-pve05=up:active}, up:standby
root@cu-pve04:~# ceph mon stat
e3: mons at {cu-pve04=192.168.7.204:/,cu-pve05=192.168.7.205:/,cu-pve06=192.168.7.206:/}, election epoch , leader cu-pve04, quorum ,, cu-pve04,cu-pve05,cu-pve06
root@cu-pve04:~# ceph osd stat
osds: up, in
root@cu-pve04:~# ceph pg stat
pgs: active+clean; 176GiB data, 550GiB used, .9TiB / .4TiB avail; 673B/s rd, 197KiB/s wr, 23op/s
root@cu-pve04:~# ceph node ls
{
"mon": {
"cu-pve04": [ ],
"cu-pve05": [ ],
"cu-pve06": [ ]
},
"osd": {
"cu-pve04": [
,
,
,
,
,
,
, ],
"cu-pve05": [
,
,
,
,
,
,
, ],
"cu-pve06": [
,
,
,
,
,
,
, ]
},
"mds": {
"cu-pve04": [
-
],
"cu-pve05": [ ],
"cu-pve06": [
-
]
}
}
root@cu-pve04:~# ceph osd crush rule ls
replicated_rule
root@cu-pve04:~# ceph osd crush rule dump
[
{
"rule_id": ,
"rule_name": "replicated_rule",
"ruleset": ,
"type": ,
"min_size": ,
"max_size": ,
"steps": [
{
"op": "take",
"item": -,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": ,
"type": "host"
},
{
"op": "emit"
}
]
}
]
root@cu-pve04:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
- 52.39417 - .4TiB 550GiB .9TiB 1.03 1.00 - root default
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve04
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.04 1.01 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 0.90 0.88 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.12 1.10 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.21 1.18 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 0.85 0.83 osd.
hdd 2.18309 1.00000 .18TiB .1GiB .16TiB 1.12 1.09 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 1.04 1.01 osd.
hdd 2.18309 1.00000 .18TiB .6GiB .16TiB 0.92 0.90 osd.
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve05
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.21 1.18 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.07 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.06 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 0.99 0.97 osd.
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 1.02 1.00 osd.
hdd 2.18309 1.00000 .18TiB .2GiB .16TiB 1.00 0.97 osd.
hdd 2.18309 1.00000 .18TiB .3GiB .16TiB 0.91 0.89 osd.
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 0.89 0.87 osd.
- 17.46472 - .5TiB 183GiB .3TiB 1.03 1.00 - host cu-pve06
hdd 2.18309 1.00000 .18TiB .9GiB .16TiB 1.03 1.00 osd.
hdd 2.18309 1.00000 .18TiB .3GiB .16TiB 1.04 1.02 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 1.16 1.13 osd.
hdd 2.18309 1.00000 .18TiB .0GiB .16TiB 0.94 0.92 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.14 1.11 osd.
hdd 2.18309 1.00000 .18TiB .8GiB .16TiB 0.84 0.82 osd.
hdd 2.18309 1.00000 .18TiB .4GiB .16TiB 1.09 1.06 osd.
hdd 2.18309 1.00000 .18TiB .5GiB .16TiB 0.96 0.94 osd.
TOTAL .4TiB 550GiB .9TiB 1.03
MIN/MAX VAR: 0.82/1.18 STDDEV: 0.11
root@cu-pve04:~# ceph osd lspools
kycfs_data, kycfs_metadata, kycrbd,
root@cu-pve04:~# ceph osd perf
osd commit_latency(ms) apply_latency(ms)
root@cu-pve04:~# ceph osd pool get kycrbd all
size:
min_size:
crash_replay_interval:
pg_num:
pgp_num:
crush_rule: replicated_rule
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset:
auid:
fast_read:
[root@ceph1 ceph]# ceph osd pool create cfs_data
pool 'cfs_data' created
[root@ceph1 ceph]# ceph osd pool create cfs_meta
pool 'cfs_meta' created
[root@ceph1 ceph]# ceph fs new cefs cfs_meta cfs_data
new fs with metadata pool and data pool
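Note that ceph osd pool create normally takes pg_num (and optionally pgp_num) arguments, which the capture above has lost. A hedged sketch of the full sequence, with 128 as an assumed placement-group count:

# pg_num/pgp_num of 128 are assumed values, not from the capture
ceph osd pool create cfs_data 128 128
ceph osd pool create cfs_meta 128 128
ceph fs new cefs cfs_meta cfs_data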
root@cu-pve04:~# ceph osd pool ls
kycfs_data
kycfs_metadata
kycrbd
root@cu-pve04:~# ceph osd pool ls detail
pool 'kycfs_data' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application cephfs
pool 'kycfs_metadata' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application cephfs
pool 'kycrbd' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width application rbd
removed_snaps [~]
root@cu-pve04:~# ceph osd pool stats
pool kycfs_data id
client io .42KiB/s wr, 0op/s rd, 0op/s wr
pool kycfs_metadata id
client io .08KiB/s wr, 0op/s rd, 0op/s wr
pool kycrbd id
client io 0B/s rd, 357KiB/s wr, 0op/s rd, 25op/s wr
root@cu-pve04:~# ceph osd status
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
| | cu-pve04 | .1G | 2212G | | | | | exists,up |
| | cu-pve04 | .1G | 2215G | | .8k | | | exists,up |
| | cu-pve04 | .1G | 2210G | | | | | exists,up |
| | cu-pve04 | .0G | 2208G | | .2k | | | exists,up |
| | cu-pve04 | .0G | 2216G | | | | | exists,up |
| | cu-pve04 | .0G | 2210G | | .5k | | | exists,up |
| | cu-pve04 | .2G | 2212G | | .0k | | | exists,up |
| | cu-pve04 | .5G | 2214G | | .0k | | | exists,up |
| | cu-pve05 | .0G | 2208G | | .2k | | | exists,up |
| | cu-pve05 | .4G | 2211G | | | | | exists,up |
| | cu-pve05 | .3G | 2211G | | .4k | | | exists,up |
| | cu-pve05 | .2G | 2213G | | .8k | | | exists,up |
| | cu-pve05 | .8G | 2212G | | | | | exists,up |
| | cu-pve05 | .2G | 2213G | | .1k | | | exists,up |
| | cu-pve05 | .3G | 2215G | | .8k | | | exists,up |
| | cu-pve05 | .8G | 2215G | | | | | exists,up |
| | cu-pve06 | .9G | 2212G | | .4k | | | exists,up |
| | cu-pve06 | .3G | 2212G | | .6k | | | exists,up |
| | cu-pve06 | .9G | 2209G | | | | | exists,up |
| | cu-pve06 | .0G | 2214G | | | | | exists,up |
| | cu-pve06 | .4G | 2210G | | .2k | | | exists,up |
| | cu-pve06 | .8G | 2216G | | | | | exists,up |
| | cu-pve06 | .3G | 2211G | | .9k | | | exists,up |
| | cu-pve06 | .4G | 2214G | | | | | exists,up |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
root@cu-pve04:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
- 52.39417 root default
- 17.46472 host cu-pve04
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
- 17.46472 host cu-pve05
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
- 17.46472 host cu-pve06
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
hdd 2.18309 osd. up 1.00000 1.00000
root@cu-pve04:~# ceph osd utilization
avg
stddev 9.49561 (expected baseline 11.7473)
min osd. with pgs (0.875 * mean)
max osd. with pgs (1.13889 * mean)
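If utilization drifts apart across OSDs, Luminous can trim the outliers by adjusting reweights. A hedged sketch, with 110 as an assumed (commonly used) percentage threshold:

# dry run: report which OSDs would be reweighted at a 110% threshold
ceph osd test-reweight-by-utilization 110
# apply the same adjustment for real
ceph osd reweight-by-utilization 110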
root@cu-pve04:~# ceph pg dump sum
dumped sum
version
stamp -- ::45.513442
last_osdmap_epoch
last_pg_scan
full_ratio
nearfull_ratio
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG
sum
OSD_STAT USED AVAIL TOTAL
sum 550GiB .9TiB .4TiB
root@cu-pve04:~# ceph pg dump pools
dumped pools
POOLID OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG
root@cu-pve04:~# ceph pg dump osds
dumped osds
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
.5GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.6GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.1GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.2GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.3GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,]
.9GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,,,]
.3GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.0GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.4GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
.8GiB .16TiB .18TiB [,,,,,,,,,,,,,,,,,]
sum 550GiB .9TiB .4TiB
root@cu-pve04:~# ceph pg map 7.1e8
osdmap e190 pg 7.1e8 (7.1e8) -> up [,,] acting [,,]
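A PG id such as 7.1e8 reads as pool-id.pg-number-in-hex, i.e. pool 7, PG 0x1e8; the up and acting sets list the OSDs serving it. For a deeper look at a single PG, a minimal sketch (the head pipe only trims the long JSON output):

# dump detailed state for one placement group
ceph pg 7.1e8 query | head -n 20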
root@cu-pve04:~# ceph status
cluster:
id: b5fd132b-9ff4-470a-9a14-172eb48dc973
health: HEALTH_OK

services:
mon: daemons, quorum cu-pve04,cu-pve05,cu-pve06
mgr: cu-pve04(active), standbys: cu-pve05, cu-pve06
mds: kycfs-// up {=cu-pve05=up:active}, up:standby
osd: osds: up, in

data:
pools: pools, pgs
objects: .35k objects, 176GiB
usage: 550GiB used, .9TiB / .4TiB avail
pgs: active+clean

io:
client: 0B/s rd, 290KiB/s wr, 0op/s rd, 15op/s wr
root@cu-pve04:~# ceph time-sync-status
{
"time_skew_status": {
"cu-pve04": {
"skew": 0.000000,
"latency": 0.000000,
"health": "HEALTH_OK"
},
"cu-pve05": {
"skew": 0.002848,
"latency": 0.001070,
"health": "HEALTH_OK"
},
"cu-pve06": {
"skew": 0.002570,
"latency": 0.001064,
"health": "HEALTH_OK"
}
},
"timechecks": {
"epoch": ,
"round": ,
"round_status": "finished"
}
}
root@cu-pve04:~# ceph versions
{
"mon": {
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
},
"mgr": {
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
},
"osd": {
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
},
"mds": {
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
},
"overall": {
"ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)":
}
}
=========================================
[sceph@ceph1 ~]$ ceph-authtool ceph.mon.keyring -l
[mon.]
key = AQBYF5JcAAAAABAAZageA/U12ulwiTj1qy9jKw==
caps mon = "allow *"
[sceph@ceph1 ~]$ ceph-authtool ceph.client.admin.keyring -l
[client.admin]
key = AQBaPZNcCalvLRAAt4iyva3DHfb8NbOX4MxBAw==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
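The -l flag above only lists existing keyrings. A hedged sketch of how such keyrings are typically generated in the first place, with capabilities matching the listings above:

# create a mon keyring with a generated key
ceph-authtool --create-keyring ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# create an admin keyring with full capabilities
ceph-authtool --create-keyring ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'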
=========================================
[sceph@ceph1 ~]$ sudo ceph auth ls
installed auth entries:

mds.ceph1
key: AQBUmpRc/KdcGhAAx3uWwlKVGu296HWFL3YhCw==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
mds.ceph2
key: AQCelpRcyn1WJBAAeXJ2e2ykDEHq7BYEFD57Tw==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.
key: AQDrWpNcAextBRAA7usr2GT7OiEmnH5+Ya7iGg==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.
key: AQBGXJNc2fVyGhAAvNLbJSssGM6W9Om9gvGH/Q==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.
key: AQBcXJNcqPGOJxAA+U57mkFuRrNUjzEaR6EjIA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQBaPZNcCalvLRAAt4iyva3DHfb8NbOX4MxBAw==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQBaPZNcqO1vLRAANqPF730wvwPJWBbCqeW12w==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQBaPZNcCCBwLRAAMGaeplDux+rd0jbTQVLNVw==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQBaPZNcVE5wLRAA61JRSlzl72n65Dp5ZLpa/A==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
key: AQBaPZNcpn5wLRAAps+/Xoxs7JoPHqO19KKQOA==
caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
key: AQBaPZNcEqtwLRAA/aW2qqnW+1uC4HAj1deONg==
caps: [mon] allow profile bootstrap-rgw
client.rgw.ceph1
key: AQDCl5RcUlRJEBAA25xPrLTfwnAwD+uSzc2T4Q==
caps: [mon] allow rw
caps: [osd] allow rwx
mgr.ceph2
key: AQDeWJNcqqItORAAPwDv8I4BcudMqzuzZFaY6w==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
[sceph@ceph1 ~]$
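New client credentials are normally minted with ceph auth get-or-create rather than edited by hand. A hedged sketch (the client name and pool cap are illustrative assumptions):

# create a client limited to read/write on one pool; names are assumptions
ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=kycrbd'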
===========================================
admin socket
root@cu-pve04:~# ceph daemon mon.cu-pve04 help
root@cu-pve04:~# ceph daemon mon.cu-pve04 sessions
[root@ceph1 ceph]# ceph daemon osd. config show
[root@ceph1 rbdpool]# ceph daemon osd. help
{
"calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
"compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
"config diff": "dump diff of current config and default config",
"config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
"config get": "config get <field>: get the config value",
"config help": "get config setting schema and descriptions",
"config set": "config set <field> <val> [<val> ...]: set a config variable",
"config show": "dump current config settings",
"config unset": "config unset <field>: unset a config variable",
"dump_blacklist": "dump blacklisted clients and times",
"dump_blocked_ops": "show the blocked ops currently in flight",
"dump_historic_ops": "show recent ops",
"dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
"dump_historic_slow_ops": "show slowest recent ops",
"dump_mempools": "get mempool stats",
"dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
"dump_op_pq_state": "dump op priority queue state",
"dump_ops_in_flight": "show the ops currently in flight",
"dump_pgstate_history": "show recent state history",
"dump_reservations": "show recovery reservations",
"dump_scrubs": "print scheduled scrubs",
"dump_watchers": "show clients which have active watches, and on which objects",
"flush_journal": "flush the journal to permanent store",
"flush_store_cache": "Flush bluestore internal cache",
"get_command_descriptions": "list available commands",
"get_heap_property": "get malloc extension heap property",
"get_latest_osdmap": "force osd to update the latest map from the mon",
"get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
"getomap": "output entire object map",
"git_version": "get git sha1",
"heap": "show heap usage info (available only if compiled with tcmalloc)",
"help": "list available commands",
"injectdataerr": "inject data error to an object",
"injectfull": "Inject a full disk (optional count times)",
"injectmdataerr": "inject metadata error to an object",
"list_devices": "list OSD devices.",
"log dump": "dump recent log entries to log file",
"log flush": "flush log entries to log file",
"log reopen": "reopen log file",
"objecter_requests": "show in-progress osd requests",
"ops": "show the ops currently in flight",
"perf dump": "dump perfcounters value",
"perf histogram dump": "dump perf histogram values",
"perf histogram schema": "dump perf histogram schema",
"perf reset": "perf reset <name>: perf reset all or one perfcounter name",
"perf schema": "dump perfcounters schema",
"rmomapkey": "remove omap key",
"set_heap_property": "update malloc extension heap property",
"set_recovery_delay": "Delay osd recovery by specified seconds",
"setomapheader": "set omap header",
"setomapval": "set omap key",
"smart": "probe OSD devices for SMART data.",
"status": "high-level status of OSD",
"trigger_deep_scrub": "Trigger a scheduled deep scrub ",
"trigger_scrub": "Trigger a scheduled scrub ",
"truncobj": "truncate object to length",
"version": "get ceph version"
}
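Any command from the help listing above can be invoked the same way. A minimal sketch, assuming OSD id 0 and the default socket path:

# dump performance counters through the daemon wrapper (OSD id assumed)
ceph daemon osd.0 perf dump
# equivalent call directly against the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump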
[root@ceph1 rbdpool]# ceph daemon mon.ceph1 sessions
[
"MonSession(mon.0 192.168.7.151:6789/0 is open allow *, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(osd.0 192.168.7.151:6800/1988823 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(osd.1 192.168.7.152:6801/1821392 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(mds.? 192.168.7.152:6805/1783208616 is open allow profile mds, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(mds.? 192.168.7.151:6804/3007499436 is open allow profile mds, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(client.? 192.168.7.151:0/2871664294 is open allow rw, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(osd.2 192.168.7.153:6800/6408 is open allow profile osd, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(unknown.0 192.168.7.161:0/2782938665 is open allow *, features 0x27018fb86aa42ada (jewel))",
"MonSession(mgr.4729 192.168.7.152:0/2358460 is open allow profile mgr, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(client.? 192.168.7.152:0/1860240871 is open allow profile mgr, features 0x3ffddff8ffacfffb (luminous))",
"MonSession(unknown.0 192.168.7.151:0/819943570 is open allow *, features 0x27018fb86aa42ada (jewel))"
]