ceph-cluster map
We know the cluster topology from these five cluster maps: the monitor map, the OSD map, the PG map, the CRUSH map, and the MDS map.
======================================
Related commands
Tab completion is available, just like on a switch CLI.
ceph mon dump
ceph osd dump
ceph fs dump
ceph pg dump
# The CRUSH map is stored in binary form, so it has to be decompiled to get readable text:
ceph osd getcrushmap -o crush
crushtool -d crush -o crush1
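A minimal sketch for snapshotting all five maps at once before making changes (the output directory is just an example, assuming a working ceph CLI with the admin keyring):

outdir=/tmp/cluster-maps-$(date +%Y%m%d)   # hypothetical location
mkdir -p "$outdir"
ceph mon dump  > "$outdir/monmap.txt"
ceph osd dump  > "$outdir/osdmap.txt"
ceph fs dump   > "$outdir/fsmap.txt"
ceph pg dump   > "$outdir/pgmap.txt"
ceph osd getcrushmap -o "$outdir/crushmap.bin"                   # binary form
crushtool -d "$outdir/crushmap.bin" -o "$outdir/crushmap.txt"    # decompiled text form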
======================================
[root@ali- dd]# ceph mon dump
dumped monmap epoch
epoch
fsid 69e6081b-075f-4f39-8cf3-f1e5bd68908b
last_changed -- ::31.228140
created -- ::21.704124
: 192.168.3.51:/ mon.ali-
: 192.168.3.52:/ mon.ali-
: 192.168.3.53:/ mon.ali-
======================================
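Beyond dumping the mon map itself, the monitors' quorum state can be checked with standard commands; a quick sketch:

ceph mon stat                              # one-line summary: epoch, members, quorum
ceph quorum_status --format json-pretty    # detailed quorum view, including the current leader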
[root@ali- dd]# ceph fs dump
dumped fsmap epoch
e1
enable_multiple, ever_enabled_multiple: ,
compat: compat={},rocompat={},incompat={=base v0.,=client writeable ranges,=default file layouts on dirs,=dir inode in separate object,=mds uses versioned encoding,=dirfrag is stored in omap,=file layout v2}
legacy client fscid: -
No filesystems configured
======================================
[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8
[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167
[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]
A PG id is made up of {pool-num}.{pg-id}.
The pool numbers can be listed with ceph osd lspools.
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
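Since a PG id is {pool-num}.{pg-id}, the placement of any individual object can be computed without writing it; a sketch where the pool and object names are placeholders:

ceph osd lspools                       # find the pool's numeric id and name
ceph osd map rbdpool someobject        # hypothetical pool/object: prints the PG id plus the up/acting OSD set
ceph pg map 8.13                       # same lookup starting from a PG id, as shown above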
[root@client mnt]# rm -rf a*
Only after the delete above do the PGs below start to clean up their data.
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s
======================================
[root@ali- dd]# ceph osd getcrushmap -o crush
[root@ali- dd]# file crush
crush: MS Windows icon resource - icons, -colors
[root@ali- dd]# crushtool -d crush -o crush1
[root@ali- dd]# file crush1
crush1: ASCII text
[root@ali- dd]# cat crush1
# begin crush map
tunable choose_local_tries
tunable choose_local_fallback_tries
tunable choose_total_tries
......
rule pool-d83c6154956b44aea7639c7bd4c45c65-rule {
id
type replicated
min_size
max_size
step take pool-d83c6154956b44aea7639c7bd4c45c65-root
step chooseleaf firstn type rack
step emit
}
# end crush map
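Before relying on an edited rule, crushtool can simulate placements offline against the compiled map fetched above, and a modified text map can be compiled and injected back. A sketch -- the rule id 0 is an assumption, use the id printed in the decompiled text:

crushtool -i crush --test --rule 0 --num-rep 3 --show-mappings | head    # sample input -> OSD mappings
crushtool -c crush1 -o crush.new        # recompile the edited text map
ceph osd setcrushmap -i crush.new       # inject it into the cluster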
[root@ali- dd]#
======================================
[root@ali- dd]# ceph osd dump
epoch
fsid 69e6081b-075f-4f39-8cf3-f1e5bd68908b
created -- ::22.409031
modified -- ::38.522821
flags nodeep-scrub,sortbitwise,recovery_deletes,purged_snapdirs
crush_version
full_ratio 0.9
backfillfull_ratio 0.85
nearfull_ratio 0.8
omap_full_ratio 0.9
omap_backfillfull_ratio 0.85
omap_nearfull_ratio 0.8
require_min_compat_client luminous
min_compat_client luminous
require_osd_release luminous
pool 'pool-d83c6154956b44aea7639c7bd4c45c65' replicated size min_size crush_rule object_hash rjenkins pg_num pgp_num last_change flags hashpspool stripe_width async_recovery_max_updates osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
removed_snaps [~]
max_osd
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ autoout,exists 54e32850-b1ef-44e1-8df9-d3c93bfe4807
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ autoout,exists 17af8207-2a25-405b-b87d-1c6d7806cc8d
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ autoout,exists 06cf6578-e516-4e4a-a494-10423b8999cd
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ autoout,exists bc31e4ab-a135--81b3-e92969921ba7
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ exists,up 62edd341-50b8-4cca-852f-852a51f96760
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ exists,up 00d0cd89-2e74--b4b4-6deaf465b97e
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ exists,up 8ed2597f-1a92-4b90--43b7953cffea
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.3.53:/ 192.168.1.53:/ exists,up f5723232-3f04-4c22--bdc69d7bcff6
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up f75a6ee5-cd79-499c--db400f0bed93
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up 30431fd9-306c--a5bd-cf6b9bc77ca1
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up 6ed49e4d-d640--957e-94d2f4ba055f
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up -5c5e-475c-8b41-d58980da3f43
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up 6168f2cd-de56--8fe5-c80e93f134cd
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up 26e54a1c-601a-4f3b-afdc-a0c5b140affc
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up fa366bda-3ac8---b156acffb4aa
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.3.52:/ 192.168.1.52:/ exists,up e9a16507--465c-af80-9d371a9018ad
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ autoout,exists c39c2030-4ad2-49b2-a2bd-d6f26d9cc2c8
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ autoout,exists 9fa68652-dda8-485a--92d109bc7283
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ autoout,exists f91dc889-379d-427a--9525deb70603
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ exists,up 254c1dc1-c5aa-406d-a144-408c757f6b34
osd. down out weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ autoout,exists c13c44fd-397f-465d-bc14-917e8899e2fd
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ exists,up c5028149-28ec-4bd4-a5fe-3d13bdb82c6a
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ exists,up 27c2a32e-eef3-41c9--15246fb20ac4
osd. up in weight up_from up_thru down_at last_clean_interval [,) 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.3.51:/ 192.168.1.51:/ exists,up 4f877615-df0d-40d0-a351-a21dc518c3f4
pg_upmap_items 1.1 [,]
pg_upmap_items 1.2 [,,,]
pg_upmap_items 1.3 [,]
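The pg_upmap_items entries are explicit per-PG overrides of CRUSH placement, available since Luminous (which matches the require_min_compat_client luminous above). A hedged sketch of creating and removing such an override -- the PG id and OSD ids are placeholders:

ceph osd pg-upmap-items 1.1 3 7     # in PG 1.1's mapping, replace OSD 3 with OSD 7
ceph osd rm-pg-upmap-items 1.1      # drop the override and return to plain CRUSH placement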