Version: mimic

https://192.168.1.5:8006/pve-docs/chapter-pveceph.html#pve_ceph_osds

As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. OSD caching will use additional memory.
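On Mimic, the BlueStore cache behind that "additional memory" is bounded by the osd_memory_target option (default 4 GiB). A minimal sketch for checking it, assuming the Mimic config database is in use:

ceph config get osd osd_memory_target   # per-OSD memory budget, 4 GiB by default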
mon_command failed - pg_num 128 size 3 would mean 6147 total pgs, which exceeds max 6000 (mon_max_pg_per_osd 250 * num_in_osds 24)

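The ceiling in that message is mon_max_pg_per_osd * num_in_osds = 250 * 24 = 6000 PG instances. Creating a pool with pg_num 128 at size 3 would add 128 * 3 = 384 instances on top of the roughly 5763 already allocated, hence 6147. A hedged sketch for inspecting and raising the limit (the new value is illustrative; more PGs per OSD also means more memory per OSD):

ceph config get mon mon_max_pg_per_osd          # 250 by default on Mimic
ceph config set global mon_max_pg_per_osd 300   # would allow 300 * 24 = 7200 instances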

[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8

[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167

[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]

A PG id is composed of {pool-num}.{pg-id}.
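So 8.13 is PG 0x13 of pool 8. A quick sketch for confirming which pool owns that prefix (here pool 8 is assumed to be rbdpool):

ceph osd pool ls detail | grep "^pool 8 "   # prints the pool's name, pg_num, and size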
ceph osd lspools

[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
[root@client mnt]# rm -rf a*
Only after the delete above do the PGs below begin to clean up.
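Space is reclaimed asynchronously; a minimal sketch for watching it happen (the 5-second interval is arbitrary):

watch -n 5 'ceph pg stat; ceph df'   # re-run both summaries until USED stops shrinking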
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s

[root@ceph1 ~]# ceph pg dump
dumped all
version
stamp -- ::18.312134
last_osdmap_epoch
last_pg_scan
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
8.3f active+clean -- ::27.945410 '0 57:30 [0,1,2] 0 [0,1,2] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3e active+clean -- ::27.967178 '0 57:28 [2,1,0] 2 [2,1,0] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3d active+clean -- ::27.946169 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3c active+clean -- ::27.954775 '0 57:29 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3b active+clean -- ::27.958550 '0 57:28 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3a active+clean -- ::27.968929 '2 57:31 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.39 active+clean -- ::27.966700 '0 57:28 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.38 active+clean -- ::27.946091 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
sum
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
1.3 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
sum 3.5 GiB GiB GiB
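The OSD_STAT block above summarizes per-OSD usage; a hedged alternative view that also shows CRUSH weight and PG count per OSD:

ceph osd df tree   # per-OSD SIZE/USE/%USE plus PGS, grouped by the CRUSH tree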
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
GiB GiB 3.5 GiB 1.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
mypool B GiB
.rgw.root 1.1 KiB GiB
default.rgw.control B GiB
default.rgw.meta B GiB
default.rgw.log B GiB
cfs_data KiB GiB
cfs_meta 2.4 MiB GiB
rbdpool B GiB

[root@ceph1 ~]# ceph pg 8.1 query

[root@ceph1 ~]# ceph osd map cfs_data secure
osdmap e58 pool 'cfs_data' (6) object 'secure' -> pg 6.a67b1c61 (6.1) -> up ([,,], p2) acting ([,,], p2)
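ceph pg <pgid> query (run above without its output shown) dumps detailed JSON for a single PG; a minimal sketch for pulling out just its state, assuming jq is installed:

ceph pg 8.1 query | jq -r '.state'   # e.g. "active+clean"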
===========================================
root@cu-pve05:/mnt/pve# ceph osd pool stats
pool kyc_block01 id
  client io 0B/s rd, 0op/s rd, 0op/s wr
pool cephfs_data id
  nothing is going on
pool cephfs_metadata id
  nothing is going on
pool system_disks id
  client io 0B/s rd, 576B/s wr, 0op/s rd, 0op/s wr
pool data_disks id
  nothing is going on
pool fs01 id
  nothing is going on

root@cu-pve05:/mnt/pve# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
.4TiB .9TiB 528GiB 0.98
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
kyc_block01 130GiB 0.77 .4TiB
cephfs_data .62GiB 0.04 .4TiB
cephfs_metadata 645KiB .4TiB
system_disks .1GiB 0.19 .4TiB
data_disks 0B .4TiB
fs01 128MiB .4TiB
root@cu-pve05:/mnt/pve# ceph pg dump pgs_brief|grep ^9|wc -l
dumped pgs_brief

The count above equals the pg_num of that pool (id 9) in PVE.

PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
9.21 active+clean [,,] [,,]
9.20 active+clean [,,] [,,]
9.27 active+clean [,,] [,,]
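Grepping the full dump works, but a hedged alternative lists PGs for one pool directly (pool name assumed):

ceph pg ls-by-pool kyc_block01 | wc -l   # includes a header line, so subtract 1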
===============================
root@cu-pve05:/mnt/pve# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data .14GiB .73GiB .9GiB
cephfs_metadata 698KiB 556KiB .01MiB
fs01 128MiB 0B 256MiB
kyc_block01 133GiB 524GiB 223GiB
system_disks .1GiB .3GiB 109GiB
total_objects
total_used 539GiB
total_avail .9TiB
total_space .4TiB

Note: the COPIES column equals OBJECTS multiplied by the pool's replica count.
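A sketch for confirming the replica count behind that multiplication (pool name assumed):

ceph osd pool get kyc_block01 size   # replicated pools typically report size: 3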
===============================
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
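The per-pool PG counts noted below can be confirmed directly; a minimal sketch:

ceph osd pool get cephfs_metadata pg_num   # expected: 128
ceph osd pool get cephfs_data pg_num       # expected: 512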
:4pg
The metadata pool has 128 PGs; the data pool has 512 PGs.
placement groups objects object_size
vm--disk- 32GiB 8.19k 4MiB
8.19k * 4 MiB = 32.76 GiB; 8.19 + 0.643 = 8.833; 2.18t * 8 = 17.44
17.44 TiB * 3 replicas = 52.32 TiB, matching the observed 52.39 TiB; all PGs active+clean.
24 OSDs; each node has 1921 PGs.
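The per-image figures above come from RBD's own accounting; a hedged sketch for reproducing them (pool and image names are illustrative):

rbd info kyc_block01/vm-100-disk-0   # reports object count and the 4 MiB object size
rbd du kyc_block01/vm-100-disk-0     # provisioned vs. actually used size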
