ceph-pg
Version: mimic
https://192.168.1.5:8006/pve-docs/chapter-pveceph.html#pve_ceph_osds
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. OSD caching will use additional memory.
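By that rule, an OSD holding roughly 4 TiB of data would be expected to need on the order of 4 GiB of RAM before any cache. On a BlueStore OSD the cache/memory budget can be checked through the admin socket; a minimal sketch, assuming osd.0 is a daemon on the local node and the release in use (mimic does) has the osd_memory_target option:

# Ask a running OSD for its memory target (the BlueStore cache autotuning budget).
# osd.0 is a placeholder; run this on the node that hosts that daemon.
ceph daemon osd.0 config get osd_memory_target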
mon_command failed - pg_num 128 size 3 would mean 6147 total pgs, which exceeds max 6000 (mon_max_pg_per_osd 250 * num_in_osds 24)
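The check behind this error is simple arithmetic: the mon allows at most mon_max_pg_per_osd × num_in_osds PG placements in total, here 250 × 24 = 6000. A replicated pool created with pg_num 128 and size 3 adds 128 × 3 = 384 placements, so 6147 means the cluster already carried 6147 - 384 = 5763. Two ways around it, as a sketch only (the pool name newpool is a placeholder, and the config command should be checked against your release before relying on it):

# Option 1: create the pool with fewer PGs so the limit is not exceeded
ceph osd pool create newpool 64 64

# Option 2: raise the per-OSD PG ceiling (mimic's centralized config; setting
# mon_max_pg_per_osd in ceph.conf on the mons is the classic alternative)
ceph config set global mon_max_pg_per_osd 300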
[root@ali- dd]# ceph pg dump
dumped all
version
stamp -- ::24.077612
last_osdmap_epoch
last_pg_scan
full_ratio 0.9
nearfull_ratio 0.8
[root@ceph1 ~]# ceph pg ls
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES LOG STATE STATE_STAMP VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
1.0 active+clean -- ::54.430131 '2 57:95 [1,2,0]p1 [1,2,0]p1 2019-03-28 02:42:54.430020 2019-03-28 02:42:54.430020
1.1 active+clean -- ::33.846731 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-27 20:42:33.846600 2019-03-27 20:42:33.846600
1.2 active+clean -- ::31.853254 '0 57:92 [1,0,2]p1 [1,0,2]p1 2019-03-27 20:02:31.853127 2019-03-21 18:53:07.286885
1.3 active+clean -- ::29.499574 '0 57:94 [0,1,2]p0 [0,1,2]p0 2019-03-28 01:04:29.499476 2019-03-21 18:53:07.286885
1.4 active+clean -- ::42.694788 '0 57:77 [2,1,0]p2 [2,1,0]p2 2019-03-28 10:17:42.694658 2019-03-21 18:53:07.286885
1.5 active+clean -- ::49.922515 '0 57:78 [2,0,1]p2 [2,0,1]p2 2019-03-28 14:33:49.922414 2019-03-21 18:53:07.286885
1.6 active+clean -- ::08.897114 '0 57:78 [2,1,0]p2 [2,1,0]p2 2019-03-28 08:33:08.897044 2019-03-25 19:51:32.716535
1.7 active+clean -- ::16.417698 '0 57:92 [1,2,0]p1 [1,2,0]p1 2019-03-27 21:37:16.417553 2019-03-22 23:05:53.863908
2.0 active+clean -- ::09.127196 '1 57:155 [1,2,0]p1 [1,2,0]p1 2019-03-27 15:07:09.127107 2019-03-22 15:05:32.211389
2.1 active+clean -- ::41.958378 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 20:55:41.958328 2019-03-27 20:55:41.958328
2.2 active+clean -- ::45.117140 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-28 03:09:45.117036 2019-03-28 03:09:45.117036
2.3 active+clean -- ::17.944907 '0 57:87 [1,0,2]p1 [1,0,2]p1 2019-03-27 08:54:17.944792 2019-03-26 05:44:21.586541
2.4 active+clean -- ::52.040458 '0 57:89 [0,2,1]p0 [0,2,1]p0 2019-03-27 23:42:52.040353 2019-03-22 15:05:32.211389
2.5 active+clean -- ::15.908085 '0 57:73 [2,0,1]p2 [2,0,1]p2 2019-03-27 14:26:15.908022 2019-03-22 15:05:32.211389
2.6 active+clean -- ::22.282027 '2 57:161 [0,2,1]p0 [0,2,1]p0 2019-03-28 15:00:22.281923 2019-03-26 05:39:41.395132
2.7 active+clean -- ::39.415262 '4 57:253 [1,2,0]p1 [1,2,0]p1 2019-03-27 17:09:39.415167 2019-03-27 17:09:39.415167
[root@ceph1 rbdpool]# ceph pg map 8.13
osdmap e55 pg 8.13 (8.13) -> up [,,] acting [,,]
A PG id is made up of {pool-num}.{pg-id}.
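With the {pool-num}.{pg-id} form in mind, the PGs belonging to a single pool can also be listed by pool name; a small sketch, using the rbdpool pool from this cluster:

# List only the PGs that belong to one pool
ceph pg ls-by-pool rbdpool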
ceph osd lspools shows each pool's number, which is the {pool-num} part.
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; GiB data, GiB used, 8.4 GiB / GiB avail
[root@client mnt]# rm -rf a*
Only after the delete operation above do the PGs below start to clean up.
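Deleted data is reclaimed asynchronously, so the numbers drop over time rather than immediately. One way to follow the cleanup is simply to poll the status; a minimal sketch:

# Re-run the status every 5 seconds and watch the data/used figures shrink
watch -n 5 'ceph pg stat; ceph df'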
[root@ceph1 rbdpool]# ceph pg stat
pgs: active+clean; 2.5 MiB data, 3.5 GiB used, GiB / GiB avail; 8.7 KiB/s rd, B/s wr, op/s
[root@ceph1 ~]# ceph pg dump
dumped all
version
stamp -- ::18.312134
last_osdmap_epoch
last_pg_scan
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE STATE_STAMP VERSION REPORTED UP UP_PRIMARY ACTING ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP LAST_DEEP_SCRUB DEEP_SCRUB_STAMP SNAPTRIMQ_LEN
8.3f active+clean -- ::27.945410 '0 57:30 [0,1,2] 0 [0,1,2] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3e active+clean -- ::27.967178 '0 57:28 [2,1,0] 2 [2,1,0] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3d active+clean -- ::27.946169 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3c active+clean -- ::27.954775 '0 57:29 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3b active+clean -- ::27.958550 '0 57:28 [1,2,0] 1 [1,2,0] 1 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.3a active+clean -- ::27.968929 '2 57:31 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.39 active+clean -- ::27.966700 '0 57:28 [2,0,1] 2 [2,0,1] 2 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
8.38 active+clean -- ::27.946091 '0 57:29 [0,2,1] 0 [0,2,1] 0 0' -- ::20.276896 '0 2019-03-27 17:58:20.276896 0
sum
OSD_STAT USED AVAIL TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
1.3 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
1.1 GiB GiB GiB [,]
sum 3.5 GiB GiB GiB
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
GiB GiB 3.5 GiB 1.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
mypool B GiB
.rgw.root 1.1 KiB GiB
default.rgw.control B GiB
default.rgw.meta B GiB
default.rgw.log B GiB
cfs_data KiB GiB
cfs_meta 2.4 MiB GiB
rbdpool B GiB
[root@ceph1 ~]# ceph pg 8.1 query
[root@ceph1 ~]# ceph osd map cfs_data secure
osdmap e58 pool 'cfs_data' () object 'secure' -> pg 6.a67b1c61 (6.1) -> up ([,,], p2) acting ([,,], p2)
===========================================
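As a side note on the ceph osd map output above: the object name is hashed, the hash is taken modulo the pool's pg_num to pick the PG (here 6.1), and CRUSH then maps that PG to the up/acting OSD set. The object does not even have to exist for the mapping to be computed, so the command is safe to try with any name; a sketch (some_object_name is just a placeholder):

# Compute, without creating anything, where an arbitrary object name would land
ceph osd map cfs_data some_object_name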
root@cu-pve05:/mnt/pve# ceph osd pool stats
pool kyc_block01 id
  client io 0B/s rd, 0op/s rd, 0op/s wr
pool cephfs_data id
  nothing is going on
pool cephfs_metadata id
  nothing is going on
pool system_disks id
  client io 0B/s rd, 576B/s wr, 0op/s rd, 0op/s wr
pool data_disks id
  nothing is going on
pool fs01 id
  nothing is going on
root@cu-pve05:/mnt/pve# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
.4TiB .9TiB 528GiB 0.98
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
kyc_block01 130GiB 0.77 .4TiB
cephfs_data .62GiB 0.04 .4TiB
cephfs_metadata 645KiB .4TiB
system_disks .1GiB 0.19 .4TiB
data_disks 0B .4TiB
fs01 128MiB .4TiB
root@cu-pve05:/mnt/pve# ceph pg dump pgs_brief|grep ^|wc -l
dumped pgs_brief
The count above is exactly the pg_num of that pool as shown in PVE.
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
9.21 active+clean [,,] [,,]
9.20 active+clean [,,] [,,]
9.27 active+clean [,,] [,,]
===============================
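Instead of counting the pgs_brief lines that start with the pool id, the pg_num of a pool can also be read directly; a sketch using the kyc_block01 pool from the ceph df output above:

ceph osd pool get kyc_block01 pg_num    # prints "pg_num: <value>"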
root@cu-pve05:/mnt/pve# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
cephfs_data .14GiB .73GiB .9GiB
cephfs_metadata 698KiB 556KiB .01MiB
fs01 128MiB 0B 256MiB
kyc_block01 133GiB 524GiB 223GiB
system_disks .1GiB .3GiB 109GiB
total_objects
total_used 539GiB
total_avail .9TiB
total_space .4TiB
Note: OBJECTS multiplied by the pool's replica count gives COPIES.
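The replica count that turns OBJECTS into COPIES is the pool's size attribute, which can be confirmed per pool; a sketch:

ceph osd pool get kyc_block01 size    # prints "size: <replica count>"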
===============================
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
:4pg
The metadata pool has 128 PGs.
The data pool has 512 PGs.
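For reference, pools with those PG counts would be created roughly like this when setting up a CephFS from scratch; a sketch only, not the exact commands used for this cluster (pool and filesystem names taken from the ceph fs ls line above):

ceph osd pool create cephfs_metadata 128
ceph osd pool create cephfs_data 512
ceph fs new cephfs cephfs_metadata cephfs_data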
placement groups objects object_size
vm--disk- 32GiB 8.19k 4MiB
8.19k objects * 4 MiB = 32.76 GiB; 8.19 + 0.643 = 8.833; .18t*=17.44
17.44 * 3 replicas = 52.32
52.39 TiB; PGs active+clean.
24 OSDs; each node has 1921 PGs.
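A common rule of thumb for sizing the total PG count of a cluster is (number of OSDs × 100) / replica size, rounded to a nearby power of two and then split across the pools. With 24 OSDs and size 3 that gives 800, so 512 or 1024 in total; a quick sketch of the arithmetic:

# Rule-of-thumb total PG count (then round to a power of two and divide among pools)
num_osds=24; replica_size=3
echo $(( num_osds * 100 / replica_size ))    # 800 -> use 512 or 1024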