Ceph object storage scenario
Install ceph-radosgw
[root@ceph-node1 ~]# cd /etc/ceph
# Note: the ceph yum repo used here must be the same version as the previously installed ceph cluster
[root@ceph-node1 ceph]# sudo yum install -y ceph-radosgw
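As a quick optional sanity check, the installed gateway package version can be compared against the running cluster:
[root@ceph-node1 ceph]# ceph --version
[root@ceph-node1 ceph]# rpm -q ceph-radosgw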
Create the RGW user and keyring
Create the keyring on the ceph-node1 server:
[root@ceph-node1 ceph]# sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
[root@ceph-node1 ceph]# sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
Generate the user and key for the ceph-radosgw service:
[root@ceph-node1 ceph]# sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
Grant access capabilities to the user:
[root@ceph-node1 ceph]# sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Import the keyring into the cluster:
[root@ceph-node1 ceph]# sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
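The walkthrough does not show the gateway's ceph.conf section or how the daemon is started. A minimal sketch follows, assuming the default civetweb frontend on port 7480 and a systemd instance name derived from client.radosgw.gateway (verify both against your own setup):
# Sketch only: RGW section in /etc/ceph/ceph.conf on ceph-node1 (assumed, not from the original)
[client.radosgw.gateway]
host = ceph-node1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw_frontends = "civetweb port=7480"
log_file = /var/log/ceph/client.radosgw.gateway.log
# Start and enable the gateway; the instance name is assumed from client.radosgw.gateway
[root@ceph-node1 ceph]# sudo systemctl enable --now ceph-radosgw@radosgw.gateway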
Create the pools
Since RGW requires dedicated pools to store its data, create these pools manually; the loop shown after the list is an equivalent shortcut. Run the following on the admin node:
ceph osd pool create .rgw 64 64
ceph osd pool create .rgw.root 64 64
ceph osd pool create .rgw.control 64 64
ceph osd pool create .rgw.gc 64 64
ceph osd pool create .rgw.buckets 64 64
ceph osd pool create .rgw.buckets.index 64 64
ceph osd pool create .rgw.buckets.extra 64 64
ceph osd pool create .log 64 64
ceph osd pool create .intent-log 64 64
ceph osd pool create .usage 64 64
ceph osd pool create .users 64 64
ceph osd pool create .users.email 64 64
ceph osd pool create .users.swift 64 64
ceph osd pool create .users.uid 64 64
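Equivalently, the same pools can be created with a short shell loop (same names and PG counts as above):
for pool in .rgw .rgw.root .rgw.control .rgw.gc .rgw.buckets .rgw.buckets.index \
            .rgw.buckets.extra .log .intent-log .usage .users .users.email \
            .users.swift .users.uid; do
    ceph osd pool create "$pool" 64 64
done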
List the pools to confirm they were all created successfully:
[root@ceph-admin ~]# rados lspools
cephfs_data
cephfs_metadata
rbd_data
.rgw
.rgw.root
.rgw.control
.rgw.gc
.rgw.buckets
.rgw.buckets.index
.rgw.buckets.extra
.log
.intent-log
.usage
.users
.users.email
.users.swift
.users.uid
default.rgw.control
default.rgw.meta
default.rgw.log
[root@ceph-admin ~]#
Error: too many PGs per OSD (492 > max 250). With 3-way replication, each OSD carries roughly (total PGs × replica size ÷ number of OSDs) PGs, so creating all of the RGW pools above pushed the per-OSD count past the monitor's limit.
[cephfsd@ceph-admin ceph]$ ceph -s
cluster:
id: 6d3fd8ed-d630-48f7-aa8d-ed79da7a69eb
health: HEALTH_ERR
779 PGs pending on creation
Reduced data availability: 236 pgs inactive
application not enabled on 1 pool(s)
14 slow requests are blocked > 32 sec. Implicated osds
58 stuck requests are blocked > 4096 sec. Implicated osds 3,5
too many PGs per OSD (492 > max 250)
services:
mon: 1 daemons, quorum ceph-admin
mgr: ceph-admin(active)
mds: cephfs-1/1/1 up {0=ceph-node3=up:active}, 2 up:standby
osd: 6 osds: 6 up, 6 in
rgw: 1 daemon active
data:
pools: 19 pools, 1104 pgs
objects: 153 objects, 246MiB
usage: 18.8GiB used, 161GiB / 180GiB avail
pgs: 10.779% pgs unknown
10.598% pgs not active
868 active+clean
119 unknown
117 creating+activating
Modify ceph.conf on the admin node
[cephfsd@ceph-admin ceph]$ vim /etc/ceph/ceph.conf
mon_max_pg_per_osd = 1000
mon_pg_warn_max_per_osd = 1000
# Restart the services
[cephfsd@ceph-admin ceph]$ systemctl restart ceph-mon.target
[cephfsd@ceph-admin ceph]$ systemctl restart ceph-mgr.target
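To confirm the new limit is actually in effect, the running value can be read back from the monitor's admin socket (the daemon name mon.ceph-admin matches this cluster's single monitor):
[cephfsd@ceph-admin ceph]$ sudo ceph daemon mon.ceph-admin config show | grep pg_per_osd
Alternatively, the setting can be injected at runtime with ceph tell mon.* injectargs '--mon_max_pg_per_osd=1000', though the restart above also picks it up from ceph.conf.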
[cephfsd@ceph-admin ceph]$ ceph -s
cluster:
id: 6d3fd8ed-d630-48f7-aa8d-ed79da7a69eb
health: HEALTH_ERR
779 PGs pending on creation
Reduced data availability: 236 pgs inactive
application not enabled on 1 pool(s)
14 slow requests are blocked > 32 sec. Implicated osds
62 stuck requests are blocked > 4096 sec. Implicated osds 3,5
services:
mon: 1 daemons, quorum ceph-admin
mgr: ceph-admin(active)
mds: cephfs-1/1/1 up {0=ceph-node3=up:active}, 2 up:standby
osd: 6 osds: 6 up, 6 in
rgw: 1 daemon active
data:
pools: 19 pools, 1104 pgs
objects: 153 objects, 246MiB
usage: 18.8GiB used, 161GiB / 180GiB avail
pgs: 10.779% pgs unknown
10.598% pgs not active
868 active+clean
119 unknown
117 creating+activating
[cephfsd@ceph-admin ceph]$
The error "Implicated osds 3,5" is still reported
# Check which nodes osd.3 and osd.5 are on
[cephfsd@ceph-admin ceph]$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17578 root default
-3 0.05859 host ceph-node1
0 ssd 0.00980 osd.0 up 1.00000 1.00000
3 ssd 0.04880 osd.3 up 1.00000 1.00000
-5 0.05859 host ceph-node2
1 ssd 0.00980 osd.1 up 1.00000 1.00000
4 ssd 0.04880 osd.4 up 1.00000 1.00000
-7 0.05859 host ceph-node3
2 ssd 0.00980 osd.2 up 1.00000 1.00000
5 ssd 0.04880 osd.5 up 1.00000 1.00000
[cephfsd@ceph-admin ceph]$
Go to the corresponding nodes and restart the OSD services
# Restart osd.3 on ceph-node1
[root@ceph-node1 ceph]# systemctl restart ceph-osd@3.service
# Restart osd.5 on ceph-node3
[root@ceph-node3 ceph]# systemctl restart ceph-osd@5.service
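If slow or stuck requests persist after the restarts, the blocked operations can be inspected on the node that owns each OSD through its admin socket (a sketch, not part of the original steps):
[root@ceph-node1 ceph]# ceph daemon osd.3 dump_blocked_ops
[root@ceph-node3 ceph]# ceph daemon osd.5 dump_blocked_ops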
Check the cluster health again; it now reports "application not enabled on 1 pool(s)"
[cephfsd@ceph-admin ceph]$ ceph -s
cluster:
id: 6d3fd8ed-d630-48f7-aa8d-ed79da7a69eb
health: HEALTH_WARN
643 PGs pending on creation
Reduced data availability: 225 pgs inactive, 53 pgs peering
Degraded data redundancy: 62/459 objects degraded (13.508%), 27 pgs degraded
application not enabled on 1 pool(s)
services:
mon: 1 daemons, quorum ceph-admin
mgr: ceph-admin(active)
mds: cephfs-1/1/1 up {0=ceph-node3=up:active}, 2 up:standby
osd: 6 osds: 6 up, 6 in
rgw: 1 daemon active
data:
pools: 19 pools, 1104 pgs
objects: 153 objects, 246MiB
usage: 18.8GiB used, 161GiB / 180GiB avail
pgs: 5.163% pgs unknown
24.094% pgs not active
62/459 objects degraded (13.508%)
481 active+clean
273 active+undersized
145 peering
69 creating+activating+undersized
57 unknown
34 creating+activating
27 active+undersized+degraded
14 stale+creating+activating
4 creating+peering
[cephfsd@ceph-admin ceph]$
# Check the detailed health information
[root@ceph-admin ~]# ceph health detail
HEALTH_WARN Reduced data availability: 175 pgs inactive; application not enabled on 1 pool(s); 6 slow requests are blocked > 32 sec. Implicated osds 4,5
PG_AVAILABILITY Reduced data availability: 175 pgs inactive
pg 24.20 is stuck inactive for 13882.112368, current state creating+activating, last acting [5,4,3]
pg 24.21 is stuck inactive for 13882.112368, current state creating+activating, last acting [5,3,4]
pg 24.2c is stuck inactive for 13882.112368, current state creating+activating, last acting [3,2,4]
pg 24.32 is stuck inactive for 13882.112368, current state creating+activating, last acting [5,3,4]
pg 25.9 is stuck inactive for 13881.051411, current state creating+activating, last acting [3,5,4]
pg 25.20 is stuck inactive for 13881.051411, current state creating+activating, last acting [5,3,4]
pg 25.21 is stuck inactive for 13881.051411, current state creating+activating, last acting [3,4,2]
pg 25.22 is stuck inactive for 13881.051411, current state creating+activating, last acting [5,4,3]
pg 25.25 is stuck inactive for 13881.051411, current state creating+activating, last acting [3,4,5]
pg 25.29 is stuck inactive for 13881.051411, current state creating+activating, last acting [3,5,4]
pg 25.2a is stuck inactive for 13881.051411, current state creating+activating, last acting [0,5,4]
pg 25.2b is stuck inactive for 13881.051411, current state creating+activating, last acting [5,4,3]
pg 25.2c is stuck inactive for 13881.051411, current state creating+activating, last acting [3,4,2]
pg 25.2f is stuck inactive for 13881.051411, current state creating+activating, last acting [3,2,4]
pg 25.33 is stuck inactive for 13881.051411, current state creating+activating, last acting [5,4,0]
pg 26.a is stuck inactive for 13880.050194, current state creating+activating, last acting [5,4,3]
pg 26.20 is stuck inactive for 13880.050194, current state creating+activating, last acting [3,5,4]
pg 26.21 is stuck inactive for 13880.050194, current state creating+activating, last acting [3,4,5]
pg 26.22 is stuck inactive for 13880.050194, current state creating+activating, last acting [5,3,4]
pg 26.23 is stuck inactive for 736.400482, current state unknown, last acting []
pg 26.24 is stuck inactive for 13880.050194, current state creating+activating, last acting [5,4,3]
pg 26.25 is stuck inactive for 13880.050194, current state creating+activating, last acting [2,4,3]
pg 26.26 is stuck inactive for 736.400482, current state unknown, last acting []
pg 26.27 is stuck inactive for 13880.050194, current state creating+activating, last acting [0,5,4]
pg 26.28 is stuck inactive for 13880.050194, current state creating+activating, last acting [5,4,3]
pg 26.29 is stuck inactive for 736.400482, current state unknown, last acting []
pg 26.2a is stuck inactive for 13880.050194, current state creating+activating, last acting [3,2,4]
pg 26.2b is stuck inactive for 13880.050194, current state creating+activating, last acting [5,4,3]
pg 26.2c is stuck inactive for 13880.050194, current state creating+activating, last acting [3,5,4]
pg 26.2d is stuck inactive for 13880.050194, current state creating+activating, last acting [5,4,3]
pg 26.2e is stuck inactive for 736.400482, current state unknown, last acting []
pg 26.30 is stuck inactive for 13880.050194, current state creating+activating, last acting [3,5,4]
pg 26.31 is stuck inactive for 13880.050194, current state creating+activating, last acting [3,4,2]
pg 27.a is stuck inactive for 13877.888382, current state creating+activating, last acting [3,5,4]
pg 27.b is stuck inactive for 13877.888382, current state creating+activating, last acting [3,2,4]
pg 27.20 is stuck inactive for 13877.888382, current state creating+activating, last acting [5,4,3]
pg 27.21 is stuck inactive for 13877.888382, current state creating+activating, last acting [3,5,4]
pg 27.22 is stuck inactive for 13877.888382, current state creating+activating, last acting [5,3,4]
pg 27.23 is stuck inactive for 736.400482, current state unknown, last acting []
pg 27.24 is stuck inactive for 13877.888382, current state creating+activating, last acting [3,5,4]
pg 27.25 is stuck inactive for 736.400482, current state unknown, last acting []
pg 27.27 is stuck inactive for 13877.888382, current state creating+activating, last acting [5,3,4]
pg 27.28 is stuck inactive for 736.400482, current state unknown, last acting []
pg 27.29 is stuck inactive for 736.400482, current state unknown, last acting []
pg 27.2a is stuck inactive for 13877.888382, current state creating+activating, last acting [5,3,4]
pg 27.2b is stuck inactive for 736.400482, current state unknown, last acting []
pg 27.2c is stuck inactive for 13877.888382, current state creating+activating, last acting [3,4,2]
pg 27.2d is stuck inactive for 13877.888382, current state creating+activating, last acting [3,2,4]
pg 27.2f is stuck inactive for 13877.888382, current state creating+activating, last acting [3,4,5]
pg 27.30 is stuck inactive for 13877.888382, current state creating+activating, last acting [3,4,5]
pg 27.31 is stuck inactive for 13877.888382, current state creating+activating, last acting [3,5,4]
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
REQUEST_SLOW 6 slow requests are blocked > 32 sec. Implicated osds 4,5
4 ops are blocked > 262.144 sec
1 ops are blocked > 65.536 sec
1 ops are blocked > 32.768 sec
osds 4,5 have blocked requests > 262.144 sec
[root@ceph-admin ~]#
# Enable the rgw application on the pool and the warning clears
[root@ceph-admin ~]# ceph osd pool application enable .rgw.root rgw
enabled application 'rgw' on pool '.rgw.root'
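The enabled applications on the pool can be read back afterwards as an optional check:
[root@ceph-admin ~]# ceph osd pool application get .rgw.root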
# Check the health again; it is OK now
[root@ceph-admin ~]# ceph -s
cluster:
id: 6d3fd8ed-d630-48f7-aa8d-ed79da7a69eb
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph-admin
mgr: ceph-admin(active)
mds: cephfs-1/1/1 up {0=ceph-node3=up:active}, 2 up:standby
osd: 6 osds: 6 up, 6 in
rgw: 3 daemons active
data:
pools: 20 pools, 1112 pgs
objects: 336 objects, 246MiB
usage: 19.1GiB used, 161GiB / 180GiB avail
pgs: 1112 active+clean
[root@ceph-admin ~]#
# Port 7480 is now listening on ceph-node1
[root@ceph-node1 ceph]# netstat -anplut|grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 45110/radosgw
[root@ceph-node1 ceph]#
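As a final smoke test (not part of the original steps), the gateway can be queried directly: an anonymous request to the S3 endpoint returns an empty ListAllMyBucketsResult XML document, and an S3 test user can then be created with radosgw-admin (the uid and display name below are placeholders):
# Anonymous request against the local gateway
[root@ceph-node1 ceph]# curl http://127.0.0.1:7480
# Create an S3 user; the access_key/secret_key in the output are what S3 clients such as s3cmd use
[root@ceph-node1 ceph]# radosgw-admin user create --uid=testuser --display-name="Test User"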
References: http://www.strugglesquirrel.com/2019/04/23/centos7%E9%83%A8%E7%BD%B2ceph/
http://docs.ceph.org.cn/radosgw/