As the business grows, the existing storage capacity becomes insufficient and new storage nodes have to be added to the Ceph cluster. This walkthrough uses adding a new node, ceph-host-05, as the example.
 
Preparation
Add the entry 10.30.1.225 ceph-host-05 to the hosts file on every node, and make /etc/hosts on ceph-host-05 read as follows:
[root@ceph-host-05 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.30.1.221 ceph-host-01
10.30.1.222 ceph-host-02
10.30.1.223 ceph-host-03
10.30.1.224 ceph-host-04
10.30.1.225 ceph-host-05
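To append the same entry on the existing nodes, a quick loop from the deploy node can be used (a minimal sketch, assuming ceph-host-01 already has passwordless SSH to the other nodes):
[root@ceph-host-01 ~]# for h in ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04; do ssh root@$h "echo '10.30.1.225 ceph-host-05' >> /etc/hosts"; done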
 
Add the Aliyun Ceph repository. The baseurl and gpgkey entries below point at the Nautilus el7 mirror, which matches this cluster; adjust the release and distro path for other environments.
[root@ceph-host-05 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
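After saving the repo file, refreshing the yum cache confirms the new repository is actually reachable:
[root@ceph-host-05 ~]# yum clean all && yum makecache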
 
 
Copy the SSH key of ceph-host-01 (the ceph-deploy node) to ceph-host-05:
[root@ceph-host-01 ceph-cluster]# ssh-copy-id ceph-host-05
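If ceph-host-01 does not yet have an SSH key pair, generate one before running ssh-copy-id (RSA and the default path are just the usual choices):
[root@ceph-host-01 ceph-cluster]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa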
 
Install the ceph and ceph-radosgw packages on ceph-host-05 from the deploy node:
[root@ceph-host-01 ceph-cluster]#  ceph-deploy install  --no-adjust-repos ceph-host-05
 
Alternatively, install them manually on the new node:
[root@ceph-host-05 ~]# yum install ceph ceph-radosgw -y
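Either way, it is worth confirming that the version installed on ceph-host-05 matches the running cluster (Nautilus here), for example:
[root@ceph-host-05 ~]# ceph --version
[root@ceph-host-01 ceph-cluster]# ceph versions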
 
From the admin node, copy the configuration file and the admin keyring to the new Ceph node:
 
[root@ceph-host-01 ceph-cluster]# ceph-deploy admin  ceph-host-05
 
The two listings below show /etc/ceph/ on ceph-host-05 before and after running ceph-deploy admin: the configuration file ceph.conf and the admin keyring ceph.client.admin.keyring have appeared. Copying the two files over from the admin node with scp would work just as well.
[root@ceph-host-05 ~]# ls -lh /etc/ceph/
total 4.0K
-rw-r--r-- 1 root root 92 Feb  1 02:09 rbdmap
[root@ceph-host-05 ~]# ls -lh /etc/ceph/
total 12K
-rw------- 1 root root 151 Feb  4 21:28 ceph.client.admin.keyring
-rw-r--r-- 1 root root 644 Feb  4 21:28 ceph.conf
-rw-r--r-- 1 root root  92 Feb  1 02:09 rbdmap
-rw------- 1 root root   0 Feb  4 21:28 tmpeiAD3g
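As mentioned above, the same two files can simply be copied over with scp instead (paths assume the admin node keeps its copies under /etc/ceph/):
[root@ceph-host-01 ceph-cluster]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring ceph-host-05:/etc/ceph/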
 
Give ceph.client.admin.keyring read permission on each node (this makes the keyring world-readable, which is acceptable for a test cluster but should be tightened in production):
# chmod +r /etc/ceph/ceph.client.admin.keyring
 
With that in place, the cluster status can be queried from the new node:
[root@ceph-host-05 ~]# ceph -s
  cluster:
    id:     272905d2-fd66-4ef6-a772-9cd73a274683
    health: HEALTH_WARN
            3 daemons have recently crashed
            1/3 mons down, quorum ceph-host-02,ceph-host-03
  services:
    mon: 3 daemons, quorum ceph-host-02,ceph-host-03 (age 31m), out of quorum: ceph-host-01
    mgr: ceph-host-02(active, since 31m), standbys: ceph-host-01, ceph-host-03
    mds: nova:1 {0=ceph-host-02=up:active} 1 up:standby
    osd: 15 osds: 15 up (since 44m), 15 in (since 3h)
  data:
    pools:   2 pools, 128 pgs
    objects: 423 objects, 1.4 GiB
    usage:   21 GiB used, 1.1 TiB / 1.2 TiB avail
    pgs:     128 active+clean
  io:
    client:   5.7 KiB/s rd, 46 KiB/s wr, 1 op/s rd, 4 op/s wr
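Note that the HEALTH_WARN shown here predates the new node: the monitor on ceph-host-01 is out of quorum and a few old crash reports are unacknowledged. A sketch of how both might be cleared, assuming the mon id is the hostname (the ceph-deploy default) and the daemon only needs a restart:
[root@ceph-host-01 ~]# systemctl restart ceph-mon@ceph-host-01
[root@ceph-host-01 ~]# ceph crash ls
[root@ceph-host-01 ~]# ceph crash archive-all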
 
Now add the vdb disk on the new node ceph-host-05 to the cluster as an OSD:
[root@ceph-host-01 ceph-cluster]# ceph-deploy osd create --data /dev/vdb ceph-host-05
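If /dev/vdb is not a clean disk (for example, it carries an old partition table or LVM metadata), it can be inspected and wiped before the create step above:
[root@ceph-host-01 ceph-cluster]# ceph-deploy disk list ceph-host-05
[root@ceph-host-01 ceph-cluster]# ceph-deploy disk zap ceph-host-05 /dev/vdb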
 
Check the new OSD in the CRUSH tree:
[root@ceph-host-05 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
-1       1.23340 root default                                  
-3       0.30835     host ceph-host-01                         
  0   hdd 0.07709         osd.0             up  1.00000 1.00000
  4   hdd 0.07709         osd.4             up  1.00000 1.00000
  8   hdd 0.07709         osd.8           down  1.00000 1.00000
12   hdd 0.07709         osd.12            up  1.00000 1.00000
-5       0.23126     host ceph-host-02                         
  1   hdd 0.07709         osd.1             up  1.00000 1.00000
  5   hdd 0.07709         osd.5             up  1.00000 1.00000
  9   hdd 0.07709         osd.9             up  1.00000 1.00000
-7       0.30835     host ceph-host-03                         
  2   hdd 0.07709         osd.2             up  1.00000 1.00000
  6   hdd 0.07709         osd.6             up  1.00000 1.00000
10   hdd 0.07709         osd.10            up  1.00000 1.00000
13   hdd 0.07709         osd.13            up  1.00000 1.00000
-9       0.30835     host ceph-host-04                         
  3   hdd 0.07709         osd.3             up  1.00000 1.00000
  7   hdd 0.07709         osd.7             up  1.00000 1.00000
11   hdd 0.07709         osd.11            up  1.00000 1.00000
14   hdd 0.07709         osd.14            up  1.00000 1.00000
-11       0.07709     host ceph-host-05                         
15   hdd 0.07709         osd.15            up  1.00000 1.00000
 
[root@ceph-host-05 ~]# ceph osd dump
epoch 465
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
created 2020-02-03 03:13:00.528959
modified 2020-02-04 21:33:51.679093
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 35
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 6 'nova-metadata' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 7 'nova-data' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 application cephfs
max_osd 16
osd.0 up   in  weight 1 up_from 423 up_thru 457 down_at 418 last_clean_interval [328,417) [v2:10.30.1.221:6802/7327,v1:10.30.1.221:6803/7327] [v2:192.168.9.211:6808/7327,v1:192.168.9.211:6809/7327] exists,up 5903a2c7-ca1f-4eb8-baff-2583e0db38c8
osd.1 up   in  weight 1 up_from 457 up_thru 458 down_at 451 last_clean_interval [278,449) [v2:10.30.1.222:6802/5678,v1:10.30.1.222:6803/5678] [v2:192.168.9.212:6800/5678,v1:192.168.9.212:6801/5678] exists,up bd1f8700-c318-4a35-a0ac-16b16e9c1179
osd.2 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [272,426) [v2:10.30.1.223:6810/3927,v1:10.30.1.223:6812/3927] [v2:192.168.9.213:6810/3927,v1:192.168.9.213:6812/3927] exists,up 1d4e71da-1956-48bb-bf93-af6c4eae0799
osd.3 up   in  weight 1 up_from 355 up_thru 458 down_at 351 last_clean_interval [275,352) [v2:10.30.1.224:6802/3856,v1:10.30.1.224:6803/3856] [v2:192.168.9.214:6802/3856,v1:192.168.9.214:6803/3856] exists,up ecd3b813-c1d7-4612-8448-a9834af18d8f
osd.4 up   in  weight 1 up_from 400 up_thru 457 down_at 392 last_clean_interval [273,389) [v2:10.30.1.221:6800/6694,v1:10.30.1.221:6801/6694] [v2:192.168.9.211:6800/6694,v1:192.168.9.211:6801/6694] exists,up 28488ddd-240a-4a21-a245-351472a7deaa
osd.5 up   in  weight 1 up_from 398 up_thru 454 down_at 390 last_clean_interval [279,389) [v2:10.30.1.222:6805/4521,v1:10.30.1.222:6807/4521] [v2:192.168.9.212:6803/4521,v1:192.168.9.212:6804/4521] exists,up cc8742ff-9d93-46b7-9fdb-60405ac09b6f
osd.6 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [273,426) [v2:10.30.1.223:6800/3929,v1:10.30.1.223:6801/3929] [v2:192.168.9.213:6800/3929,v1:192.168.9.213:6801/3929] exists,up 27910039-7ee6-4bf9-8d6b-06a0b8c3491a
osd.7 up   in  weight 1 up_from 353 up_thru 464 down_at 351 last_clean_interval [271,352) [v2:10.30.1.224:6800/3858,v1:10.30.1.224:6801/3858] [v2:192.168.9.214:6800/3858,v1:192.168.9.214:6801/3858] exists,up ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a
osd.8 down in  weight 1 up_from 420 up_thru 443 down_at 454 last_clean_interval [346,418) [v2:10.30.1.221:6814/4681,v1:10.30.1.221:6815/4681] [v2:192.168.9.211:6804/2004681,v1:192.168.9.211:6805/2004681] exists 4e8582b0-e06e-497d-8058-43e6d882ba6b
osd.9 up   in  weight 1 up_from 382 up_thru 461 down_at 377 last_clean_interval [280,375) [v2:10.30.1.222:6810/4374,v1:10.30.1.222:6811/4374] [v2:192.168.9.212:6808/4374,v1:192.168.9.212:6809/4374] exists,up baef9f86-2d3d-4f1a-8d1b-777034371968
osd.10 up   in  weight 1 up_from 430 up_thru 456 down_at 427 last_clean_interval [272,426) [v2:10.30.1.223:6808/3921,v1:10.30.1.223:6809/3921] [v2:192.168.9.213:6808/3921,v1:192.168.9.213:6809/3921] exists,up b6cd0b80-9ef1-42ad-b0c8-2f5b8d07da98
osd.11 up   in  weight 1 up_from 354 up_thru 458 down_at 351 last_clean_interval [278,352) [v2:10.30.1.224:6808/3859,v1:10.30.1.224:6809/3859] [v2:192.168.9.214:6808/3859,v1:192.168.9.214:6809/3859] exists,up 788897e9-1b8b-456d-b379-1c1c376e5bf0
osd.12 up   in  weight 1 up_from 420 up_thru 458 down_at 418 last_clean_interval [383,418) [v2:10.30.1.221:6810/6453,v1:10.30.1.221:6811/6453] [v2:192.168.9.211:6814/2006453,v1:192.168.9.211:6815/2006453] exists,up bf5765f0-cb28-4ef8-a92d-f7fe1b5f2a09
osd.13 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [274,426) [v2:10.30.1.223:6804/3922,v1:10.30.1.223:6805/3922] [v2:192.168.9.213:6804/3922,v1:192.168.9.213:6805/3922] exists,up 54a3b38f-e772-4e6f-bb6a-afadaf766a4e
osd.14 up   in  weight 1 up_from 353 up_thru 457 down_at 351 last_clean_interval [273,352) [v2:10.30.1.224:6812/3860,v1:10.30.1.224:6813/3860] [v2:192.168.9.214:6812/3860,v1:192.168.9.214:6813/3860] exists,up 2652556d-b2a9-4bce-a4a2-3039a80f3c29
osd.15 up   in  weight 1 up_from 443 up_thru 462 down_at 0 last_clean_interval [0,0) [v2:10.30.1.225:6800/26134,v1:10.30.1.225:6801/26134] [v2:192.168.9.215:6800/26134,v1:192.168.9.215:6801/26134] exists,up 229fac50-a084-4853-860e-7fbd90a0b2fe
pg_temp 6.25 [1,2,15,7]
pg_temp 6.36 [7,9,15,10]
pg_temp 6.39 [11,15,13,5]
pg_temp 7.7 [15,6,1,3]
pg_temp 7.9 [0,2,5]
pg_temp 7.c [15,11,2,1]
pg_temp 7.11 [9,7,10]
pg_temp 7.12 [14,9,15,10]
pg_temp 7.15 [3,12,2]
pg_temp 7.1b [0,3,5]
pg_temp 7.23 [15,14,1,6]
pg_temp 7.27 [9,12,6]
pg_temp 7.2a [0,14,10]
pg_temp 7.31 [10,14,15,9]
pg_temp 7.33 [3,2,12]
pg_temp 7.37 [11,2,0]
pg_temp 7.39 [3,9,15,13]
pg_temp 7.3b [5,0,13]
pg_temp 7.3d [1,15,14,2]
blacklist 10.30.1.221:6805/1539363681 expires 2020-02-05 20:59:28.979301
blacklist 10.30.1.221:6804/1539363681 expires 2020-02-05 20:59:28.979301
blacklist 10.30.1.221:6829/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.221:6828/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.222:6800/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6801/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6800/3620735873 expires 2020-02-05 19:03:42.652746
blacklist 10.30.1.222:6801/3620735873 expires 2020-02-05 19:03:42.652746
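Adding osd.15 triggers a rebalance, which is what the pg_temp entries above reflect. Progress can be watched with the commands below, and backfill can be throttled if client I/O suffers (the value 1 is only an example):
[root@ceph-host-01 ~]# ceph -s
[root@ceph-host-01 ~]# ceph osd df tree
[root@ceph-host-01 ~]# ceph tell osd.* injectargs '--osd-max-backfills 1'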
 
 
