References

https://blog.51cto.com/jerrymin/2140258

https://www.virtualtothecore.com/en/upgrade-ceph-cluster-luminous/
http://www.chinastor.com/distristor/11033L502017.html

Official documentation: https://docs.ceph.com/docs/master/install/upgrading-ceph/

Background

First, see the earlier posts on installing and testing the original cluster:
https://blog.51cto.com/jerrymin/2139045
https://blog.51cto.com/jerrymin/2139046
mon: ceph0, ceph2, ceph3
osd: ceph0, ceph1, ceph2, ceph3
rgw: ceph1
deploy: ceph0
I had previously tested a Jewel cluster on CentOS 7.5. As my understanding of Ceph deepened, I planned to run a relatively recent LTS release in production, and finally settled on Luminous.
The original plan was to redeploy Luminous from scratch, but since this is a test environment where the risk of data loss is small, I decided to try upgrading from Jewel to Luminous in place. The cluster was installed with yum, and from past experience the principle is simple: update the binaries, then restart the services. The documented upgrade steps looked straightforward, but during testing I found a pitfall that users in China are bound to hit: the upgrade automatically rewrites the yum repo to point at an overseas site, and with high network latency the connection is dropped after 300 seconds without data, which aborts the upgrade. For the remaining steps I therefore switched to a domestic mirror and worked out which RPMs to upgrade by hand; thanks to the dependency chain, a single yum install ceph ceph-radosgw upgrades all the Ceph packages, and restarting the related services completes the upgrade. In the end no data was lost and every feature worked normally.

Upgrade Process

Follow the official upgrade guide carefully, step by step, and note that the cluster must be in a healthy state before you start.
1. Log in and make sure the sortbitwise flag is set:

[root@idcv-ceph0 yum.repos.d]# ceph osd set sortbitwise
set sortbitwise

2. Set the noout flag to tell Ceph not to rebalance the cluster. This is optional, but recommended; otherwise, every time a node stops, Ceph tries to rebalance by copying its data to the other available nodes.

[root@idcv-ceph0 yum.repos.d]# ceph osd set noout
set noout
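
Both flags should now show up in the osdmap. A quick check (a sketch; the flags line matches the ceph -s output shown further below):

# the osdmap flags line should now list noout and sortbitwise
ceph osd dump | grep ^flags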

3. You can upgrade each node by hand, or use ceph-deploy to automate the upgrade. If you upgrade manually on CentOS, first edit the Ceph yum repo so it fetches the new Luminous release instead of the old Jewel one, which is a simple text substitution:

[root@idcv-ceph0 yum.repos.d]# sed -i 's/jewel/luminous/' /etc/yum.repos.d/ceph.repo
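
The substitution has to be applied on every node, not just the deploy node. A minimal sketch, assuming root SSH access between the nodes as used elsewhere in this setup:

# back up the repo file and switch it from jewel to luminous on all four nodes
for h in idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    ssh "$h" "cp /etc/yum.repos.d/ceph.repo /etc/yum.repos.d/ceph.repo.bak && sed -i 's/jewel/luminous/' /etc/yum.repos.d/ceph.repo"
done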

4. With ceph-deploy, a single command can upgrade the whole cluster automatically. First upgrade ceph-deploy itself:

[root@idcv-ceph0 yum.repos.d]# yum install ceph-deploy python-pushy
Running transaction
Updating : ceph-deploy-2.0.0-0.noarch 1/2 
Cleanup : ceph-deploy-1.5.39-0.noarch 2/2 
Verifying : ceph-deploy-2.0.0-0.noarch 1/2 
Verifying : ceph-deploy-1.5.39-0.noarch 2/2 
Updated:
ceph-deploy.noarch 0:2.0.0-0 
Complete!
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph-deploy
ceph-deploy-2.0.0-0.noarch

5. Once ceph-deploy is upgraded, the first thing to do is upgrade Ceph on the same machine. Before running this step, resolve the two errors described below: install the missing package first, then run ceph-deploy.
It was at this point that the official steps stopped working from inside China: the yum repo is switched back to the overseas site and the download speed can't keep up, so the rest of the upgrade combines the official steps with manual package installation.

[root@idcv-ceph0 yum.repos.d]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_WARN
noout flag(s) set
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e49: 4 osds: 4 up, 4 in
flags noout,sortbitwise,require_jewel_osds
pgmap v53288: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
296 GB used, 84824 MB / 379 GB avail
272 active+clean
[root@idcv-ceph0 yum.repos.d]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[root@idcv-ceph0 yum.repos.d]# cd /root/cluster/
[root@idcv-ceph0 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf
[root@idcv-ceph0 cluster]# ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f38ae7a1d40>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f38ae9d8ed8>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph0']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : luminous
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version luminous on cluster ceph hosts idcv-ceph0
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][INFO ] installing Ceph on idcv-ceph0
[idcv-ceph0][INFO ] Running command: yum clean all
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph0][DEBUG ] Cleaning up everything
[idcv-ceph0][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph0][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph0][INFO ] Running command: yum -y install epel-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Determining fastest mirrors
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][INFO ] Running command: yum -y install yum-plugin-priorities
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package yum-plugin-priorities-1.1.31-45.el7.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][DEBUG ] Configure Yum priorities to include obsoletes
[idcv-ceph0][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[idcv-ceph0][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[idcv-ceph0][INFO ] Running command: yum remove -y ceph-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Removing:
[idcv-ceph0][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Remove 1 Package
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Installed size: 535
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[idcv-ceph0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Removed:
[idcv-ceph0][DEBUG ] ceph-release.noarch 0:1-1.el7
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][INFO ] Running command: yum install -y https://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Examining /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[idcv-ceph0][DEBUG ] Marking /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm to be installed
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 544
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install 1 Package
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Total size: 544
[idcv-ceph0][DEBUG ] Installed size: 544
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Installed:
[idcv-ceph0][DEBUG ] ceph-release.noarch 0:1-1.el7
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[idcv-ceph0][WARNIN] altered ceph.repo priorities to contain: priority=1
[idcv-ceph0][INFO ] Running command: yum -y install ceph ceph-radosgw
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-osd = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mon = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mgr = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mds = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-selinux = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librgw2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libibverbs.so.1()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libceph-common.so.0()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 1:10.2.10-0.el7 for package: 1:ceph-base-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rbd = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs2 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rgw = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cephfs = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rados = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librbd1 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-prettytable for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs.so.2()(64bit) for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-mgr.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cherrypy for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: pyOpenSSL for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-pecan for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libibverbs.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: rdma-core(x86-64) = 15-7.el7_5 for package: libibverbs-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:rbd-nbd-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:libradosstriper1-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libcephfs1.x86_64 1:10.2.10-0.el7 will be obsoleted
[idcv-ceph0][DEBUG ] ---> Package libcephfs2.x86_64 2:12.2.5-0.el7 will be obsoleting
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package pyOpenSSL.x86_64 0:0.13.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-cherrypy.noarch 0:3.2.2-4.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-pecan.noarch 0:0.4.5-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webtest >= 1.3.1 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webob >= 1.2 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-simplegeneric >= 0.8 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-mako >= 0.4.0 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-singledispatch for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-prettytable.noarch 0:0.7.2-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rgw.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package rdma-core.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: pciutils for package: rdma-core-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package pciutils.x86_64 0:3.5.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-mako.noarch 0:0.8.1-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-beaker for package: python-mako-0.8.1-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-singledispatch.noarch 0:3.4.0.2-2.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webob.noarch 0:1.2.3-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webtest.noarch 0:1.3.4-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-beaker.noarch 0:1.5.4-10.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-paste for package: python-beaker-1.5.4-10.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-tempita for package: python-paste-1.7.5.1-9.20111221hg1498.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-tempita.noarch 0:0.5.1-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository
[idcv-ceph0][DEBUG ] Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ] libcephfs2 x86_64 2:12.2.5-0.el7 Ceph 432 k
[idcv-ceph0][DEBUG ] replacing libcephfs1.x86_64 1:10.2.10-0.el7
[idcv-ceph0][DEBUG ] Updating:
[idcv-ceph0][DEBUG ] ceph x86_64 2:12.2.5-0.el7 Ceph 3.0 k
[idcv-ceph0][DEBUG ] ceph-radosgw x86_64 2:12.2.5-0.el7 Ceph 3.8 M
[idcv-ceph0][DEBUG ] Installing for dependencies:
[idcv-ceph0][DEBUG ] ceph-mgr x86_64 2:12.2.5-0.el7 Ceph 3.6 M
[idcv-ceph0][DEBUG ] libibverbs x86_64 15-7.el7_5 updates 224 k
[idcv-ceph0][DEBUG ] pciutils x86_64 3.5.1-3.el7 base 93 k
[idcv-ceph0][DEBUG ] pyOpenSSL x86_64 0.13.1-3.el7 base 133 k
[idcv-ceph0][DEBUG ] python-beaker noarch 1.5.4-10.el7 base 80 k
[idcv-ceph0][DEBUG ] python-cherrypy noarch 3.2.2-4.el7 base 422 k
[idcv-ceph0][DEBUG ] python-mako noarch 0.8.1-2.el7 base 307 k
[idcv-ceph0][DEBUG ] python-paste noarch 1.7.5.1-9.20111221hg1498.el7 base 866 k
[idcv-ceph0][DEBUG ] python-pecan noarch 0.4.5-2.el7 epel 255 k
[idcv-ceph0][DEBUG ] python-prettytable noarch 0.7.2-3.el7 base 37 k
[idcv-ceph0][DEBUG ] python-rgw x86_64 2:12.2.5-0.el7 Ceph 73 k
[idcv-ceph0][DEBUG ] python-simplegeneric noarch 0.8-7.el7 epel 12 k
[idcv-ceph0][DEBUG ] python-singledispatch noarch 3.4.0.2-2.el7 epel 18 k
[idcv-ceph0][DEBUG ] python-tempita noarch 0.5.1-6.el7 base 33 k
[idcv-ceph0][DEBUG ] python-webob noarch 1.2.3-7.el7 base 202 k
[idcv-ceph0][DEBUG ] python-webtest noarch 1.3.4-6.el7 base 102 k
[idcv-ceph0][DEBUG ] rdma-core x86_64 15-7.el7_5 updates 48 k
[idcv-ceph0][DEBUG ] Updating for dependencies:
[idcv-ceph0][DEBUG ] ceph-base x86_64 2:12.2.5-0.el7 Ceph 3.9 M
[idcv-ceph0][DEBUG ] ceph-common x86_64 2:12.2.5-0.el7 Ceph 15 M
[idcv-ceph0][DEBUG ] ceph-mds x86_64 2:12.2.5-0.el7 Ceph 3.6 M
[idcv-ceph0][DEBUG ] ceph-mon x86_64 2:12.2.5-0.el7 Ceph 5.0 M
[idcv-ceph0][DEBUG ] ceph-osd x86_64 2:12.2.5-0.el7 Ceph 13 M
[idcv-ceph0][DEBUG ] ceph-selinux x86_64 2:12.2.5-0.el7 Ceph 20 k
[idcv-ceph0][DEBUG ] librados2 x86_64 2:12.2.5-0.el7 Ceph 2.9 M
[idcv-ceph0][DEBUG ] libradosstriper1 x86_64 2:12.2.5-0.el7 Ceph 330 k
[idcv-ceph0][DEBUG ] librbd1 x86_64 2:12.2.5-0.el7 Ceph 1.1 M
[idcv-ceph0][DEBUG ] librgw2 x86_64 2:12.2.5-0.el7 Ceph 1.7 M
[idcv-ceph0][DEBUG ] python-cephfs x86_64 2:12.2.5-0.el7 Ceph 82 k
[idcv-ceph0][DEBUG ] python-rados x86_64 2:12.2.5-0.el7 Ceph 172 k
[idcv-ceph0][DEBUG ] python-rbd x86_64 2:12.2.5-0.el7 Ceph 105 k
[idcv-ceph0][DEBUG ] rbd-nbd x86_64 2:12.2.5-0.el7 Ceph 81 k
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install 1 Package (+17 Dependent packages)
[idcv-ceph0][DEBUG ] Upgrade 2 Packages (+14 Dependent packages)
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Total download size: 57 M
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[idcv-ceph0][WARNIN] No data was received after 300 seconds, disconnecting...
[idcv-ceph0][INFO ] Running command: ceph --version
[idcv-ceph0][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

Error 1

Delta RPMs disabled because /usr/bin/applydeltarpm not installed.

Solution

[root@idcv-ceph0 cluster]# yum install deltarpm -y
Loaded plugins: fastestmirror, priorities
Existing lock /var/run/yum.pid: another copy is running as pid 90654.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 132 M RSS (523 MB VSZ)
Started: Tue Jul 10 16:15:59 2018 - 10:22 ago
State : Sleeping, pid: 90654
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 132 M RSS (523 MB VSZ)
Started: Tue Jul 10 16:15:59 2018 - 10:24 ago
State : Sleeping, pid: 90654
^C
Exiting on user cancel.
[root@idcv-ceph0 cluster]# kill -9 90654
[root@idcv-ceph0 cluster]# yum install deltarpm -y

Error 2

No data was received after 300 seconds, disconnecting...

Solution

The Ceph repo also needs to be switched to a domestic yum mirror such as Aliyun's, otherwise you get "No data was received after 300 seconds, disconnecting...":
[root@idcv-ceph0 cluster]# sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
However, ceph-deploy keeps switching the repo back to the overseas source. The workaround: check what is installed with rpm -qa | grep ceph, point the repo at the domestic mirror, and then upgrade manually with yum install (or with ceph-deploy install).
[root@idcv-ceph0 ceph]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[root@idcv-ceph0 yum.repos.d]# yum -y install ceph ceph-radosgw
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph
ceph-deploy-2.0.0-0.noarch
libcephfs2-12.2.5-0.el7.x86_64
python-cephfs-12.2.5-0.el7.x86_64
ceph-selinux-12.2.5-0.el7.x86_64
ceph-radosgw-12.2.5-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-base-12.2.5-0.el7.x86_64
ceph-mon-12.2.5-0.el7.x86_64
ceph-osd-12.2.5-0.el7.x86_64
ceph-12.2.5-0.el7.x86_64
ceph-common-12.2.5-0.el7.x86_64
ceph-mds-12.2.5-0.el7.x86_64
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 yum.repos.d]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_WARN
noout flag(s) set
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e49: 4 osds: 4 up, 4 in
flags noout,sortbitwise,require_jewel_osds
pgmap v53473: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
296 GB used, 84819 MB / 379 GB avail
272 active+clean
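
The other three nodes need the same Aliyun repo file and package upgrade before their daemons are restarted; step 8 below relies on this. A sketch, assuming root SSH from the deploy node:

# copy the domestic repo to the remaining nodes and upgrade their ceph packages
for h in idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    scp /etc/yum.repos.d/ceph.repo "$h":/etc/yum.repos.d/ceph.repo
    ssh "$h" "yum clean all && yum -y install ceph ceph-radosgw"
done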

6. On each monitor node, restart the mon service:

[root@idcv-ceph0 cluster]# systemctl restart ceph-mon.target
[root@idcv-ceph0 cluster]# systemctl status ceph-mon.target
● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
Active: active since Tue 2018-07-10 17:27:39 CST; 11s ago
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Starting ceph target allowing to start/stop all ceph-mon@.service instances at once.
[root@idcv-ceph0 cluster]# ceph -v
ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
[root@idcv-ceph0 cluster]# ceph -s
cluster:
id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health: HEALTH_WARN
too many PGs per OSD (204 > max 200)
noout flag(s) set
services:
mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
mgr: no daemons active
osd: 4 osds: 4 up, 4 in
flags noout
data:
pools: 12 pools, 272 pgs
objects: 1785k objects, 97496 MB
usage: 296 GB used, 84817 MB / 379 GB avail
pgs: 272 active+clean
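
The same restart has to be run on the remaining monitors, idcv-ceph2 and idcv-ceph3; doing them one at a time keeps the quorum alive. A sketch:

# idcv-ceph0 was restarted above; roll through the other mons one by one
for h in idcv-ceph2 idcv-ceph3; do
    ssh "$h" systemctl restart ceph-mon.target
    sleep 30   # give the quorum time to re-form before the next restart
done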

7. Kraken introduced a Ceph manager daemon. Since Luminous the ceph-mgr process is required for normal operation, whereas in Kraken it was optional. My Jewel cluster therefore has no manager, and we must deploy one now. (I hit an error here and fixed it by running: ceph-deploy -v gatherkeys ceph03)

[root@idcv-ceph0 cluster]# ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('idcv-ceph0', 'idcv-ceph0'), ('idcv-ceph1', 'idcv-ceph1'), ('idcv-ceph2', 'idcv-ceph2'), ('idcv-ceph3', 'idcv-ceph3')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f229723e320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f2297b16a28>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts idcv-ceph0:idcv-ceph0 idcv-ceph1:idcv-ceph1 idcv-ceph2:idcv-ceph2 idcv-ceph3:idcv-ceph3
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph0][DEBUG ] create a keyring file
[idcv-ceph0][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph0][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph0/keyring
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mgr@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mgr@idcv-ceph0
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph1/keyring
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph1][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph2][DEBUG ] create a keyring file
[idcv-ceph2][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph2/keyring
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph3][DEBUG ] create a keyring file
[idcv-ceph3][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph3/keyring
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[root@idcv-ceph0 cluster]# ceph -s
cluster:
id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health: HEALTH_WARN
too many PGs per OSD (204 > max 200)
noout flag(s) set
services:
mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
mgr: idcv-ceph0(active), standbys: idcv-ceph1, idcv-ceph2, idcv-ceph3
osd: 4 osds: 4 up, 4 in
flags noout
data:
pools: 12 pools, 272 pgs
objects: 1785k objects, 97496 MB
usage: 296 GB used, 84816 MB / 379 GB avail
pgs: 272 active+clean

8. Restart the OSDs.
The prerequisite is that every node has already switched to the domestic Luminous yum repo as described above and has run yum -y install ceph ceph-radosgw, which upgrades the binaries.

[root@idcv-ceph0 ceph]# systemctl restart ceph-osd.target
[root@idcv-ceph0 ceph]# ceph versions
{
"mon": {
"ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 3
},
"mgr": {
"ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
},
"osd": {
"ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
},
"mds": {},
"overall": {
"ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 11
}
}
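
The target has to be restarted on every OSD host; on a production cluster you would roll through them one at a time and wait for the placement groups to settle in between. A sketch tied to this cluster's 272 PGs (the count from the ceph -s output above):

# restart the OSDs host by host, waiting until all 272 PGs are active+clean again
for h in idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    ssh "$h" systemctl restart ceph-osd.target
    until ceph -s | grep -q '272 active+clean'; do sleep 10; done
done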

9. Now that all components are on 12.2.5, we can forbid pre-Luminous OSDs and turn on the Luminous-only functionality:

[root@idcv-ceph0 ceph]# ceph osd require-osd-release luminous
recovery_deletes is set

This also means that only Luminous nodes can join the cluster from now on.
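
The new requirement can be verified in the osdmap:

# the osdmap should now report require_osd_release luminous
ceph osd dump | grep require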

10. The rgw service also needs to be restarted, on ceph1:

[root@idcv-ceph1 system]# systemctl restart ceph-radosgw.target
[root@idcv-ceph1 system]# systemctl status ceph-radosgw.target
● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
Active: active since Tue 2018-07-10 18:02:25 CST; 6s ago
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Reached target ceph target allowing to start/stop all ceph-radosgw@.service instances at once.
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Starting ceph target allowing to start/stop all ceph-radosgw@.service instances at once.
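
A quick smoke test of the gateway; this assumes rgw still listens on civetweb's default port 7480 (adjust if rgw_frontends sets another port):

# an anonymous request to the gateway root should return ListAllMyBucketsResult XML
curl -s http://idcv-ceph1:7480/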

11. Enable the dashboard

[root@idcv-ceph0 ceph]# rpm -qa |grep mgr
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 ceph]# ceph mgr module enable dashboard
[root@idcv-ceph0 ceph]# ceph mgr dump
{
"epoch": 53,
"active_gid": 34146,
"active_name": "idcv-ceph0",
"active_addr": "172.20.1.138:6804/95951",
"available": true,
"standbys": [
{
"gid": 44129,
"name": "idcv-ceph2",
"available_modules": [
"balancer",
"dashboard",
"influx",
"localpool",
"prometheus",
"restful",
"selftest",
"status",
"zabbix"
]
},
{
"gid": 44134,
"name": "idcv-ceph1",
"available_modules": [
"balancer",
"dashboard",
"influx",
"localpool",
"prometheus",
"restful",
"selftest",
"status",
"zabbix"
]
},
{
"gid": 44135,
"name": "idcv-ceph3",
"available_modules": [
"balancer",
"dashboard",
"influx",
"localpool",
"prometheus",
"restful",
"selftest",
"status",
"zabbix"
]
}
],
"modules": [
"balancer",
"dashboard",
"restful",
"status"
],
"available_modules": [
"balancer",
"dashboard",
"influx",
"localpool",
"prometheus",
"restful",
"selftest",
"status",
"zabbix"
],
"services": {
"dashboard": "http://idcv-ceph0:7000/"
}
}

Browse to http://172.20.1.138:7000 to reach the dashboard.
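
By default the Luminous dashboard binds to every address on port 7000. To pin it to a specific address, the module reads its settings from config-keys; a sketch, using the active mgr idcv-ceph0 from the output above (the mgr must be restarted to pick the change up):

ceph config-key set mgr/dashboard/server_addr 172.20.1.138
ceph config-key set mgr/dashboard/server_port 7000
systemctl restart ceph-mgr@idcv-ceph0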

12. As the last step, unset noout so that from now on the cluster can rebalance itself when needed:

[root@idcv-ceph0 ceph]# ceph osd unset noout
noout is unset
[root@idcv-ceph0 ceph]# ceph -s
cluster:
id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health: HEALTH_WARN
application not enabled on 1 pool(s)
services:
mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
osd: 4 osds: 4 up, 4 in
rgw: 1 daemon active
data:
pools: 12 pools, 272 pgs
objects: 1785k objects, 97496 MB
usage: 296 GB used, 84830 MB / 379 GB avail
pgs: 272 active+clean
io:
client: 0 B/s rd, 0 op/s rd, 0 op/s wr

Error

health: HEALTH_WARN
application not enabled on 1 pool(s)

Solution

[root@idcv-ceph0 ceph]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'test_pool'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[root@idcv-ceph0 ceph]# ceph osd pool application enable test_pool
Invalid command: missing required parameter app(<string(goodchars [A-Za-z0-9-_.])>)
osd pool application enable <poolname> <app> {--yes-i-really-mean-it} : enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
Error EINVAL: invalid command
[root@idcv-ceph0 ceph]# ceph osd pool application enable test_pool rbd
enabled application 'rbd' on pool 'test_pool'
[root@idcv-ceph0 ceph]# ceph -s
cluster:
id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health: HEALTH_OK
services:
mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
osd: 4 osds: 4 up, 4 in
rgw: 1 daemon active
data:
pools: 12 pools, 272 pgs
objects: 1785k objects, 97496 MB
usage: 296 GB used, 84829 MB / 379 GB avail
pgs: 272 active+clean
io:
client: 0 B/s rd, 0 op/s rd, 0 op/s wr
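
To see which application every pool is tagged with, and catch any other untagged pools, the Luminous pool-application commands can be looped over all pools; a sketch:

# print the application metadata for each pool
for p in $(ceph osd pool ls); do
    echo -n "$p: "
    ceph osd pool application get "$p"
done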

Summary

The whole upgrade took a bit over an hour, most of which went into working around the mirror situation in China; apart from that, the upgrade itself is fairly simple. Luminous also adds the mgr daemon and the dashboard. After the upgrade I tested the object storage and block storage features, and both worked normally.
