In this example the Ceph Luminous (L) cluster uses FileStore as the OSD backend, not BlueStore.
Step 1. Check the device classes; there is only one class, hdd. Luminous added a new property to every OSD: the device class. By default an OSD automatically sets its device class to hdd, ssd, or nvme (if it is not already set), based on the hardware properties exposed by the Linux kernel. The device classes are listed in ceph osd tree. (This lab environment has no SSD disks; in production, SSDs are detected automatically and the ssd class is created for them, so steps 2 through 4 are not needed.) Cluster topology before the change:
[root@ceph1 ceph-install]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.76163 root default
-9 0.25388 rack rack01
-3 0.25388 host ceph1
0 hdd 0.07809 osd.0 up 1.00000 1.00000
1 hdd 0.07809 osd.1 up 1.00000 1.00000
6 hdd 0.09769 osd.6 up 1.00000 1.00000
-10 0.25388 rack rack02
-5 0.25388 host ceph2
2 hdd 0.07809 osd.2 up 1.00000 1.00000
3 hdd 0.07809 osd.3 up 1.00000 1.00000
7 hdd 0.09769 osd.7 up 1.00000 1.00000
-11 0.25388 rack rack03
-7 0.25388 host ceph3
4 hdd 0.07809 osd.4 up 1.00000 1.00000
5 hdd 0.07809 osd.5 up 1.00000 1.00000
8 hdd 0.09769 osd.8 up 1.00000 1.00000
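If you just want to confirm which OSDs belong to a given class before touching anything, the class listing commands are enough (a quick check, assuming a Luminous or newer cluster; the exact output depends on your topology):
ceph osd crush class ls          # list every device class known to the cluster
ceph osd crush class ls-osd hdd  # list the OSD ids currently assigned to the hdd class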
Step 2. Remove osd.6, osd.7, and osd.8 from the hdd class
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.6
done removing class of osd(s): 6
[root@ceph1 ceph-install]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.76163 root default
-9 0.25388 rack rack01
-3 0.25388 host ceph1
6 0.09769 osd.6 up 1.00000 1.00000
0 hdd 0.07809 osd.0 up 1.00000 1.00000
1 hdd 0.07809 osd.1 up 1.00000 1.00000
-10 0.25388 rack rack02
-5 0.25388 host ceph2
2 hdd 0.07809 osd.2 up 1.00000 1.00000
3 hdd 0.07809 osd.3 up 1.00000 1.00000
7 hdd 0.09769 osd.7 up 1.00000 1.00000
-11 0.25388 rack rack03
-7 0.25388 host ceph3
4 hdd 0.07809 osd.4 up 1.00000 1.00000
5 hdd 0.07809 osd.5 up 1.00000 1.00000
8 hdd 0.09769 osd.8 up 1.00000 1.00000
 
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.7
[root@ceph1 ceph-install]# ceph osd crush rm-device-class osd.8
Step 3. Add osd.6, osd.7, and osd.8 to the ssd class
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.6
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.7
[root@ceph1 ceph-install]# ceph osd crush set-device-class ssd osd.8
 
[root@ceph1 ceph]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.76163 root default
-9 0.25388 rack rack01
-3 0.25388 host ceph1
0 hdd 0.07809 osd.0 up 1.00000 1.00000
1 hdd 0.07809 osd.1 up 1.00000 1.00000
6 ssd 0.09769 osd.6 up 1.00000 1.00000
-10 0.25388 rack rack02
-5 0.25388 host ceph2
2 hdd 0.07809 osd.2 up 1.00000 1.00000
3 hdd 0.07809 osd.3 up 1.00000 1.00000
7 ssd 0.09769 osd.7 up 1.00000 1.00000
-11 0.25388 rack rack03
-7 0.25388 host ceph3
4 hdd 0.07809 osd.4 up 1.00000 1.00000
5 hdd 0.07809 osd.5 up 1.00000 1.00000
8 ssd 0.09769 osd.8 up 1.00000 1.00000
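Steps 2 and 3 can also be collapsed into a small shell loop when several OSDs have to be reclassified at once. This is only a sketch; the ids 6, 7 and 8 are the OSDs used in this example, adjust them to your own layout:
for id in 6 7 8; do
    ceph osd crush rm-device-class osd.$id       # the old class must be removed before a new one can be set
    ceph osd crush set-device-class ssd osd.$id  # assign the OSD to the ssd class
done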
Step 4. Check the device classes again; there are now two classes
[root@ceph1 ceph]# ceph osd crush class ls
[
"hdd",
"ssd"
]
Step 5. Create a CRUSH rule for the ssd class
[root@ceph1 ceph]# ceph osd crush rule create-replicated rule-ssd default host ssd
[root@ceph1 ceph]# ceph osd crush rule ls
replicated_rule
rule-ssd
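To see what the new rule actually selects, it can be dumped. On a Luminous cluster the dump should show the rule taking from the ssd shadow tree under root default (typically item_name default~ssd), which is how CRUSH restricts the rule to ssd-class OSDs; the exact JSON fields may differ slightly between versions:
[root@ceph1 ceph]# ceph osd crush rule dump rule-ssd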
Step 6. Create a pool that uses the rule-ssd rule:
[root@ceph1 ceph]# ceph osd pool create ssdpool 64 64 rule-ssd
Check the pool:
[root@ceph1 ceph]# ceph osd pool ls detail | grep ssdpool
pool 15 'ssdpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 316 flags hashpspool stripe_width 0
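Since Luminous warns about pools that have no application tag, and ssdpool is going to be an RBD pool for Cinder, it is worth enabling the rbd application on it and spot-checking that objects map onto the ssd OSDs (6, 7 and 8). The object name testobj below is just a throwaway example:
[root@ceph1 ceph]# ceph osd pool application enable ssdpool rbd
[root@ceph1 ceph]# ceph osd map ssdpool testobj     # the acting set should contain only osd.6, osd.7 and osd.8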
Update the client.cinder capabilities:
 
[root@ceph1 ceph]# ceph auth caps client.cinder mon 'allow r' osd 'allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
List the auth entries:
[root@ceph1 ceph]# ceph auth list
installed auth entries:
 
mds.ceph1
key: AQDvL21d035tKhAAg6jY/iSoo511H+Psbp8xTw==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.0
key: AQBzKm1dmT3FNhAAmsEpJv9I6CkYmD2Kfk3Wrw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQCxKm1dfLZdIBAAVD/B9RdlTr3ZW7d39PuZ4g==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQCKK21dKPAbFhAA8yQ8v3/+kII5gAsNga/M+w==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.3
key: AQCtK21dHMZiBBAAoz7thWgs4sFHgPBTkd4pGw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.4
key: AQDEK21dKL4XFhAAsx39rOmszOtVHfx/W/UMQQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.5
key: AQDZK21duaoQBBAAB1Vu1c3L8JNGj6heq6p2yw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.6
key: AQAqG7Nd1dvbGxAA/H2w7FAVSWI2wSaU2TSCOw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.7
key: AQCnIrRdAJHSFRAA+oDUal2jQR5Z3OxlB2UjZw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.8
key: AQC8IrRdJb8ZMhAAm1SSjGFhl2PuwwpGaIdouQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQC6mmJdfBzyHhAAE1GazlHqH2uD35vpL6Do1w==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQC7mmJdCG1wJBAAVmRYWiDqFSRCHVQhEUdGqQ==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQC8mmJdVUCSIhAA8foLa1zmMmzNyBAkregvBw==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQC9mmJd+n5JIxAAYpyAJRVbRnZBJBdpSPCAAA==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQC+mmJdC+mxIBAAVVDJiKRyS+4vdX2r8nMOLA==
caps: [mon] allow profile bootstrap-rgw
client.cinder
key: AQDOdW5do2jzEhAA/v/VYEBHOUk440mpP6GMBg==
caps: [mon] allow r
caps: [osd] allow rwx pool=ssdpool,allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images
client.glance
key: AQAVdm5dojfsLxAAAtt+eX7psQC7pXpisqsvBg==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
mgr.ceph1
key: AQAjMG1deO05IxAALhbrB66XWKVCjWXraUwL0w==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
mgr.ceph2
key: AQAkMG1dhl5COBAALHSHl0MXA5xvrQCCXzBR0g==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
mgr.ceph3
key: AQAmMG1dJ1fJFBAAF0is+UiuKZjwGRkBWg6W4A==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
Step 7. Modify the OpenStack cinder-volume configuration and create a volume
Add the following to /etc/cinder/cinder.conf so that Cinder uses two Ceph pools, one backed by hdd (volumes) and one by ssd (ssdpool):
[DEFAULT]
enabled_backends = lvm,ceph,ssd
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name=ceph
[ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ssdpool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = fcb30733-4a1a-4635-ba07-9d89cf54a530
volume_backend_name=ssd
 
Restart the cinder-volume service:
systemctl restart openstack-cinder-volume.service
Create a new Cinder volume type:
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd
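Before creating volumes it is worth double-checking that the new type is wired to the ssd backend; either of the following should show volume_backend_name=ssd in the extra specs (a quick verification, output omitted here):
[root@controller cinder]# openstack volume type show ssd
[root@controller cinder]# cinder extra-specs-list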
Check whether the cinder-volume backends started successfully:
[root@controller cinder]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2019-10-26T15:16:16.000000 |
| cinder-volume | block1@lvm | nova | enabled | down | 2019-03-03T09:20:58.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2019-10-26T15:16:19.000000 |
| cinder-volume | controller@ceph | nova | enabled | up | 2019-10-26T15:16:19.000000 |
| cinder-volume | controller@ssd | nova | enabled | up | 2019-10-26T15:16:14.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+
Create a volume:
[root@controller cinder]# openstack volume create --type ssd --size 1 disk20191026
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-10-26T15:17:46.000000 |
| description | None |
| encrypted | False |
| id | ecff02cc-7d5c-42cc-986e-06e9552426db |
| migration_status | None |
| multiattach | False |
| name | disk20191026 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | ssd |
| updated_at | None |
| user_id | f8b392b9ca95447c91913007d05ccc4f |
+---------------------+--------------------------------------+
 
[root@controller cinder]# openstack volume list | grep disk20191026
| ecff02cc-7d5c-42cc-986e-06e9552426db | disk20191026 | available | 1 | |
On the Ceph side, verify that the volume was created in ssdpool:
[root@ceph1 ceph]# rbd -p ssdpool ls
volume-ecff02cc-7d5c-42cc-986e-06e9552426db
The image name carries the same UUID as the volume created above, confirming that the volume landed in ssdpool.
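For more detail than the plain listing, the image itself can be inspected; this simply confirms the 1 GiB size and standard RBD metadata of the volume created above:
[root@ceph1 ceph]# rbd -p ssdpool info volume-ecff02cc-7d5c-42cc-986e-06e9552426db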
 
Notes:
 
The following commands are used when pushing a modified Ceph configuration to the nodes and creating new OSDs:
ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3
ceph-deploy osd create ceph1 --data /dev/sde --journal /dev/sdf1
 
The ceph.conf used in this example:
[root@ceph1 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 6bbab2f3-f90c-439d-86d7-9c0f3603303c
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 172.16.3.61,172.16.3.62,172.16.3.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 10
mon clock drift warn backoff = 30
osd pool default pg num = 64
osd pool default pgp num = 64
osd_crush_update_on_start = false
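The osd_crush_update_on_start = false setting matters for this setup: it stops OSDs from resetting their own CRUSH location (plain host/root, with no rack) on every restart, which would otherwise undo the manual rack layout built above. To confirm the value an OSD is actually running with, it can be queried over the admin socket on the node hosting that OSD (a sketch; osd.0 is just an example):
[root@ceph1 ceph]# ceph daemon osd.0 config get osd_crush_update_on_start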
