7. Ceph Advanced Topics - RBD Block Device Trash, Snapshots, and Clones
RBD Trash
Official docs: https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/
Instead of deleting an image outright, you can move it into the trash with an expiration policy: once you are sure you no longer need it you delete it for good, and if you do need it again you can restore it from the trash. That is the basic idea behind the trash mechanism.
Create an image
[root@ceph-node01 ~]# rbd create ceph-demo/ceph-trash.img --size 10G
[root@ceph-node01 ~]# rbd -p ceph-demo ls
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
[root@ceph-node01 ~]# rbd info ceph-demo/ceph-trash.img
rbd image 'ceph-trash.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 392c7a4143a46
block_name_prefix: rbd_data.392c7a4143a46
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Oct 16 23:11:09 2020
access_timestamp: Fri Oct 16 23:11:09 2020
modify_timestamp: Fri Oct 16 23:11:09 2020
[root@ceph-node01 ~]#
Deleting without using the trash
[root@ceph-node01 ~]# rbd remove -p ceph-demo ceph-trash.img
Removing image: 100% complete...done.
[root@ceph-node01 ~]#
Verify the direct deletion
[root@ceph-node01 ~]# rbd -p ceph-demo ls
demo.img
rbd-demo.img
rbd-demo2.img
[root@ceph-node01 ~]# rbd -p ceph-demo trash ls
[root@ceph-node01 ~]#
With a direct delete the image is gone immediately: it no longer exists in the pool, and it never shows up in the trash.
Deleting via the trash
[root@ceph-node01 ~]# rbd create ceph-demo/ceph-trash.img --size 10G
[root@ceph-node01 ~]# rbd -p ceph-demo ls
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
[root@ceph-node01 ~]# rbd trash move ceph-demo/ceph-trash.img --expires-at 20201020
[root@ceph-node01 ~]# rbd -p ceph-demo ls
demo.img
rbd-demo.img
rbd-demo2.img
[root@ceph-node01 ~]# rbd -p ceph-demo trash ls
3931bebcf0579 ceph-trash.img
[root@ceph-node01 ~]#
Note that how long the image stays in the trash is governed by the expiration time set above. The expiration only marks when the entry becomes eligible for permanent deletion; it still has to be purged (manually with rbd trash purge, or via a purge schedule in newer releases) rather than disappearing on its own.
Restoring a deleted block device
[root@ceph-node01 ~]# rbd trash restore -p ceph-demo 3931bebcf0579
[root@ceph-node01 ~]# rbd -p ceph-demo ls
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
[root@ceph-node01 ~]#
This way an accidental deletion can be undone.
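Restoring is one half of the workflow; emptying the trash is the other. Had we wanted to remove the entry permanently instead of restoring it, a minimal sketch would look like this (the ID is the one rbd trash ls printed above; removing an entry before its expiration time typically requires --force, and the purge-schedule feature only exists in newer releases):
# Permanently delete a single trash entry by its ID
rbd trash rm ceph-demo/3931bebcf0579
# Or purge every expired entry in the pool in one go
rbd trash purge ceph-demo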
RBD Snapshots
Official docs: https://docs.ceph.com/en/latest/rbd/rbd-snapshot/
A snapshot is a read-only logical copy of an image at a particular point in time: a checkpoint. One of the advanced features of Ceph block devices is that you can create snapshots of images to retain point-in-time state history. Ceph also supports snapshot layering, which allows you to clone images (e.g., a VM image) quickly and easily. Ceph block device snapshots are managed using the rbd command and multiple higher level interfaces, including QEMU, libvirt, OpenStack and CloudStack.
Important
To use RBD snapshots, you must have a running Ceph cluster.
Note
Because RBD does not know about any filesystem within an image (volume), snapshots are not crash-consistent unless they are coordinated within the mounting (attaching) operating system. We therefore recommend that you pause or stop I/O before taking a snapshot. If the volume contains a filesystem, it must be in an internally consistent state before taking a snapshot. Snapshots taken at inconsistent points may need a fsck pass before subsequent mounting. To stop I/O you can use fsfreeze command. See fsfreeze(8) man page for more details. For virtual machines, qemu-guest-agent can be used to automatically freeze file systems when creating a snapshot.
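To make the advice above concrete, freezing I/O around snapshot creation might look like the following sketch (the mount point /media and the image ceph-demo/rbd-test.img are simply the ones used later in this post; the snapshot name is made up):
# Quiesce the filesystem so the snapshot is internally consistent
fsfreeze --freeze /media
# Take the snapshot while writes are blocked
rbd snap create ceph-demo/rbd-test.img@snap_consistent
# Let I/O continue
fsfreeze --unfreeze /media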
In essence, a snapshot is simply a point-in-time backup mechanism.
Create a block device
[root@ceph-node01 ~]# rbd create ceph-demo/rbd-test.img --image-feature layering --size 10G
[root@ceph-node01 ~]# rbd -p ceph-demo ls
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
rbd-test.img
[root@ceph-node01 ~]#
View the block device's information
[root@ceph-node01 ~]# rbd info -p ceph-demo rbd-test.img
rbd image 'rbd-test.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3936fc8add13c
block_name_prefix: rbd_data.3936fc8add13c
format: 2
features: layering
op_features:
flags:
create_timestamp: Fri Oct 16 23:52:31 2020
access_timestamp: Fri Oct 16 23:52:31 2020
modify_timestamp: Fri Oct 16 23:52:31 2020
[root@ceph-node01 ~]#
Map and mount
[root@ceph-node01 ~]# rbd device map ceph-demo/rbd-test.img
/dev/rbd1
[root@ceph-node01 ~]# mkfs.ext4 /dev/rbd1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: 完成
文件系统标签=
OS type: Linux
块大小=4096 (log=2)
分块大小=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
第一个数据块=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: 完成
正在写入inode表: 完成
Creating journal (32768 blocks): 完成
Writing superblocks and filesystem accounting information: 完成
[root@ceph-node01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 50G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 49.5G 0 part
├─centos-root 253:0 0 44.5G 0 lvm /
└─centos-swap 253:1 0 5G 0 lvm [SWAP]
vdb 252:16 0 100G 0 disk
└─ceph--48dc8bff--6f69--41e1--9bfa--b188edc8f419-osd--block--909ff4ad--d05b--476f--b7e3--56d9e77004b4
253:2 0 100G 0 lvm
vdc 252:32 0 100G 0 disk
└─ceph--6f6493a0--ecaa--45de--8dcb--04f5b6f8e957-osd--block--ded9a04f--71b5--4088--87c3--0e71604c7d75
253:3 0 100G 0 lvm
rbd0 251:0 0 20G 0 disk /mnt/rbd-demo
rbd1 251:16 0 10G 0 disk
[root@ceph-node01 ~]#
[root@ceph-node01 ~]# mount /dev/rbd1 /media/
[root@ceph-node01 ~]# cd /media/
[root@ceph-node01 media]# ls
lost+found
[root@ceph-node01 media]# echo `date` > file.log
[root@ceph-node01 media]# ls
file.log lost+found
[root@ceph-node01 media]# sync
[root@ceph-node01 media]#
sync flushes the page cache to disk so that the current state is persisted; the snapshot below is taken against this state.
Create a snapshot
[root@ceph-node01 media]# rbd snap create ceph-demo/rbd-test.img@snap_20201011
[root@ceph-node01 media]# rbd snap ls ceph-demo/rbd-test.img
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 snap_20201011 10 GiB Sat Oct 17 00:02:00 2020
[root@ceph-node01 media]#
Here you can see the snapshot's name and the time it was created.
Restore from a snapshot
# 1. Simulate deleting a file
[root@ceph-node01 media]# rm -rf file.log
[root@ceph-node01 media]#
# 2. Look up the snapshot name of the image to restore
[root@ceph-node01 ~]# rbd snap ls ceph-demo/rbd-test.img
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 snap_20201011 10 GiB Sat Oct 17 00:02:00 2020
# 3. Roll back to the snapshot
[root@ceph-node01 ~]# rbd snap rollback ceph-demo/rbd-test.img@snap_20201011
Rolling back to snapshot: 100% complete...done.
# 4. Note that the filesystem must be remounted
[root@ceph-node01 ~]# umount /media/
[root@ceph-node01 ~]# mount /dev/rbd1 /media/
[root@ceph-node01 ~]# cd /media/
# 5. Verify that the file has been restored
[root@ceph-node01 media]# ls
file.log lost+found
[root@ceph-node01 media]# cat file.log
2020年 10月 16日 星期五 23:58:43 EDT
[root@ceph-node01 media]#
One purpose of snapshots is to back up an image: before performing a risky operation you capture the image's current state, and if the data ends up corrupted you can roll it back.
Delete snapshots
# 1. Delete a single snapshot
[root@ceph-node01 media]# rbd snap remove ceph-demo/rbd-test.img@snap_20201011
Removing snap: 100% complete...done.
[root@ceph-node01 media]#
# 2. Verify deleting all snapshots at once
[root@ceph-node01 media]# rbd snap create ceph-demo/rbd-test.img@snap_20201011
[root@ceph-node01 media]# rbd snap create ceph-demo/rbd-test.img@snap_20201012
[root@ceph-node01 media]# rbd snap create ceph-demo/rbd-test.img@snap_20201013
[root@ceph-node01 media]# rbd snap ls ceph-demo/rbd-test.img
SNAPID NAME SIZE PROTECTED TIMESTAMP
6 snap_20201011 10 GiB Sat Oct 17 00:17:58 2020
7 snap_20201012 10 GiB Sat Oct 17 00:18:00 2020
8 snap_20201013 10 GiB Sat Oct 17 00:18:04 2020
[root@ceph-node01 media]#
[root@ceph-node01 media]# rbd snap purge ceph-demo/rbd-test.img
Removing all snapshots: 100% complete...done.
[root@ceph-node01 media]# rbd snap ls ceph-demo/rbd-test.img
[root@ceph-node01 media]#
Image cloning
Ceph supports the ability to create many copy-on-write (COW) clones of a block device snapshot. Snapshot layering enables Ceph block device clients to create images very quickly. For example, you might create a block device image with a Linux VM written to it; then, snapshot the image, protect the snapshot, and create as many copy-on-write clones as you like. A snapshot is read-only, so cloning a snapshot simplifies semantics, making it possible to create clones rapidly.
The terms “parent” and “child” refer to a Ceph block device snapshot (parent), and the corresponding image cloned from the snapshot (child). These terms are important for the command line usage below.
Each cloned image (child) stores a reference to its parent image, which enables the cloned image to open the parent snapshot and read it.
A COW clone of a snapshot behaves exactly like any other Ceph block device image. You can read to, write from, clone, and resize cloned images. There are no special restrictions with cloned images. However, the copy-on-write clone of a snapshot depends on the snapshot, so you MUST protect the snapshot before you clone it. The following diagram depicts the process.
Ceph only supports cloning of RBD format 2 images (i.e., created with rbd create --image-format 2). The kernel client supports cloned images beginning with the 3.10 release.
GETTING STARTED WITH LAYERING
Ceph block device layering is a simple process. You must have an image. You must create a snapshot of the image. You must protect the snapshot. Once you have performed these steps, you can begin cloning the snapshot.
The cloned image has a reference to the parent snapshot, and includes the pool ID, image ID and snapshot ID. The inclusion of the pool ID means that you may clone snapshots from one pool to images in another pool.
1. Image Template: A common use case for block device layering is to create a master image and a snapshot that serves as a template for clones. For example, a user may create an image for a Linux distribution (e.g., Ubuntu 12.04), and create a snapshot for it. Periodically, the user may update the image and create a new snapshot (e.g., sudo apt-get update, sudo apt-get upgrade, sudo apt-get dist-upgrade followed by rbd snap create). As the image matures, the user can clone any one of the snapshots.
2. Extended Template: A more advanced use case includes extending a template image that provides more information than a base image. For example, a user may clone an image (e.g., a VM template) and install other software (e.g., a database, a content management system, an analytics system, etc.) and then snapshot the extended image, which itself may be updated just like the base image.
3. Template Pool: One way to use block device layering is to create a pool that contains master images that act as templates, and snapshots of those templates. You may then extend read-only privileges to users so that they may clone the snapshots without the ability to write or execute within the pool (a sketch of such caps follows this list).
4. Image Migration/Recovery: One way to use block device layering is to migrate or recover data from one pool into another pool.
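For the Template Pool use case, read-only access can be granted with cephx caps. A rough sketch — the pool name templates, the user name client.clone-user, and the clone target pool kube are assumptions, not something configured in this post:
# Read-only caps on the template pool, normal rbd caps on the pool that will hold the clones
ceph auth get-or-create client.clone-user \
    mon 'profile rbd' \
    osd 'profile rbd-read-only pool=templates, profile rbd pool=kube' \
    -o /etc/ceph/ceph.client.clone-user.keyring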
Image clone example
# 1. List the rbd block devices
[root@ceph-node01 ~]# rbd ls -p ceph-demo rbd-test.img
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
rbd-test.img
# 2. List snapshots
[root@ceph-node01 ~]# rbd snap ls -p ceph-demo rbd-test.img
# 3. Create a snapshot
[root@ceph-node01 ~]# rbd snap create ceph-demo/rbd-test.img@template
[root@ceph-node01 ~]# rbd snap ls -p ceph-demo rbd-test.img
SNAPID NAME SIZE PROTECTED TIMESTAMP
12 template 10 GiB Sat Oct 17 03:23:22 2020
# 4. Protect the snapshot so it cannot be deleted by accident
[root@ceph-node01 ~]# rbd snap protect ceph-demo/rbd-test.img@template
# 5. Deletion test: the protected snapshot cannot be removed at all
[root@ceph-node01 ~]# rbd snap rm ceph-demo/rbd-test.img@template
Removing snap: 2020-10-17 03:23:59.647 7fb6e3844c80 -1 librbd::Operations: snapshot is protected
0% complete...failed.
rbd: snapshot 'template' is protected from removal.
# 6. Clone images from the template
[root@ceph-node01 ~]# rbd clone ceph-demo/rbd-test.img@template ceph-demo/vm1-clone.img
[root@ceph-node01 ~]# rbd clone ceph-demo/rbd-test.img@template ceph-demo/vm2-clone.img
[root@ceph-node01 ~]# rbd clone ceph-demo/rbd-test.img@template ceph-demo/vm3-clone.img
[root@ceph-node01 ~]# rbd -p ceph-demo ls
ceph-trash.img
demo.img
rbd-demo.img
rbd-demo2.img
rbd-test.img
vm1-clone.img
vm2-clone.img
vm3-clone.img
# 7. Inspect a cloned image; the inherited parent information is visible
[root@ceph-node01 ~]# rbd -p ceph-demo info vm3-clone.img
rbd image 'vm3-clone.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 394a48fcc1688
block_name_prefix: rbd_data.394a48fcc1688
format: 2
features: layering
op_features:
flags:
create_timestamp: Sat Oct 17 03:24:59 2020
access_timestamp: Sat Oct 17 03:24:59 2020
modify_timestamp: Sat Oct 17 03:24:59 2020
parent: ceph-demo/rbd-test.img@template
overlap: 10 GiB
[root@ceph-node01 ~]#
Map and mount the clones
[root@ceph-node01 ~]# rbd device map ceph-demo/vm1-clone.img
/dev/rbd2
[root@ceph-node01 ~]# rbd device map -p ceph-demo vm2-clone.img
/dev/rbd4
[root@ceph-node01 ~]# mkdir abc
[root@ceph-node01 ~]# mount /dev/rbd4 /root/abc
[root@ceph-node01 ~]# cd /root/abc/
[root@ceph-node01 abc]# ls
file.log lost+found
[root@ceph-node01 abc]#
Detaching child images from the parent (flatten)
Cloned images retain a reference to the parent snapshot. When you remove the reference from the child clone to the parent snapshot, you effectively "flatten" the image by copying the information from the snapshot to the clone. The time it takes to flatten a clone increases with the size of the snapshot. To delete a snapshot, you must flatten the child images first.
Since a flattened image contains all the information from the snapshot, a flattened image will take up more storage space than a layered clone.
List the children cloned from the parent snapshot
[root@ceph-node01 abc]# rbd children ceph-demo/rbd-test.img@template
ceph-demo/vm1-clone.img
ceph-demo/vm2-clone.img
ceph-demo/vm3-clone.img
[root@ceph-node01 abc]#
Flatten to break the parent-child relationship
[root@ceph-node01 abc]# rbd flatten ceph-demo/vm1-clone.img
Image flatten: 100% complete...done.
[root@ceph-node01 abc]# rbd flatten ceph-demo/vm2-clone.img
Image flatten: 100% complete...done.
[root@ceph-node01 abc]# rbd info ceph-demo/vm1-clone.img
rbd image 'vm1-clone.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3948c22f5e5f
block_name_prefix: rbd_data.3948c22f5e5f
format: 2
features: layering
op_features:
flags:
create_timestamp: Sat Oct 17 03:24:52 2020
access_timestamp: Sat Oct 17 03:24:52 2020
modify_timestamp: Sat Oct 17 03:24:52 2020
[root@ceph-node01 abc]#
Delete the parent snapshot
# 1. Try to delete the template snapshot: it cannot be removed, because it is still protected (and a clone still depends on it)
[root@ceph-node01 abc]# rbd snap rm ceph-demo/rbd-test.img@template
Removing snap: 0% complete...failed.
rbd: snapshot 'template'2020-10-17 03:51:31.135 7f618c8d8c80 -1 librbd::Operations: snapshot is protected
is protected from removal.
# 2. Flatten the remaining child to break the parent-child relationship
[root@ceph-node01 abc]# rbd flatten ceph-demo/vm3-clone.img
Image flatten: 100% complete...done.
# 3. Check whether any children remain
[root@ceph-node01 abc]# rbd children ceph-demo/rbd-test.img@template
# 4. Try to delete the parent snapshot again (it still fails: the snapshot is protected)
[root@ceph-node01 abc]# rbd snap rm ceph-demo/rbd-test.img@template
Removing snap: 2020-10-17 03:52:21.409 7f3141b96c80 -1 librbd::Operations: snapshot is protected0% complete...failed.
rbd: snapshot 'template' is protected from removal.
# 5. Remove the protection
[root@ceph-node01 abc]# rbd snap unprotect ceph-demo/rbd-test.img@template
# 6. Delete the original template snapshot (the parent)
[root@ceph-node01 abc]# rbd snap rm ceph-demo/rbd-test.img@template
Removing snap: 100% complete...done.
[root@ceph-node01 abc]#
Once the parent-child relationship has been broken, the former child images no longer depend on the parent image in any way, and they can still be mapped and mounted as before.
[root@ceph-node01 abc]# rbd device ls
id pool namespace image snap device
0 ceph-demo rbd-demo.img - /dev/rbd0
1 ceph-demo rbd-test.img - /dev/rbd1
2 ceph-demo vm1-clone.img - /dev/rbd2
3 ceph-demo vm1-clone.img - /dev/rbd3
4 ceph-demo vm2-clone.img - /dev/rbd4
[root@ceph-node01 abc]#
RBD Backup and Restore
If the cluster itself becomes unavailable, can the block devices still be used? Obviously not, and under force-majeure conditions it is quite possible for the whole cluster to go down. What can we do about that? We can use an offline backup and restore mechanism: export the images to a tape library or to another Ceph cluster.
# 1. Create a snapshot
[root@ceph-node01 abc]# rbd snap create ceph-demo/rbd-test.img@snap-demo
# 2. List snapshots
[root@ceph-node01 abc]# rbd snap ls ceph-demo/rbd-test.img
SNAPID NAME SIZE PROTECTED TIMESTAMP
14 snap-demo 10 GiB Sat Oct 17 04:09:23 2020
[root@ceph-node01 abc]#
Backup (export)
[root@ceph-node01 abc]# rbd export ceph-demo/rbd-test.img@snap-demo /root/rbd-test.img
Exporting image: 100% complete...done.
[root@ceph-node01 abc]# ls /root/rbd-test.img -lrth
-rw-r--r-- 1 root root 10G 10月 17 04:13 /root/rbd-test.img
[root@ceph-node01 abc]#
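Because rbd export and rbd import both accept "-" for stdout/stdin, the same snapshot can also be streamed straight into a second cluster without the intermediate file. A sketch, where the host backup-site and the pool backup-pool are hypothetical:
# Stream the snapshot over SSH into another Ceph cluster
rbd export ceph-demo/rbd-test.img@snap-demo - | ssh backup-site "rbd import - backup-pool/rbd-test.img"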
Restore (import)
[root@ceph-node01 abc]# rbd import /root/rbd-test.img ceph-demo/rbd-test-new.img
Importing image: 100% complete...done.
[root@ceph-node01 abc]#
Use the imported image
# 1. After the import some extra features have been enabled; they need to be disabled before the image can be mapped
[root@ceph-node01 abc]# rbd device map ceph-demo/rbd-test-new.img
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
# 2. Check the image info
[root@ceph-node01 abc]# rbd info ceph-demo/rbd-test-new.img
rbd image 'rbd-test-new.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 39594d499ff4f
block_name_prefix: rbd_data.39594d499ff4f
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Sat Oct 17 04:18:02 2020
access_timestamp: Sat Oct 17 04:18:02 2020
modify_timestamp: Sat Oct 17 04:18:02 2020
# 3. Disable the features
[root@ceph-node01 abc]# rbd feature disable ceph-demo/rbd-test-new.img exclusive-lock object-map fast-diff deep-flatten
[root@ceph-node01 abc]# rbd device map ceph-demo/rbd-test-new.img
/dev/rbd5
[root@ceph-node01 abc]# rbd device ls
id pool namespace image snap device
0 ceph-demo rbd-demo.img - /dev/rbd0
1 ceph-demo rbd-test.img - /dev/rbd1
2 ceph-demo vm1-clone.img - /dev/rbd2
3 ceph-demo vm1-clone.img - /dev/rbd3
4 ceph-demo vm2-clone.img - /dev/rbd4
5 ceph-demo rbd-test-new.img - /dev/rbd5
# 4. Mount and verify
[root@ceph-node01 abc]# mount /dev/rbd5 /data/abc
[root@ceph-node01 abc]# cd /data/abc/
[root@ceph-node01 abc]# cat file.log
2020年 10月 16日 星期五 23:58:43 EDT
[root@ceph-node01 abc]#
Incremental backup demo
# 1. Create a block device
[root@ceph-node01 ~]# rbd create ceph-demo/rbd-test-k8s.img --image-feature layering --size 10G
# 2. Map the block device
[root@ceph-node01 ~]# rbd device map ceph-demo/rbd-test-k8s.img
/dev/rbd5
# 3. Format it
[root@ceph-node01 ~]# mkfs.ext4 /dev/rbd5
mke2fs 1.42.9 (28-Dec-2013)
。。。
Creating journal (32768 blocks): 完成
Writing superblocks and filesystem accounting information: 完成
# 4. Mount it and write some data
[root@ceph-node01 ~]# mount /dev/rbd5 /data/abe/
[root@ceph-node01 ~]# cd /data/abe/
[root@ceph-node01 abe]# ls
lost+found
[root@ceph-node01 abe]# echo `date` >> a
[root@ceph-node01 abe]# echo `date` >> aa
[root@ceph-node01 abe]# sync
[root@ceph-node01 abe]#
# 5. Create a snapshot
[root@ceph-node01 abe]# rbd snap create ceph-demo/rbd-test-k8s.img@v1
# 6. Full backup (export)
[root@ceph-node01 abe]# rbd export ceph-demo/rbd-test-k8s.img@v1 /root/rbd-test-k8s.img
Exporting image: 100% complete...done.
# 7. Write more data
[root@ceph-node01 abe]# ls
a aa lost+found
[root@ceph-node01 abe]# echo `date` >>a
[root@ceph-node01 abe]# echo `date` >>a
[root@ceph-node01 abe]# echo `date` >>a
[root@ceph-node01 abe]# echo `date` >>a
[root@ceph-node01 abe]# echo `date` >>aa
[root@ceph-node01 abe]# echo `date` >>aa
[root@ceph-node01 abe]# echo `date` >>bb
[root@ceph-node01 abe]# sync
# 8. Create a second snapshot
[root@ceph-node01 abe]# rbd snap create ceph-demo/rbd-test-k8s.img@v2
# 9. Export the diff
[root@ceph-node01 abe]# rbd export-diff ceph-demo/rbd-test-k8s.img@v2 /root/rbd-test-k8s_v2.img
Exporting image: 100% complete...done.
[root@ceph-node01 abe]#
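Note that without --from-snap the command above writes out every block present up to v2, so it is effectively a full diff rather than only the changes made since v1. To export just the delta between v1 and v2, the starting snapshot can be named explicitly; a sketch along the lines of the example above (keep in mind that importing such a diff requires snapshot v1 to already exist on the destination image):
# Export only the blocks that changed between snapshots v1 and v2
rbd export-diff --from-snap v1 ceph-demo/rbd-test-k8s.img@v2 /root/rbd-test-k8s_v1-v2.diff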
# 10. Delete all snapshots
[root@ceph-node01 ~]# rbd snap purge ceph-demo/rbd-test-k8s.img
Removing all snapshots: 100% complete...done.
[root@ceph-node01 ~]#
# 11. Unmap the device
[root@ceph-node01 ~]# rbd device unmap ceph-demo/rbd-test-k8s.img
# 12. Delete the image
[root@ceph-node01 ~]# rbd rm ceph-demo/rbd-test-k8s.img
Removing image: 100% complete...done.
[root@ceph-node01 ~]#
Restore
# 1. Restore the original image from the full backup
[root@ceph-node01 ~]# rbd import /root/rbd-test-k8s.img ceph-demo/rbd-test-k8s.img
Importing image: 100% complete...done.
[root@ceph-node01 ~]#
# 2. Map the restored image (unsupported features have to be disabled first)
[root@ceph-node01 ~]# rbd device map ceph-demo/rbd-test-k8s.img
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@ceph-node01 ~]# rbd info ceph-demo/rbd-test-k8s.img
rbd image 'rbd-test-k8s.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3988541adbc3b
block_name_prefix: rbd_data.3988541adbc3b
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Sat Oct 17 05:43:24 2020
access_timestamp: Sat Oct 17 05:43:24 2020
modify_timestamp: Sat Oct 17 05:43:24 2020
[root@ceph-node01 ~]#
[root@ceph-node01 ~]# rbd feature disable ceph-demo/rbd-test-k8s.img exclusive-lock object-map fast-diff deep-flatten
[root@ceph-node01 ~]#
[root@ceph-node01 ~]# rbd device map ceph-demo/rbd-test-k8s.img
/dev/rbd5
# 3. Mount and verify
[root@ceph-node01 ~]# mount /dev/rbd5 /data/abe/
[root@ceph-node01 ~]# cd /data/abe/
[root@ceph-node01 abe]# ls
a aa lost+found
[root@ceph-node01 abe]# cat a
2020年 10月 17日 星期六 05:24:57 EDT
[root@ceph-node01 abe]# cat aa
2020年 10月 17日 星期六 05:25:00 EDT
# 4. Import the incremental backup
[root@ceph-node01 abe]# rbd import-diff /root/rbd-test-k8s_v2.img ceph-demo/rbd-test-k8s.img
Importing image diff: 100% complete...done.
[root@ceph-node01 abe]#
# 5. Remount and verify
[root@ceph-node01 ~]# umount /data/abe/
[root@ceph-node01 ~]# mount /dev/rbd5 /data/abe/
[root@ceph-node01 ~]# cd /data/abe/
[root@ceph-node01 abe]# ls
a aa bb lost+found
[root@ceph-node01 abe]# cat a
2020年 10月 17日 星期六 05:24:57 EDT
2020年 10月 17日 星期六 05:27:04 EDT
2020年 10月 17日 星期六 05:27:05 EDT
2020年 10月 17日 星期六 05:27:05 EDT
2020年 10月 17日 星期六 05:27:06 EDT
[root@ceph-node01 abe]# cat aa
2020年 10月 17日 星期六 05:25:00 EDT
2020年 10月 17日 星期六 05:27:12 EDT
2020年 10月 17日 星期六 05:27:13 EDT
[root@ceph-node01 abe]# cat bb
2020年 10月 17日 星期六 05:27:17 EDT
[root@ceph-node01 abe]#
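When several incremental diffs pile up, they can be folded into a single file before importing, which keeps the restore down to one full import plus one diff. A sketch with hypothetical file names:
# Merge two consecutive diff files into one
rbd merge-diff /root/diff_v1-v2 /root/diff_v2-v3 /root/diff_v1-v3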
Clean up device mappings on the host
[root@ceph-node01 ~]# rbd device ls
id pool namespace image snap device
0 ceph-demo rbd-demo.img - /dev/rbd0
1 ceph-demo rbd-test.img - /dev/rbd1
2 ceph-demo vm1-clone.img - /dev/rbd2
3 ceph-demo vm1-clone.img - /dev/rbd3
4 ceph-demo vm2-clone.img - /dev/rbd4
5 ceph-demo rbd-test-new.img - /dev/rbd5
6 ceph-demo rbd-test-new2.img - /dev/rbd6
[root@ceph-node01 ~]# rbd device unmap ceph-demo/rbd-test-new2.img
[root@ceph-node01 ~]# rbd device unmap ceph-demo/rbd-test-new.img
Summary
Create a storage pool
# 1. Create the pool
[root@ceph-node01 ~]# ceph osd pool create kube 64 64
pool 'kube' created
# 2. Verify
[root@ceph-node01 ~]# ceph osd pool ls
。。。。
kube
# 3. Enable the rbd application on the pool
[root@ceph-node01 ~]# ceph osd pool application enable kube rbd
enabled application 'rbd' on pool 'kube'
# 4. Initialize the pool for RBD
[root@ceph-node01 ~]# rbd pool init kube
[root@ceph-node01 ~]#
Create rbd block devices (three equivalent forms)
[root@ceph-node01 ~]# rbd create -p kube k8s01 --size 2G
[root@ceph-node01 ~]# rbd create --pool kube --image k8s02 --size 2G
[root@ceph-node01 ~]# rbd create kube/k8s03 --size 2G
View rbd information
[root@ceph-node01 ~]# rbd -p kube ls
[root@ceph-node01 ~]# rbd -p kube ls -l
[root@ceph-node01 ~]# rbd info kube/k8s03
[root@ceph-node01 ~]# rbd -p kube ls -l --format json --pretty-format
rbd features
layering: image layering, the basis for cloning; exclusive-lock: an exclusive lock so only one client writes at a time; object-map: an object bitmap recording which backing objects actually exist; fast-diff: speeds up snapshot diff and usage calculations (requires object-map); deep-flatten: allows flattening clones that have their own snapshots.
[root@ceph-node01 ~]# rbd feature disable kube/k8s03 object-map fast-diff deep-flatten
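Features can also be turned back on later; the order matters, because object-map depends on exclusive-lock and fast-diff depends on object-map. A small sketch:
# Re-enable features in dependency order
rbd feature enable kube/k8s03 exclusive-lock
rbd feature enable kube/k8s03 object-map fast-diff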
Client configuration (pay attention to the yum repository)
[root@ceph-node01 ~]# yum -y install ceph-common
The client needs /etc/ceph/ceph.conf and the keyring file;
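The --user kube flag used below implies a dedicated cephx user scoped to the kube pool. Creating that user is not shown in this post; a plausible sketch (the exact caps are an assumption):
# Create client.kube with rbd caps limited to the kube pool and write out its keyring
ceph auth get-or-create client.kube \
    mon 'profile rbd' \
    osd 'profile rbd pool=kube' \
    -o /etc/ceph/ceph.client.kube.keyring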
[root@ceph-node01 ceph-deploy]# ceph --user kube -s
On the client, run rbd map; the device can then be formatted and mounted:
[root@ceph-node01 ceph-deploy]# rbd map kube/k8s01
/dev/rbd6
[root@ceph-node01 ceph-deploy]# rbd --user kube map kube/k8s02
/dev/rbd7
[root@ceph-node01 ceph-deploy]# lsblk
...
rbd6 251:96 0 2G 0 disk
rbd7 251:112 0 2G 0 disk
[root@ceph-node01 ceph-deploy]#
View and remove device mappings
# 1. View mappings
[root@ceph-node01 ceph-deploy]# rbd showmapped
id pool namespace image snap device
....
6 kube k8s01 - /dev/rbd6
7 kube k8s02 - /dev/rbd7
# 2. On the cluster, check which images are in use (note the LOCK column)
[root@ceph-node01 ceph-deploy]# rbd ls -p kube -l
NAME SIZE PARENT FMT PROT LOCK
k8s01 2 GiB 2
k8s02 2 GiB 2
k8s03 2 GiB 2
[root@ceph-node01 ceph-deploy]#
# 3. Remove a mapping
[root@ceph-node01 ceph-deploy]# rbd --user kube unmap kube/k8s02
[root@ceph-node01 ceph-deploy]# rbd showmapped
id pool namespace image snap device
。。。
6 kube k8s01 - /dev/rbd6
[root@ceph-node01 ceph-deploy]#
Resize an image
[root@ceph-node01 ceph-deploy]# rbd resize -s 5G kube/k8s02
Resizing image: 100% complete...done.
[root@ceph-node01 ceph-deploy]# rbd ls -p kube -l
NAME SIZE PARENT FMT PROT LOCK
k8s01 2 GiB 2
k8s02 5 GiB 2
k8s03 2 GiB 2
[root@ceph-node01 ceph-deploy]#
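Resizing the image does not resize the filesystem inside it: after growing the image, the filesystem must be grown separately, and shrinking requires an explicit flag. A sketch (assuming k8s02 carries ext4 and is mapped; the device path is illustrative):
# Grow the ext4 filesystem to fill the enlarged device
resize2fs /dev/rbd7
# Shrinking an image must be forced, and destroys data the filesystem may still be using
rbd resize -s 1G kube/k8s02 --allow-shrink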
Delete an image
[root@ceph-node01 ceph-deploy]# rbd rm kube/k8s03
Removing image: 100% complete...done.
[root@ceph-node01 ceph-deploy]# rbd ls -p kube -l
NAME SIZE PARENT FMT PROT LOCK
k8s01 2 GiB 2
k8s02 5 GiB 2
[root@ceph-node01 ceph-deploy]#
Move an image to the trash and restore it
[root@ceph-node01 ceph-deploy]# rbd trash move kube/k8s01
[root@ceph-node01 ceph-deploy]# rbd trash list kube
3edbd828dee7c k8s01
[root@ceph-node01 ceph-deploy]# rbd -p kube ls -l
NAME SIZE PARENT FMT PROT LOCK
k8s02 5 GiB 2
[root@ceph-node01 ceph-deploy]# rbd trash restore -p kube 3edbd828dee7c
[root@ceph-node01 ceph-deploy]# rbd -p kube ls -l
NAME SIZE PARENT FMT PROT LOCK
k8s01 2 GiB 2
k8s02 5 GiB 2
[root@ceph-node01 ceph-deploy]#
Snapshot summary
On the Ceph server, create a snapshot: rbd snap create kube/k8s01@k8s01snap01
On the Ceph server, list snapshots: rbd snap list kube/k8s01
To roll back to a snapshot, the disk must be unmounted first — an image that is still in use cannot be rolled back — and the device mapping must be removed as well (see the command sketch after this list);
On the Ceph client: umount first, then rbd unmap /dev/rbd0;
On the Ceph server, roll back the snapshot: rbd snap rollback kube/k8s01@k8s01snap01
On the Ceph client, mount again and check the data; it has been restored from the snapshot;
On the Ceph server, delete a snapshot: rbd snap rm kube/k8s01@k8s01snap01
On the Ceph server, limit the number of snapshots: rbd snap limit set kube/k8s01 --limit 10
On the Ceph server, clear the snapshot limit: rbd snap limit clear kube/k8s01
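Putting the rollback steps above together, the end-to-end sequence looks roughly like this; the mount point /mnt/k8s01 and the device /dev/rbd6 are assumptions for illustration:
# Client: stop using the image before rolling back
umount /mnt/k8s01
rbd --user kube unmap kube/k8s01
# Server: roll the image back to the snapshot
rbd snap rollback kube/k8s01@k8s01snap01
# Client: map and mount again; the data is back at the snapshot's state
rbd --user kube map kube/k8s01
mount /dev/rbd6 /mnt/k8s01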
Clone (layered snapshot) summary
First take a snapshot of the original disk image and put that snapshot into protected mode; from then on, even if you modify the data in the original image, you cannot change the data captured by that first snapshot, because it is protected. Next, on top of that first snapshot, we effectively take another, second-level snapshot — and that is what cloning is.
Image clones can also be created across storage pools.
On the Ceph server, create an image: (omitted)
On the Ceph server, create a snapshot: rbd snap create kube/k8s01@clone01
On the Ceph server, list snapshots: rbd snap ls kube/k8s01
On the Ceph server, set the snapshot to protected mode: rbd snap protect kube/k8s01@clone01
On the Ceph server, clone an image from the protected snapshot: rbd clone kube/k8s01@clone01 kube/cloneimg01 (note: the target may be in a different pool)
On the Ceph server, list images: rbd -p kube ls -l
On the Ceph client, map the clone directly: rbd --user kube map kube/cloneimg01
On the Ceph client, mount it directly: (omitted)
On the Ceph server, see how many images were cloned from the snapshot: rbd children kube/k8s01@clone01
On the Ceph server, delete a cloned image: rbd rm kube/cloneimg02
What if, at this point, we want to delete the original kube/k8s01? If we simply deleted it, the clones created above would be left without a parent, because the snapshot they reference would be gone. This is where flattening comes in: you just flatten whichever clone you want to keep, which copies everything it was referencing into the clone itself so that it no longer depends on anyone else;
On the Ceph server, perform the flatten operation: rbd flatten kube/cloneimg01; after this the clone no longer depends on anything;
On the Ceph server, the snapshot can then be unprotected: rbd snap unprotect kube/k8s01@clone01; deleting it is now safe because nothing depends on it any more;
On the Ceph server, delete the snapshot: rbd snap rm kube/k8s01@clone01
On the Ceph server, an external image can also be imported: rbd import centos.img kube/centos7