http://kashyapc.com/

A raw image is a plain blob of data exposed directly to the VM as a block device; it cannot hold snapshots by itself. qemu-img can convert the data from a raw image into a qcow2 image, which does support snapshots.
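
A minimal sketch of that conversion, assuming hypothetical paths and a guest that is shut off:

# convert a raw disk image to qcow2 (do this while the guest is shut off)
qemu-img convert -f raw -O qcow2 /export/vmimgs/guest.img /export/vmimgs/guest.qcow2

# confirm the new format
qemu-img info /export/vmimgs/guest.qcow2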

Previously, I posted about snapshots here, which briefly discussed the different types of snapshots. In this post, let's explore how external snapshots work. To quickly rehash: external snapshots are a type of snapshot where there's a base image (the original disk image), and its difference/delta (the snapshot image) is stored in a new QCOW2 file. Once the snapshot is taken, the original disk image goes into a 'read-only' state, and it can then be used as a backing file for other guests.

It’s worth mentioning here that:

  • The original disk image can be in either RAW or QCOW2 format. When a snapshot is taken, 'the difference' is stored in a new QCOW2 file.
  • The virtual machine has to be running, live. With live snapshots, no guest downtime is experienced when a snapshot is taken.
  • At this moment, external (live) snapshots work for 'disk-only' snapshots (not VM state). Work to snapshot both disk and VM state (and also to revert to an external disk snapshot state) is in progress upstream (slated for libvirt-0.10.2).

Before we go ahead, here's some version info. I'm testing on a Fedora-17 host, and the guest (named 'daisy') is running Fedora-18 (Test Compose):


[root@moon ~]# rpm -q libvirt qemu-kvm ; uname -r
libvirt-0.10.1-3.fc17.x86_64
qemu-kvm-1.2-0.2.20120806git3e430569.fc17.x86_64
3.5.2-3.fc17.x86_64
[root@moon ~]#

External disk-snapshots (live) using QCOW2 as the original image:
Let's see an illustration of external (live) disk-only snapshots. First, let's ensure the guest is running:


[root@moon qemu]# virsh list
 Id    Name                           State
----------------------------------------------------
 3     daisy                          running

[root@moon qemu]#

Then, list all the block devices associated with the guest:


[root@moon ~]# virsh domblklist daisy --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /export/vmimgs/daisy.qcow2

[root@moon ~]#

Next, let's create a disk-only snapshot of the guest this way, while the guest is running:


[root@moon ~]# virsh snapshot-create-as daisy snap1-daisy "snap1 description" \
--diskspec vda,file=/export/vmimgs/snap1-daisy.qcow2 --disk-only --atomic

Some details of the flags used:
- Passing a '--diskspec' parameter adds the 'disk' elements to the snapshot XML file
- '--disk-only' takes a snapshot of only the disk
- '--atomic' ensures the snapshot either runs to completion or fails without making any changes
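
To see the 'disk' elements that --diskspec adds, the snapshot XML can be dumped; a quick check, assuming the snapshot created above:

# print the libvirt XML describing the snapshot (includes the new disk source file)
virsh snapshot-dumpxml daisy snap1-daisy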

Let's check information about the just-taken snapshot by running qemu-img:


[root@moon ~]# qemu-img info /export/vmimgs/snap1-daisy.qcow2
image: /export/vmimgs/snap1-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 2.5M
cluster_size: 65536
backing file: /export/vmimgs/daisy.qcow2
[root@moon qemu]#

Apart from the above, I created two more snapshots (with the same syntax as above, sketched below after the tree) for illustration purposes. Now, the snapshot tree looks like this:


[root@moon ~]# virsh snapshot-list daisy --tree
snap1-daisy
|
+- snap2-daisy
|
+- snap3-daisy

[root@moon ~]#
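
For reference, snap2 and snap3 were created with the same syntax as snap1; presumably something along these lines (the file names are assumed from the backing-file output shown next):

virsh snapshot-create-as daisy snap2-daisy "snap2 description" \
    --diskspec vda,file=/export/vmimgs/snap2-daisy.qcow2 --disk-only --atomic
virsh snapshot-create-as daisy snap3-daisy "snap3 description" \
    --diskspec vda,file=/export/vmimgs/snap3-daisy.qcow2 --disk-only --atomic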

For the above example, the image file chain [ base <- snap1 <- snap2 <- snap3 ] is read as: snap3 has snap2 as its backing file, snap2 has snap1 as its backing file, and snap1 has the base image as its backing file. We can see the backing file info with qemu-img:


#--------------------------------------------#
[root@moon ~]# qemu-img info /export/vmimgs/snap3-daisy.qcow2
image: /export/vmimgs/snap3-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 129M
cluster_size: 65536
backing file: /export/vmimgs/snap2-daisy.qcow2
#--------------------------------------------#
[root@moon ~]# qemu-img info /export/vmimgs/snap2-daisy.qcow2
image: /export/vmimgs/snap2-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 3.6M
cluster_size: 65536
backing file: /export/vmimgs/snap1-daisy.qcow2
#--------------------------------------------#
[root@moon ~]# qemu-img info /export/vmimgs/snap1-daisy.qcow2
image: /export/vmimgs/snap1-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 2.5M
cluster_size: 65536
backing file: /export/vmimgs/daisy.qcow2
[root@moon ~]#
#--------------------------------------------#
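
As an aside, newer qemu-img releases (later than the 1.2 build shown above) can print the whole chain in one go; a sketch:

# walk and print every image in the backing file chain (newer qemu-img only)
qemu-img info --backing-chain /export/vmimgs/snap3-daisy.qcow2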

Now, if we do not need snap2 any more, we can pull the data it contains into snap3, making snap1 snap3's backing file, with a virsh blockpull operation as below:


#--------------------------------------------#
[root@moon ~]# virsh blockpull --domain daisy --path /export/vmimgs/snap3-daisy.qcow2 \
--base /export/vmimgs/snap1-daisy.qcow2 --wait --verbose
Block Pull: [100 %]
Pull complete
#--------------------------------------------#
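
A side note: if --wait is not passed, the pull runs as a background block job; its progress can be checked, and the job cancelled if needed, with virsh blockjob, roughly like this:

# report progress of the active block job on the vda disk
virsh blockjob daisy vda --info

# cancel the job, if necessary
virsh blockjob daisy vda --abort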

Here, --path is the path to the active snapshot file, and --base is the path to the backing file that should remain as the new base: data from the files above it in the chain is pulled into the top image. So in the above example, the data held in snap2 is pulled into snap3, flattening the backing file chain so that snap1 becomes snap3's backing file, which can be verified by running qemu-img again:


[root@moon ~]# qemu-img info /export/vmimgs/snap3-daisy.qcow2
image: /export/vmimgs/snap3-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 145M
cluster_size: 65536
backing file: /export/vmimgs/snap1-daisy.qcow2
[root@moon ~]#

A couple of things to note here, after a discussion with Eric Blake (thank you):

- If we list the snapshot tree again (now that the 'snap2-daisy.qcow2' backing file is no longer in use),


[root@moon ~]# virsh snapshot-list daisy --tree
snap1-daisy
|
+- snap2-daisy
|
+- snap3-daisy
[root@moon ~]#

one might wonder: why does snap3 still appear under snap2? The thing to note is that the above is the snapshot chain, which is independent of each virtual disk's backing file chain. 'virsh snapshot-list' still reports the information as it was at the time of snapshot creation (not what we've done to the disk chain afterwards). So, going by the above snapshot tree, if we were to revert to snap1 or snap2 (once reverting to disk snapshots is available), it would still be possible, meaning:

It's possible to go from this state:

base <- snap123 (data from snap1 and snap2 pulled into snap3)

and still revert to:

base <- snap1 (thus undoing the changes in snap2 & snap3)
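
Relatedly, if snap2's record is genuinely not wanted any more, libvirt's metadata for it can be dropped without touching any image files (libvirt of this vintage cannot delete or merge the external snapshot files themselves); a hedged sketch:

# remove only libvirt's record of the snapshot; the qcow2 files are left untouched
virsh snapshot-delete daisy snap2-daisy --metadata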

External disk-snapshots (live) using RAW as the original image:
With external disk snapshots, the backing file can be RAW as well (unlike 'internal snapshots', which work only with QCOW2 files, where the snapshots and deltas are all stored in a single QCOW2 file).

A quick illustration is below; the commands are self-explanatory. Note the change (from RAW to QCOW2) in the block device associated with the guest before and after taking the disk snapshot, as seen in the virsh domblklist output.


#-------------------------------------------------#
[root@moon ~]# virsh list | grep f17btrfs2
7 f17btrfs2 running
[root@moon ~]#
#-------------------------------------------------#
[root@moon ~]# qemu-img info /export/vmimgs/f17btrfs2.img
image: /export/vmimgs/f17btrfs2.img
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 1.5G
[root@moon ~]#
#-------------------------------------------------#
[root@moon qemu]# virsh domblklist f17btrfs2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       hda        /export/vmimgs/f17btrfs2.img

[root@moon qemu]#
#-------------------------------------------------#
[root@moon qemu]# virsh snapshot-create-as f17btrfs2 snap1-f17btrfs2 "snap1-f17btrfs2-description" \
--diskspec hda,file=/export/vmimgs/snap1-f17btrfs2.qcow2 --disk-only --atomic
Domain snapshot snap1-f17btrfs2 created
[root@moon qemu]#
#-------------------------------------------------#
[root@moon qemu]# qemu-img info /export/vmimgs/snap1-f17btrfs2.qcow2
image: /export/vmimgs/snap1-f17btrfs2.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 196K
cluster_size: 65536
backing file: /export/vmimgs/f17btrfs2.img
[root@moon qemu]#
#-------------------------------------------------#
[root@moon qemu]# virsh domblklist f17btrfs2 --details
Type Device Target Source
------------------------------------------------
file disk hda /export/vmimgs/snap1-f17btrfs2.qcow2
[root@moon qemu]#
#-------------------------------------------------#

Also note: all the snapshot XML files, where libvirt tracks snapshot metadata, are located under /var/lib/libvirt/qemu/snapshot/$guestname (and the guest's original libvirt XML file is located under /etc/libvirt/qemu/$guestname.xml).
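
A quick way to peek at that metadata (paths assume the default configuration and the 'daisy' guest used above):

# per-snapshot XML files libvirt keeps for the guest
ls /var/lib/libvirt/qemu/snapshot/daisy/

# the guest's domain XML, viewed the supported way rather than reading /etc directly
virsh dumpxml daisy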
