How to migrate from VMware and Hyper-V to OpenStack
Introduction
I migrated more than 120 VMware virtual machines (Linux and Windows) from VMware ESXi to OpenStack. In a lab environment I also used these steps to migrate from Hyper-V. Unfortunately I am not allowed to publish the script files I used for this migration, but I can publish the steps and commands that I used to migrate the virtual machines. With these steps and commands, it should be easy to create scripts that perform the migration automatically.
Just to make it clear: these steps do not convert traditional (non-cloud) applications to cloud-ready applications. In this case we started to use OpenStack as a traditional hypervisor infrastructure.
Update 9 September 2015: The newer versions of libguestfs-tools and qemu-img handle VMDK files very well (I had some issues with older versions of the tools), so the migration can be more efficient. I removed the conversion steps from VMDK to VMDK (single file) and from VMDK to RAW; dropping these steps roughly doubles the migration speed.
Disclaimer: This information is provided as-is. I decline any responsibility for damage caused by these steps and/or commands. I suggest you do not try and/or test these commands in a production environment. Some commands are very powerful and can destroy configurations and data in Ceph and OpenStack. So always use this information with care and great responsibility.
Global steps
- Inject VirtIO drivers
- Expand partitions (optional)
- Customize the virtual machine (optional)
- Create Cinder volumes
- Convert VMDK to Ceph
- Create Neutron port (optional)
- Create and boot instance in OpenStack
Specifications
Here are the specifications of the infrastructure I used for the migration:
- Cloud platform: OpenStack Icehouse
- Cloud storage: Ceph
- Windows instances: Windows Server 2003 to 2012R2 (all versions, except Itanium)
- Linux instances: RHEL5/6/7, SLES, Debian and Ubuntu
- Only VMDK files from ESXi can be converted; I was not able to convert VMDK files from VMware Player with qemu-img
- I have no migration experience with encrypted source disks
- OpenStack provides VirtIO paravirtual hardware to instances
Requirements
A Linux ‘migration node’ with:
- Operating system (successfully tested with the following):
- RHEL6 (RHEL7 did not have the “libguestfs-winsupport” package, necessary for NTFS-formatted disks, available at the time of writing)
- Fedora 19, 20 and 21
- Ubuntu 14.04 and 15.04
- Network connection to a running OpenStack environment (duh). Preferably not over the internet, as we need ‘super admin’ permissions; local network connections are also usually faster than connections over the internet.
- Enough hardware power to convert disks and run instances in KVM (sizing depends on the instances you want to migrate in a certain amount of time).
We used a server with 8x Intel Xeon E3-1230 @ 3.3GHz, 32GB RAM and 8x 1TB SSD, and we managed to migrate more than 500GB per hour; the actual rate depends heavily on how much of the instances' disk space is in use. My old company laptop (Core i5, 4GB RAM and an old 4,500rpm HDD) also worked, but obviously the performance was very poor.
- Local sudo (root) permissions on the Linux migration node
- QEMU/KVM host
- Permissions to OpenStack (via Keystone)
- Permissions to Ceph
- Unlimited network access to the OpenStack API and Ceph (I have not figured out the network ports that are necessary)
- VirtIO drivers (downloadable from Red Hat, Fedora, and more)
- Packages (all packages should be available in the default distribution repositories):
- “python-cinderclient” (to control volumes)
- “python-keystoneclient” (for authentication to OpenStack)
- “python-novaclient” (to control instances)
- “python-neutronclient” (to control networks)
- “python-httplib2” (to communicate with web services)
- “libguestfs-tools” (to access the disk files)
- “libguestfs-winsupport” (needed for NTFS-formatted disks; install it separately, on RHEL-based systems only)
- “libvirt-client” (to control KVM)
- “qemu-img” (to convert disk files)
- “ceph” (to import virtual disks into Ceph)
- “vmware-vdiskmanager” (to expand VMDK disks, downloadable from VMware)
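As a hedged example, on Ubuntu 14.04 the toolchain could be installed roughly like this (the package names are assumptions that vary per distribution: on Ubuntu, libvirt-client is part of libvirt-bin, qemu-img lives in qemu-utils and the Ceph CLI in ceph-common; on RHEL/Fedora use yum with the names listed above):

sudo apt-get update
sudo apt-get install python-cinderclient python-keystoneclient python-novaclient \
  python-neutronclient python-httplib2 libguestfs-tools libvirt-bin qemu-utils ceph-common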
Steps
1. Inject VirtIO drivers
1.1 Windows Server 2012
Since Windows Server 2012 and Windows 8.0, the driver store is protected by Windows, which makes it very hard to inject drivers into an offline Windows disk. Windows Server 2012 also does not boot from VirtIO hardware by default. So, I took the following steps to install the VirtIO drivers into Windows. Note that these steps should work for all tested Windows versions (2003/2008/2012).
- Create a new KVM instance. Make sure the Windows vmdk disk is attached as an IDE disk! The network card should be a VirtIO device.
- Add an extra VirtIO disk, so Windows can install the VirtIO drivers.
- Of course you should add a VirtIO ISO or floppy drive which contains the drivers. You could also inject the driver files with virt-copy-in and inject the necessary registry settings (see paragraph 1.4) for automatic installation of the drivers.
- Start the virtual machine and give Windows about two minutes to find the new VirtIO hardware. Install the drivers for all newly found hardware. Verify that there are no devices left without a driver installed.
- Shut down the system and remove the extra VirtIO disk.
- Redefine the Windows vmdk disk as a VirtIO disk (it was IDE) and start the instance. It should now boot without problems. Shut down the virtual machine.
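A minimal sketch of how such a helper instance could be defined with virt-install; the names, paths and the virtio-win ISO location are assumptions, not part of the original procedure:

virt-install --name vm01-drivers --ram 4096 --import \
  --disk path=/data/vm01/vm01.vmdk,bus=ide \
  --disk path=/data/vm01/virtio-scratch.img,size=1,bus=virtio \
  --disk path=/data/iso/virtio-win.iso,device=cdrom \
  --network network=default,model=virtio \
  --os-variant win2k12

After installing the drivers and shutting Windows down, remove the scratch disk and change bus=ide to bus=virtio for the Windows disk (for example with virsh edit vm01-drivers), then boot once more to verify.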
1.2 Linux (kernel 2.6.25 and above)
Linux kernels 2.6.25 and above have built-in support for VirtIO hardware, so there is no need to inject VirtIO drivers. Create and start a new KVM virtual machine with VirtIO hardware. When LVM partitions do not mount automatically, run the following to fix it:
(log in)
mount -o remount,rw /
pvscan
vgscan
reboot
(after the reboot all LVM partitions should be mounted and Linux should boot fine)
Shut down the virtual machine when done.
1.3 Linux (kernel older than 2.6.25)
Some Linux distributions provide VirtIO modules for older kernel versions. Some examples:
- Red Hat provides VirtIO support for RHEL 3.9 and up
- SuSe provides VirtIO support for SLES 10 SP3 and up
The steps for older kernels are:
- Create the KVM instance:
- Linux (prior to kernel 2.6.25): create and boot the KVM instance with IDE hardware. This is limited to 4 disks in KVM, as only one IDE controller can be configured! I have not tried SCSI or SATA, as I only had old Linux machines with no more than 4 disks. Linux should start without issues.
- Load the virtio modules (this is distribution specific; see the Red Hat and SuSe links below).
- Shut down the instance.
- Change all disks to VirtIO disks and boot the instance. It should now boot without problems.
- Shut down the virtual machine when done.
For Red Hat, see: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/ch10s04.html
For SuSe, see: https://www.suse.com/documentation/opensuse121/book_kvm/data/app_kvm_virtio_install.htm
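For RHEL-based guests, the Red Hat document above essentially rebuilds the initrd with the virtio modules included. A hedged sketch, run inside the guest before switching the disks to VirtIO (the exact mkinitrd flags may differ per release):

mkinitrd --with=virtio_pci --with=virtio_blk -f /boot/initrd-$(uname -r).img $(uname -r)

Then shut the instance down and continue with the disk change described above.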
1.4 Windows Server 2008 (and older versions); deprecated
For Windows versions prior to 2012 you could also use these steps to inject the drivers (the steps in paragraph 1.1 should also work for Windows 2003/2008).
- Copy all VirtIO driver files (from the downloaded VirtIO drivers) of the corresponding Windows version and architecture to C:\Drivers\. You can use the tool virt-copy-in to copy files and folders into the virtual disk.
- Copy the *.sys files to %WINDIR%\system32\drivers\ (you may want to use virt-ls to look for the correct directory; note that Windows is not very consistent with lower and upper case characters).
- The Windows registry should tie the hardware IDs to the drivers, but there are no VirtIO drivers installed in Windows by default, so we need to add this ourselves. You can inject the registry file below with virt-win-reg. If you copy the VirtIO drivers to another location than C:\Drivers, you must change the “DevicePath” variable in the last line (the easiest way is to change it on some Windows machine, export the registry file, and use that line).
Registry file (I called the file mergeviostor.reg, as it holds the VirtIO storage information only):
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00000000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00020000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4&rev_00]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1004&subsys_00081af&rev_00]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
"ErrorControl"=dword:00000001
"Group"="SCSI miniport"
"Start"=dword:00000000
"Tag"=dword:00000021
"Type"=dword:00000001
"ImagePath"="system32\\drivers\\viostor.sys"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
"DevicePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,69,00,6e,00,66,00,3b,00,63,00,3a,00,5c,00,44,00,72,00,69,00,76,00,65,00,72,00,73,00,00,00
When these steps have been executed, Windows should boot from VirtIO disks without a BSOD. All other drivers (network, balloon etc.) should also install automatically when Windows boots.
See: https://support.microsoft.com/en-us/kb/314082 (written for Windows XP, but it is still usable for Windows 2003 and 2008).
See also: http://libguestfs.org/virt-copy-in.1.html and http://libguestfs.org/virt-win-reg.1.html
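A hedged sketch of the whole injection from the migration node; the disk path and driver directory are assumptions, and C:\Drivers must exist in the image first (created here with guestfish):

guestfish -a /data/vm01/vm01.vmdk -i mkdir /Drivers
virt-copy-in -a /data/vm01/vm01.vmdk /data/virtio-win/amd64/* /Drivers
virt-copy-in -a /data/vm01/vm01.vmdk /data/virtio-win/amd64/viostor.sys /Windows/System32/drivers
virt-win-reg --merge /data/vm01/vm01.vmdk mergeviostor.reg

virt-ls -a /data/vm01/vm01.vmdk /Windows/System32/drivers is useful to check the exact directory name beforehand.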
2. Expand partitions (optional)
Some Windows servers I migrated had limited free disk space on the Windows partition, not enough to install new management applications. So I used the vmware-vdiskmanager tool with the ‘-x’ argument (available from VMware.com) to increase the disk size. You then still need to expand the partition from within the operating system; you can do that while customizing the virtual machine in the next step.
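For example, growing a disk to 60GB could look like this (size and path are assumptions; note that vmware-vdiskmanager refuses to expand disks that still have snapshots):

vmware-vdiskmanager -x 60GB /data/vm01/vm01.vmdk

Inside Windows you can then extend the partition with diskpart or Disk Management during the customization step.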
3. Customize the virtual machine (optional)
To prepare the operating system to run in OpenStack, you probably want to uninstall some software (like VMware Tools and drivers), change passwords, install new management tooling, etc. You can automate this by writing a script that does it for you (such scripts are beyond the scope of this article). You should be able to inject the script and files into the virtual disk with the virt-copy-in command.
3.1 Automatically start scripts in Linux
I started the scripts within Linux manually as I only had a few Linux servers to migrate. I guess Linux engineers should be able to completely automate this.
3.2 Automatically start scripts in Windows
I chose the RunOnce method to start scripts at Windows boot, as it works on all versions of Windows that I had to migrate. You can put a script in RunOnce by injecting a registry file. RunOnce scripts are only run when a user has logged in, so you should also inject a Windows administrator UserName and Password, and set AutoAdminLogon to ‘1’. When Windows starts, it will automatically log in as the defined user. Make sure to shut down the virtual machine when done.
Example registry file to auto login into Windows (with user ‘Administrator’ and password ‘Password’) and start the C:\StartupWinScript.vbs.:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
"Script"="cscript C:\\StartupWinScript.vbs"
"Parameters"=""
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"UserName"="Administrator"
"Password"="Password"
4. Create Cinder volumes
For every disk you want to import, you need to create a Cinder volume. The volume size in the cinder command does not really matter, as we remove the Ceph device in the next step and recreate it with the import. We create the Cinder volume only to create the link between Cinder and Ceph.
Nevertheless, you should keep the volume size the same as the disk you are planning to import; this keeps the overview in the OpenStack dashboard (Horizon) accurate.
You create a Cinder volume with the following command (the size is in GB; you can check the available volume types with cinder type-list):
cinder create --display-name <name_of_disk> <size> --volume-type <volumetype>
Note the volume id (you can also find it with the following command), as we need the ids in the next step.
cinder list | grep <name_of_disk>
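A worked example with assumed values (a 40GB volume of the hypothetical volume type ‘rbd’):

cinder create --display-name vm01-disk0 40 --volume-type rbd
cinder list | grep vm01-disk0

The first column of the cinder list output is the volume id used in the next step.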
Cinder command information: http://docs.openstack.org/cli-reference/content/cinderclient_commands.html
5. Convert VMDK to Ceph
As soon as the Cinder volumes are created, we can convert the VMDK disk files to RBD blocks (Ceph). But first we need to remove the actual Ceph device that Cinder just created. Make sure you remove the correct Ceph block device!
First, find out in which Ceph pool the disk resides. Then remove the volume from Ceph (the volume-id is the volume id that you noted in the previous step ‘Create Cinder volumes’):
rbd -p <ceph_pool> rm volume-<volume-id>
The next step is to convert the VMDK file into the volume on Ceph (all ceph* arguments will result in better performance; the vmdk_disk_file variable is the complete path to the vmdk file, and the volume-id is the ID that you noted before).
qemu-img convert -p <vmdk_disk_file> -O rbd rbd:<ceph_pool>/volume-<volume-id>
Do this for all virtual disks of the virtual machine.
Be careful! The rbd command is VERY powerful (you could destroy more data on Ceph than intended)!
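A worked example, assuming the pool is called ‘volumes’ and the volume id is aaaabbbb-cccc-dddd-eeee-ffff00001111 (both hypothetical):

rbd -p volumes rm volume-aaaabbbb-cccc-dddd-eeee-ffff00001111
qemu-img convert -p /data/vm01/vm01.vmdk -O rbd rbd:volumes/volume-aaaabbbb-cccc-dddd-eeee-ffff00001111

Afterwards, rbd -p volumes info volume-aaaabbbb-cccc-dddd-eeee-ffff00001111 should show an RBD image with the source disk's virtual size.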
6. Create Neutron port (optional)
In some cases you might want to set a fixed IP address or MAC address. You can do that by creating a port with neutron and using that port in the next step (create and boot instance in OpenStack).
You should first find out the network_name (nova net-list); you need the ‘Label’ column. Only the network_name is mandatory. You can also add security groups by adding
--security-group <security_group_name>
Add this parameter once per security group; if you want to add, say, 6 security groups, you add this parameter 6 times. Then create the port:
neutron port-create --fixed-ip ip_address=<ip_address> --mac-address <mac_address> <network_name> --name <port_name>
Note the id of the neutron port, you will need it in the next step.
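A hedged example for a hypothetical network ‘net-prod’ with one security group (IP and MAC are placeholders):

neutron port-create --fixed-ip ip_address=192.168.10.25 --mac-address fa:16:3e:aa:bb:cc --security-group default net-prod --name vm01-port0

The id field in the command output is the Neutron port id needed in the next step.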
7. Create and boot instance in OpenStack
Now we have everything prepared to create an instance from the Cinder volumes and an optional neutron port.
Note the volume-id of the boot disk.
Now you only need to know the id of the flavor you want to choose. Run nova flavor-list to get the flavor-id of the desired flavor.
Now you can create and boot the new instance:
nova boot <instance_name> --flavor <flavor_id> --boot-volume <boot_volume_id> --nic port-id=<neutron_port_id>
Note the instance ID. Now attach each additional disk of the instance by executing this command (if there are other volumes you want to add):
nova volume-attach <instance_ID> <volume_id>
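Putting it together with hypothetical IDs (flavor 2, the boot volume and port from the previous steps, plus one extra data volume):

nova boot vm01 --flavor 2 --boot-volume aaaabbbb-cccc-dddd-eeee-ffff00001111 --nic port-id=11112222-3333-4444-5555-666677778888
nova volume-attach vm01 99990000-1111-2222-3333-444455556666

The instance should now boot from the converted Ceph volume on VirtIO hardware.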
Source: http://www.npit.nl/blog/2015/08/13/migrate-to-openstack/