Migrating XenServer 6.1 to VMware 5.1
Now this is the story all about how
Our life got flipped, turned upside down
And we’d like to take a minute, so just sit right there
And we’ll tell you just how we moved to VMware.
In Xenserver Enterprise born and raised
In the server room where we spent most of our days
Chilling out, maxing, relaxing all cool
And shooting all the servers into the pool
When a couple of updates, they were up to no good
Started making trouble in our neighbourhood
We had numerous crashes and the users got scared
So we said “We’re moving all the servers onto VMware”
We asked for advice and it became clear
VMware is the game that we should play here
If anything I could say that this software was rare
But I thought nah, forget it, let’s get on VMware!
We started moving servers about seven or eight
And couple of days later we were almost straight
Looked at our kingdom we were finally there
Sitting all our servers on VMware!
The real story of how and why we moved is slightly more complex.
We originally made the decision to invest in XenServer in 2009 and by the end of that year had bought XenServer Enterprise, 2 Dell R710 servers, a Dell AX4-5 SAN and a QLogic 5602 fibre switch. We then had to wait for XenServer 5.5 Update 2 to be able to use our hardware. (Yeah, I know, we should have spent more time with the HCL.)
After the wait we soon had XenServer up and running, and over the next couple of months moved most of our infrastructure into Xen using the XenConvert tool. Things worked well and we had very little trouble. Towards the end of 2010 we put the 5.6 update on. Still no problems.
We then had a spate of issues with a server mysteriously rebooting, which we eventually nailed down to a faulty memory module and/or a needed hotfix. No big issue: we simply bought a new set of memory and applied the hotfixes. We then added a new HP P2000 SAN because we needed more storage. Again, all was good. We updated to XenServer 6. Still all was good with the (virtual) world.
Fast forward to February 2013. Our virtual infrastructure needed expanding, so we bought a spanking new HP DL380, hooked it into the infrastructure and then – disappointment. We needed the 6.1 update to add our new server to the farm. No problem, we thought. One evening: shut down the VMs, do a rolling upgrade, power up the VMs. Two to three hours' work and then home.
Little did we know.
At first everything seemed to go well. The update went onto the server and the VMs booted back up. The overall update took about five hours due to a small issue with a couple of VMs not wanting to power up. No biggy – this kind of thing had happened before and I knew the fix.
However, this was just the opening salvo in what would become a two-week campaign of intimidation and fear from XenServer. During the next two weeks I tried updating the Xen Tools on each VM (where it would let me), removing the Xen Tools (again, where it would let me) and applying hotfixes to XenServer. Throughout all this, VMs would hang; I would have to kill VMs and go through the destroy-domain procedure, recover VMs from snapshots, or detach the VHDs, rescan the SR and re-attach the VHDs to recover a VM. The list of problems seemed endless.
After much research, reading of forums and speaking to people, the only real solution seemed to be a move away from XenServer. The question of which hypervisor to use was in no doubt – VMware.
We went for VMware Essentials as we couldn’t see ourselves using more than three physical servers. Alongside this we decided to go with Veeam as our backup/DR solution.
So how did we do it?
We started off by trimming down the number of virtual boxes in our Xen environment so that they would run on just the two older (Dell R710) physical servers. Then we took a server (an HP DL380) that had been slated for Exchange (but never implemented), upped its memory and rebuilt the RAID array. We then created our VMware infrastructure using the two new HP DL380s. This was nice and easy and didn’t cause any issues. Then came the most important part – moving the VMs.
This, as it turned out, was nice and straightforward. We just used VMware Converter and treated the Xen VMs as though they were physical boxes, putting them onto the RAID array of one of the servers. Once we had converted about half the VMs we started moving the remaining VHDs onto a single SAN (the older, smaller Dell AX4-5).
Then came the fun. We removed the HP SAN from the Xen pool, reconfigured the zoning on the switch and then reconfigured the P2000 from a 3.6 TB RAID 10 to a 5.4 TB RAID 5 config. There was a small issue installing the FC card and attaching the HBA in VMware, but a quick search online and a small update/hack later it was attached. We then moved all the VMDKs from the server to the SAN.
We moved the remaining VMs to the VMware infrastructure but left one of the XenServers running because it hosted a completely screwed-up Linux install that was happily running a webserver. We moved that last VHD to the local storage on the XenServer and promptly put the whole nightmare out of our heads.
As for VMware – well, what can I say? Two weeks later and everything is still running smoothly. We have Veeam doing a nightly backup to a local server, and our users aren’t complaining. The final stage of the move will involve removing the final XenServer and then adding the Dell into the mix. We will then use the AX4-5 as an offsite replica target for Veeam to copy the essential VMs to every night.
Bye bye Citrix XenServer
As we are in the week of the obituaries, let’s do another one. A few weeks ago, when vSphere 5.5 was released, I updated our Enterprise Hypervisor Comparison. As Citrix and Red Hat had both released a new version of their hypervisor product, I added those as well. Normally I only need to check for new features or upgraded product limits. But this time was different!
In the column for the new Citrix XenServer 6.2 I had to remove features which were previously included in the product. WTF?
I rarely come across any XenServer deployments, and when I speak to colleagues, customers, etc. I often hear that Citrix XenServer is dead. The small number of XenServer deployments I see and the number of customers changing to Hyper-V or vSphere seem to support this theory. Instead of adding new features and upgrading product limits, I had to retire numerous features.
Features retired in XenServer 6.2:
- Workload Balancing and associated functionality (e.g. power-consumption based consolidation);
- XenServer plug-in for Microsoft’s System Center Operations Manager;
- Virtual Machine Protection and Recovery (VMPR);
- Web Self Service;
- XenConvert (P2V).
Features with no further development and removal in future releases:
- Microsoft System Center Virtual Machine Manager (SCVMM) support;
- Integrated StorageLink (iSL);
- Distributed Virtual Switch (vSwitch) Controller (DVSC). The Open vSwitch remains fully supported and developed.
It has never been a secret that Microsoft and Citrix joined forces, but as expected Citrix XenServer had no place there, as Microsoft invested big in Hyper-V. Now it seems that Citrix has killed XenServer. With version 6.2 they moved XenServer to a fully open source model, essentially giving it back to the community. Of course, much of XenServer already was open source, using code from the Xen Project, the Linux kernel and the Xen Cloud Platform (XCP) initiative. But with the retirement of many existing features it seems that Citrix is stripping XenServer of all Citrix add-ons before giving the basic core back to the open source community.
Citrix still delivers a supported commercial distribution of XenServer, but when an identical free version is available …… At the feature and functionality level, the only difference is that the free version of XenServer cannot use XenCenter for automated installation of security fixes, updates and maintenance releases. The free Citrix XenServer does include XenCenter for server management, just not for patch management. I doubt many customers will buy a version of XenServer for patch management alone.
It’s interesting to see that Gartner has moved Citrix out of the Leaders quadrant and placed it in the Visionaries quadrant. Visionaries in the x86 server virtualization infrastructure market have a differentiated approach or product, but they aren’t meeting their potential from an execution standpoint.
So it looks like Citrix has given up on XenServer and is going to focus on their core business, the desktop and the ecosystem of products around it.
Within their partnership with Microsoft they cannot, or may not, compete with Hyper-V, although XenServer has, in the past, always been a better product than Hyper-V. With the battle over application delivery intensifying, their focus needs to be on their main portfolio. VMware is targeting Citrix’s application delivery platform with VMware Horizon Workspace, and on the desktop front Citrix faces two enemies: Microsoft Remote Desktop Services is targeting their Server Based Computing/XenApp platform, while VMware Horizon View is battling Citrix XenDesktop.
I wonder when we will hear that Citrix finally killed XenServer …..
How To Convert A Xen Virtual Machine To VMware
This article explains how you can convert a Xen guest to a VMware guest. The steps described here assume advanced VMware and Xen knowledge.
Additional software requirements:
- VMware Server 1.xx
- VMware Converter
- Knoppix LiveCD or the distribution’s first CD
Xen -> VMware VM Migration Steps (Kernel Step)
The kernel on the VM to be migrated must support fully virtualized operation. The kernels used for para-virtualized machines running RHEL/Fedora/CentOS as a guest do not support fully virtualized operation by default. The best way to deal with this is to install a standard kernel in the machine as well, port the machine and finally remove the Xen kernel.
1. Since this is a highly risky procedure, FIRST CREATE A BACK-UP OF YOUR VIRTUAL MACHINE!!!
2. Download a kernel with the same version number and architecture as the Xen kernel, except that it should be a generic (non-Xen) one. Use the distribution CD/DVD or any other repository to get it.
3. Use RPM tools to install the kernel.
4. Modify /etc/modprobe.conf to add the proper SCSI and network card modules:
alias eth0 xennet
alias scsi_hostadapter xenblk
will be replaced by
alias eth0 pcnet32
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
Modify /etc/inittab by removing the # in front of the getty line and putting a comment in front of the line containing the Xen console:
1:2345:respawn:/sbin/mingetty --noclear tty1
This is a one-way action. Once you have modified the kernel modules, you won’t be able to properly start the machine under Xen any more; you will receive a kernel panic error message.
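The /etc/modprobe.conf edit above can be sketched as a couple of shell commands. This is a minimal sketch that works on a copy of the file rather than the live /etc/modprobe.conf, using the module names assumed in this article:

```shell
# Work on a copy; on the real VM this would be /etc/modprobe.conf.
cat > modprobe.conf <<'EOF'
alias eth0 xennet
alias scsi_hostadapter xenblk
EOF

# Swap the Xen paravirtual drivers for the emulated-hardware ones.
sed -i -e 's/^alias eth0 xennet$/alias eth0 pcnet32/' \
       -e 's/^alias scsi_hostadapter xenblk$/alias scsi_hostadapter mptbase/' modprobe.conf

# Add the extra SCSI host adapter aliases.
printf 'alias scsi_hostadapter1 mptspi\nalias scsi_hostadapter2 ata_piix\n' >> modprobe.conf

cat modprobe.conf
```

On the real VM, double-check the result against the listing above before removing the Xen kernel.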
Xen -> VMware VM Migration Steps (Disk Step)
To convert a Xen machine to a .vmdk format usable with VMware, a tool called qemu will be used. QEMU is a generic and open source machine emulator and virtualizer. It is also a fast processor emulator using dynamic translation to achieve good emulation speed.
1. Download qemu from the DAG repository. Use the EL5 package for Fedora/RHEL5/CentOS5.
2. Convert the Xen machine to VMware format:
qemu-img convert -O vmdk <source_xen_machine> <destination_vmware.vmdk>
3. At this point we have a valid VMware Server 1.xx disk image. It can be powered on on any VMware Server. We need to do this anyway in order to build a .VMX file that will be used later. This stage also confirms whether the newly created machine runs properly.
3.1 Create a new virtual machine. Do not create a new HDD, but use the previously created vmdk.
3.2 Power it on in order to validate that it is usable and to allow the machine to reconfigure itself.
4. Move the VMware Server virtual machine to a Windows workstation running VMware Converter.
5. Using VMware Converter, convert the VMware Server virtual machine to VMware ESXi.
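For reference, the .VMX file that VMware Server builds in step 3.1 ends up looking roughly like the fragment below. The values here are hypothetical examples, not output from the tools; note the SCSI controller starts out as BusLogic and is changed to LSI Logic in the ESX step:

```
config.version = "8"
virtualHW.version = "4"
displayName = "migrated-xen-guest"
memsize = "512"
scsi0.present = "TRUE"
scsi0.virtualDev = "buslogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "destination_vmware.vmdk"
ethernet0.present = "TRUE"
guestOS = "rhel5"
```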
Xen -> VMware VM Migration Steps (ESX Step)
1. Configure the virtual machine to boot first from CD-ROM drive.
2. Modify the machine’s HDD SCSI controller type from BusLogic to LSI Logic.
Edit Virtual Machine Settings > SCSI Controller 0 > Change type > LSI Logic.
3. Boot using Knoppix or the distribution’s first CD.
4. Mount the VM’s disk and chroot to it.
5. Get the disk architecture using fdisk -l, and modify /etc/fstab accordingly.
6. Create a new initrd image. You must also know the version of the running kernel. For example, if you are running kernel 2.6.18-1234, the initrd command would look like this:
# mkinitrd -v -f /boot/initrd-2.6.18-1234.img 2.6.18-1234
7. Edit /boot/grub/menu.lst to boot from this initrd.
8. Keep your fingers crossed and reboot the machine.
Don’t forget to re-configure your network card.
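Step 5 above (adjusting /etc/fstab to the new disk names) can be sketched like this. The sketch works on a sample file rather than the mounted VM's fstab, and the xvda -> sda rename is an assumption based on typical Xen paravirtual device naming; always confirm the actual device names with fdisk -l first:

```shell
# Sample fstab as a Xen paravirtual guest might have it
# (device names are illustrative assumptions).
cat > fstab.sample <<'EOF'
/dev/xvda1  /      ext3  defaults  1 1
/dev/xvda2  swap   swap  defaults  0 0
EOF

# After `fdisk -l` shows the disk as /dev/sda under the LSI Logic
# controller, rename the devices accordingly.
sed -i 's#/dev/xvda#/dev/sda#g' fstab.sample

cat fstab.sample
```

On the real VM you would run the sed against /etc/fstab inside the chroot from step 4, then continue with the initrd rebuild in step 6.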