Migrating to a new array can be daunting, but technology like VMware’s Storage vMotion has made it a lot easier. However, if your VMware hosts boot from a LUN on the SAN, the process gets a bit more complex.
In this article I go through the migration process we followed. Please read all of the steps I describe before starting the process, and check vendor documentation for best practices for your environment.
The environment pre-migration
We moved from an old EMC CX4 to a new VNX, and also installed new fibre channel switches, which were connected to the old switches and replicating their zoning information. The new VNX was already presenting some shared LUNs to the VMware environment as datastores, but the boot LUNs were still on the CX4.
We also decided to do a fresh install of ESXi 5.1 on the new boot LUNs once everything was configured. You could migrate the existing boot LUNs instead, but a fresh install was easier and guaranteed a clean OS. The Cisco UCS servers stayed the same.
The step-by-step process
- Put your first VMware host in maintenance mode and shut it down.
- On the new array create a new boot LUN and a new storage group for the VMware host.
- Put the host in the storage group with the boot LUN. Everything was already zoned in our environment, because we were already presenting LUNs to the hosts. If you’re using fibre channel and haven’t presented LUNs from the new array before, you’ll need to do some zoning for the host’s initiators to show up on the array.
- In the storage group, specify the Host LUN ID for the boot LUN as 0; this is a best practice for all operating systems when booting from SAN, although VMware will also accept the lowest number in the storage group. The array-side LUN ID doesn’t matter in this case.
- Remove all zoning/paths to the old array so you don’t get confused about which boot LUN you’re using. This step isn’t strictly necessary, but it keeps things cleaner.
- In UCS Manager (UCSM), unbind this server’s service profile from any service profile templates; you won’t be able to modify the boot policy otherwise.
- While still in UCSM, create a new boot policy.
- a. Add the SAN boot primary and secondary information.
- b. Get the World Wide Port Name (WWPN) from the storage array by looking at the host initiator’s properties: it will be the last half of the World Wide Name (WWN), or unique ID, the array lists. The first half is the WWN of the array itself, which you can find under the hardware system properties (this applies to the EMC VNX).
- c. Make sure when you add the WWNs to the boot policy you are using the correct numbers depending on your fabrics and pathing.
- d. Make sure you add CD-ROM if you plan on booting from a CD or ISO.
- Click the server you’re working on in UCSM.
- Go to the Boot Order tab.
- Click the Modify Boot Policy link and select the new boot policy you created earlier.
- Add the ISO image to the Virtual Media from the KVM console (if you’re installing from an ISO).
- Reboot the server and press F6 to choose how to boot. If you’ve done all the previous steps correctly, you should see both a vDVD option and the boot LUN (e.g., DGC naa.xxxxxxxxxxxxx). If you don’t see the boot LUN, recheck your zoning and boot policy.
- Choose to boot from the vDVD option, and you should see the ESXi installer. Make sure you’re using the Cisco custom ESXi image to get all the drivers you need.
- Eventually you’ll get to a screen where you can specify that you want to install the ESXi OS to the boot LUN.
- If you’d like to add the servers back to a service profile template, change the template to use the new boot policy. If it’s an updating template, this will automatically update any service profiles bound to it, so you may want to make the change during a maintenance window. (If you’re doing the entire migration during planned downtime, you don’t need to unbind the templates at all; you can simply change the service profile template at the beginning.)
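To make the first step repeatable from the command line, here is a hedged sketch of entering maintenance mode and shutting the host down with esxcli. It assumes SSH or ESXi Shell access (we used the vSphere Client), and the reason string is just an example:

```shell
# Sketch only: assumes SSH or ESXi Shell access to the host.
# Evacuate or power off the VMs first, then enter maintenance mode.
esxcli system maintenanceMode set --enable true
esxcli system maintenanceMode get        # should report "Enabled"
# Power the host off; the reason string is an arbitrary example
esxcli system shutdown poweroff --reason "Boot LUN migration"
```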
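The array-side steps (creating the storage group, registering the host, and presenting the boot LUN at Host LUN ID 0) can also be scripted with EMC’s naviseccli instead of Unisphere. A minimal sketch; the SP address, host, group name, and LUN numbers are all made up:

```shell
# Hypothetical names: vnx-spa (SP address), esx01 (host),
# ESX01_SG (storage group), ALU 42 (array-side number of the new boot LUN)
naviseccli -h vnx-spa storagegroup -create -gname ESX01_SG
naviseccli -h vnx-spa storagegroup -connecthost -host esx01 -gname ESX01_SG -o
# -hlu 0 makes the boot LUN appear to the host as LUN 0
naviseccli -h vnx-spa storagegroup -addhlu -gname ESX01_SG -hlu 0 -alu 42
```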
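If your fabric runs on Cisco MDS switches, removing the host’s paths to the old array looks roughly like the following. The zone, zoneset, and VSAN names are placeholders; adapt them to your fabric and repeat on the second fabric:

```shell
# On each fabric switch, drop the host-to-CX4 zone from the active zoneset
configure terminal
zoneset name FABRIC_A_ZS vsan 10
  no member ESX01_CX4
exit
zoneset activate name FABRIC_A_ZS vsan 10   # re-activate to push the change
copy running-config startup-config
```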
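For step b, the unique ID the VNX shows for a port concatenates the node WWN and the port WWPN, so splitting it is mechanical. A small illustration with a made-up ID:

```shell
# Made-up 16-byte ID as the VNX displays it (colon-separated hex)
WWN="50:06:01:60:BB:20:13:5B:50:06:01:68:BB:20:13:5B"
NODE_WWN=$(echo "$WWN" | cut -d: -f1-8)    # first half: array node WWN
PORT_WWPN=$(echo "$WWN" | cut -d: -f9-16)  # last half: port WWPN for the boot policy
echo "node=$NODE_WWN port=$PORT_WWPN"
```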
Completing the configuration
Since this is a fresh install, once the host boots you’ll need to reconfigure your management network and other network settings before adding the host back into your VMware cluster. You’ll have to connect it manually, since VMware sees it as a new host with a different security fingerprint, and reconfigure your vSwitches if you’re not using a distributed vSwitch.
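Most of the post-install network setup happens in the DCUI, but it can also be done from the ESXi Shell. A hedged esxcli sketch, assuming the default vSwitch0/vmk0 names and an example IP address:

```shell
# Assumes the ESXi defaults (vSwitch0, vmk0); names and IP are examples only
esxcli network vswitch standard uplink add --uplink-name vmnic1 --vswitch-name vSwitch0
esxcli network ip interface ipv4 set --interface-name vmk0 \
    --ipv4 192.0.2.10 --netmask 255.255.255.0 --type static
esxcli network ip interface list   # confirm vmk0 is configured as expected
```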
Now you can add any LUNs this host needs to its storage group and rescan your HBAs to see the datastores. I recommend testing, such as rebooting the host a few times to make sure it finds the boot LUN. You also may want to vMotion a test VM to it to make sure your networking is configured correctly.
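From the ESXi Shell, the rescan and sanity checks look like this (a sketch; the datastore names will be your own):

```shell
# Rescan every HBA, then confirm the datastores and the boot device are visible
esxcli storage core adapter rescan --all
esxcli storage filesystem list                    # VNX datastores should be listed
esxcli storage core device list | grep -i "DGC"   # VNX/CX LUNs report vendor DGC
```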
Have you done this a different way? Any other suggestions? If you have any comments or questions, please leave those in the comments section.