VMware vSphere 5.5 – Citrix Known Issues (CTX140135)

  • CTX140135
  • Created on Mar 26, 2014
  • Updated on Jan 16, 2015

Symptoms or Error

Citrix is committed to ensuring compatibility with the latest VMware products. Citrix supports VMware vSphere 5.5, vSphere 5.5 Update 1, and vSphere 5.5 Update 2.

This article outlines issues, and their known solutions, that users of vSphere 5.5, vSphere 5.5 Update 1, and vSphere 5.5 Update 2 must be aware of in conjunction with the following Citrix products:

  • Citrix XenDesktop 5.0 Service Pack 1
  • Citrix XenDesktop 5.5
  • Citrix XenDesktop 5.6
  • Citrix XenDesktop 5.6 Feature Pack 1
  • Citrix XenDesktop 7.0, including App Edition
  • Citrix XenDesktop 7.1, including App Edition
  • Citrix XenDesktop 7.5

Solution

The following information outlines the issues and their solutions:

Issue 1

Adding a VMware vSphere 5.5 host using an HTTP connection fails with Error ID: XDDS:09246D12. vSphere 5.0 Update 3 and vSphere 5.5 require an HTTPS connection, so XenDesktop can no longer connect to them using HTTP.

Solution 1

The workaround in CTX125578 – XenDesktop Error: The hosting infrastructure could not be reached at the specified address is not applicable to vSphere 5.5 and 5.0 Update 3.

Issue 2

Machine creation fails intermittently when using vSphere 5.5 with XenDesktop 7.1. The issue is caused by network latency.

Solution 2

Set the DiskUploadReponseTimeout to 8:0:0.

  1. Go to Hypervisor Connection Properties, located under Citrix Studio > Hosting.
  2. Select the VMware vCenter 5.5 connection and choose Edit Connection.
  3. Go to Advanced > Connection options and enter the following value, which sets the timeout to 8 hours:
    DiskUploadReponseTimeout=8:0:0

Issue 3

The display of an HDX 3D Pro-enabled VDA hosted on ESXi 5.5 with Virtual Shared Graphics Acceleration (vSGA) fails to connect from a multi-monitor setup.

Solution 3

Only a single-monitor configuration is currently supported.

Issue 4

The Citrix Display Adapter fails to install on Microsoft Windows Vista Service Pack 2 after installing the Citrix Virtual Desktop Agent for XenDesktop 7.1 on a machine with VMware Tools installed.

Solution 4

This issue has been resolved in Hotfix Rollup XD560VDAWX64400 or XD560VDAWX86400 (Version 5.6.300). Refer to one of the following:

Replace XdsAgent_x(86/64).msi (Version 5.6.200), present on the XenDesktop 7.1 DVD, with the 5.6.300 XdsAgent_x(86/64).msi, then run the metainstaller on a machine with VMware Tools installed.

Issue 5

The CPU usage reaches 100% upon desktop launch in Citrix VDA client with HDX 3D Pro and VMware Dedicated Graphics Acceleration (vDGA) feature enabled.

Solution 5

The issue is observed when only one CPU core is assigned to the Citrix VDA client. The recommended minimum is two or more cores assigned to the Citrix VDA client.

Issue 6

Virtual Machine creation fails intermittently while creating Streamed VDA Catalog using Provisioning Services (PVS) 7.1.

Solution 6

The behavior has been resolved in Provisioning Services 7.1 Hotfix CPVS71003.
By default, PVS 7.1.3 provides 3 seconds of wait time during target creation using the wizard. If the target virtual machine fails to create, adjust the wait timeout by changing the following registry value in small increments:

Caution! Refer to the Disclaimer at the end of this article before using Registry Editor.

Key: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ProvisioningServices
Name: ESXVMCreationInterval
Type: DWORD
Value: <range of 0 to 60 seconds>
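The registry change above can be captured in a .reg fragment; the 10-second value here is only an illustration, since the article allows any value from 0 to 60 seconds:

```reg
Windows Registry Editor Version 5.00

; Wait time (in seconds) between target VM creations; increase in small steps
[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ProvisioningServices]
"ESXVMCreationInterval"=dword:0000000a
```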

Issue 7

Deleting Provisioned Virtual Desktop Agent Catalog from XenDesktop 5.6 Delivery Controller (Controller) fails.

Solution 7

This issue has been resolved in XenDesktop Hotfixes Update 9. Refer to one of the following:

Issue 8

The VMware SVGA adapter is preferred over the NVIDIA display adapter, which results in the NVIDIA display adapter not being used in an HDX 3D Pro VDA 5.6 session.
The default display adapter can be changed to the NVIDIA adapter within the VDA session, but doing so results in the Virtual Machine console display becoming blank.

Solution 8

Apply one of the following:

Disclaimer

Caution! Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you edit it.

Citrix Image creation

Automated Master Image Creation



WSUS

When deploying a new computer or server image using MDT or SCCM, one of the most time-consuming parts is Windows Update. If the reference image isn't updated on a regular basis, the deployment time will increase month after month.

To achieve the fastest deployment the reference image should be automatically updated with the recent Windows Updates at least once a month.

Automated Master Image Creation-02

The recommended platform for building reference images is the free Microsoft Deployment Toolkit, even if the image is going to be used with SCCM. So let's get started with Automated Master Image Creation.

Open CustomSettings.ini for your Build Deployment share and add the following to the top:

At the bottom, add a section for each OS type you want to automate, keyed by the correct MAC address:

If you're using Windows 2008 R2 or Windows 7, you should add ProductKey= as well to skip the licensing screen.
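The listings themselves were not preserved in this copy. A sketch of what the CustomSettings.ini additions could look like, reusing the lab names that appear later in the article (the MAC address 00:15:5D:01:FB:32, the share mdt-01.ctxlab.local\MDTBuildLab$, and Task Sequence W81); every property value here is an illustrative example, not the original listing:

```ini
[Settings]
Priority=MACAddress, Default

[Default]
SkipBDDWelcome=YES

; One section per OS type, keyed by the build VM's MAC address
[00:15:5D:01:FB:32]
TaskSequenceID=W81
SkipTaskSequence=YES
SkipComputerName=YES
SkipDomainMembership=YES
SkipUserData=YES
SkipCapture=YES
DoCapture=YES
ComputerBackupLocation=\\mdt-01.ctxlab.local\MDTBuildLab$\Captures
BackupFile=W81.wim
SkipSummary=YES
SkipFinalSummary=YES
FinishAction=SHUTDOWN
; For Windows 7 / 2008 R2, add ProductKey= here to skip the licensing screen
```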

Automated Master Image Creation-03

To be able to move a captured image to the correct location in a Task Sequence, you need to create a script in the Scripts folder of the deployment share, in my case \\mdt-01.ctxlab.local\MDTBuildLab$\Scripts.

Call the script Realocate.cmd and add the following:

And then add a Command Line task at the end with this command:
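The script body and the task command were stripped from this copy; here is a minimal sketch, with the share and file names taken from the lab naming above (the production share name is an assumption):

```batch
@echo off
:: Realocate.cmd - move the freshly captured image to the production
:: deployment share, replacing any previous version. Paths are examples.
move /Y "\\mdt-01.ctxlab.local\MDTBuildLab$\Captures\W81.wim" ^
        "\\mdt-01.ctxlab.local\MDTProduction$\Captures\W81.wim"
```

The Command Line task at the end of the Task Sequence would then run something like cmd.exe /c "%SCRIPTROOT%\Realocate.cmd", where %SCRIPTROOT% is MDT's built-in variable pointing at the deployment share's Scripts folder.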

Automated Master Image Creation-04

So in the example above, if a computer with the MAC Address 00:15:5D:01:FB:32 boots on the ISO it will automatically start the Task Sequence W81, capture the image, move it to the Production Deployment Share and shutdown the computer.

All this without lifting a finger: a real zero-touch deployment. Relax and check the Automated Master Image Creation status now and then from your mobile device.

Automated Master Image Creation-05

Wouldn't it be cool if we could leverage PowerShell to automatically build the master images as a Scheduled Task? The following script creates and starts a Hyper-V VM with the correct MAC address and deletes it when the capture is finished.
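The script itself was not preserved here. A minimal sketch using the Hyper-V PowerShell module; the VM name, paths, and ISO are placeholders, and the MAC address matches the CustomSettings.ini example above:

```powershell
# Create a throwaway build VM with a fixed MAC so MDT picks the matching
# CustomSettings.ini section, boot it from the MDT Build Lab ISO, wait for
# the capture task sequence to shut it down, then delete it.
$vmName = 'REF-W81'   # placeholder name
New-VM -Name $vmName -MemoryStartupBytes 4GB `
       -NewVHDPath "D:\VMs\$vmName.vhdx" -NewVHDSizeBytes 60GB
Set-VMNetworkAdapter -VMName $vmName -StaticMacAddress '00155D01FB32'
Set-VMDvdDrive -VMName $vmName -Path 'D:\ISO\MDT-BuildLab.iso'
Start-VM -Name $vmName

# FinishAction=SHUTDOWN in CustomSettings.ini powers the VM off when done
while ((Get-VM -Name $vmName).State -ne 'Off') { Start-Sleep -Seconds 60 }

Remove-VM -Name $vmName -Force
Remove-Item "D:\VMs\$vmName.vhdx"
```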

To further improve the building time of the reference images, the captured images should be copied back to the MDT Build Deployment Share once in a while. This way you will update the latest version instead of the base Microsoft ISO image with hundreds of Windows Updates.

So how long does it take to deploy Citrix XenDesktop 7.1 totally automated, as Gunnar Berger referred to above? 28 minutes from start to finish, including GPMC and the SCVMM Console.

To further reduce time and space, take a look at this article from fellow CTP Aaron Parker: Cleaning up and Reducing the Size of your Master Image.

To learn more check my free training on Automated Master Image Creation.

Source: http://stealthpuppy.com/cleaning-up-and-reducing-the-size-of-your-master-image/

Cleaning up and Reducing the Size of your Master Image

Compressed Car

There’s typically not too much that you can do to reduce the size of your master image. You might use application virtualization or layering solutions to reduce the number of master images, but once you work out what needs to go into the core image, that’s going to dictate the size of the image.

Reducing the size of your master image can help reduce the capacity required to store master images and virtual machines (dedupe helps, of course), and means fewer cycles spent transmitting an image across the network for physical PCs.

An easy win is running the Disk Clean-up tool included in Windows (since Windows XP); fortunately, this tool can be automated to run at the end of a build (e.g. from MDT or ConfigMgr). For physical PCs or persistent desktops, it could even be run as a scheduled task.

Microsoft released an important update for Windows 7 last year that can result in a significant reduction in disk space: Disk Clean-up Wizard addon lets users delete outdated Windows updates on Windows 7 SP1. The same feature was originally delivered with Windows 8. (Windows 8.1 Update 1 is expected to reduce disk space requirements again).

Here’s an example system where I’ve run the Disk Clean-up tool that has resulted in a 3.4 GB reduction in disk usage – on the left is the before image, on the right is after the cleanup. (I’m cheating a bit here, this is a system that has gone from Windows 7 to Windows 7 SP1, hence the reason for such a large change).

Compare Disk Cleanup Before and After

Disk Clean-up can remove a number of interesting items, most of which will actually be applicable for PCs and persistent desktops post-deployment. Here are the items that Disk Clean-up can manage on Windows 8.1:

Disk Cleanup options

To automate Disk Clean-up, use the following steps:

  1. Elevate a command prompt
  2. Run CLEANMGR /SAGESET:<number> (where <number> is any number between 1 and 65535)
  3. Select each of the items to clean up
  4. Click OK

To run Disk Clean-up from a script, run CLEANMGR /SAGERUN:<number> (where <number> is the same number used with /SAGESET).

To automate the process of running Disk Cleanup, the following script can be used to enable all of the tool’s options in the registry and then execute the cleanup process. This script should work for both Windows 7 and Windows 8 and would be useful to run at the end of a deployment. This example uses 100 as the configuration setting, so you would run CLEANMGR /SAGERUN:100 to run Disk Cleanup with these settings.
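A sketch of such a script: it enables every Disk Clean-up handler for configuration set 100 by writing StateFlags0100 under the VolumeCaches key, then runs the tool (run elevated; the set number matches the /SAGERUN:100 example):

```powershell
# Enable all Disk Clean-up handlers for configuration set 100
$volumeCaches = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches'
Get-ChildItem $volumeCaches | ForEach-Object {
    New-ItemProperty -Path $_.PSPath -Name 'StateFlags0100' `
        -Value 2 -PropertyType DWord -Force | Out-Null
}

# Run the cleanup with that configuration set and wait for it to finish
Start-Process -FilePath cleanmgr.exe -ArgumentList '/SAGERUN:100' -Wait
```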

It’s important to note that while Disk Clean-up exists on Windows 7 and Windows 8, it does not exist on Windows Server unless the Desktop Experience feature is installed (i.e. a Remote Desktop Session Host).

If your image is Windows 8.1 or Windows Server 2012 R2, then the following command is available to perform an even more in depth cleanup of the WinSXS folder, making some additional space available:
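The command itself was missing from this copy; the standard DISM component-store cleanup for Windows 8.1 / Server 2012 R2 is shown below. Note that /ResetBase makes the cleanup permanent, so previously installed updates can no longer be uninstalled:

```batch
Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase
```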

Running Disk Clean-up and the above DISM command in a script to clean up your master image should result in a smaller image. Don’t forget that this approach is also useful for persistent desktops – unless you’re using some type of dedupe solution, then there’s potentially gigabytes per desktop that can be removed.

There is one more method for reducing space worth mentioning: the Uninstall-WindowsFeature PowerShell cmdlet in Windows Server 2012 and Windows Server 2012 R2. This, too, can go a long way toward reducing the disk footprint by completely removing features from Windows (making them unavailable for install).

For instance, if you're deploying a Remote Desktop Session Host, there's no need for IIS or Hyper-V to be in the component store. See this blog post for full details: How to Reduce the Size of the Winsxs directory and Free Up Disk Space on Windows Server 2012 Using Features on Demand.
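Stripping those roles from a Session Host image could look like the following (run from elevated PowerShell; -Remove deletes the payload from the component store, and a removed feature can later be reinstalled with Install-WindowsFeature -Source pointing at install media):

```powershell
# Completely remove unneeded roles, including their component-store payload
Uninstall-WindowsFeature -Name Web-Server, Hyper-V -Remove
```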

[May 14 2014] Microsoft has released this update for Windows Server 2008 R2, which can be found on the original KB article here: Disk Cleanup Wizard add-on lets users delete outdated Windows updates on Windows 7 SP1 or Windows Server 2008 R2 SP1. The update is available from Windows Update and WSUS.

BEST PRACTICES PREPARING A PROVISIONING SERVICES VDISK – Repost XenAPPBlog.com

Best Practices Preparing a Provisioning Services vDisk

Let's say you're going to run Windows Updates. Since you've already launched the Target Optimizer tool, that service is disabled, so you need to head into Services, enable it, and start it. Run Windows Update and all is good. When the update is finished, you shut down the machine and switch from Private to Standard mode.

What you didn't remember was to reboot the server for Windows Update to complete its updates. What happens now is that every time your servers reboot, Windows Update will kick in and finish its work.

Having a single administrator perform all the procedures is one thing, but when you hand the solution over to your customer or maintenance team, everybody will probably do it differently, forgetting to flush DNS and so on.

So this script will do all these things for you. Just teach your staff to always run the script after maintenance.

Prerequisites :

Copy Wuinstall to C:\Windows. Run the XenAppCloning tool, add your free license, then configure your settings and save them to the configuration file.

Extract the XAUpdate script to C:\XA65Update, rename the script to XA65Update.cmd and configure the settings required inside that script.

Copy the content below into C:\Program Files\Citrix\Prepare for PVS.cmd

To get rid of this annoying boot screen, you just add HKLM\Software\Citrix\ProvisioningServices\SkipBootMenu to your PVS servers. Now your maintenance/test machine will automatically boot to the newest version of the vDisk.
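Sketched as a single command for an elevated prompt (a DWORD value of 1 is the usual "enabled" setting; be aware that some PVS releases look for this value under the StreamProcess subkey instead):

```batch
reg add "HKLM\SOFTWARE\Citrix\ProvisioningServices" /v SkipBootMenu /t REG_DWORD /d 1 /f
```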

Please be aware that this doesn't work with Provisioning Services 6.2 hosted on Windows 2012. The good news, though: the KMS bug has finally been fixed in Provisioning Services 6.2 (part of Project Excalibur Tech Preview).

So when you, for example, want to patch your Provisioning Services image with the latest Citrix hotfixes, you just select that option and off you go. The server reboots automatically, and when it comes back online you run through the script one more time; this time the server will shut down, ready for you to switch from Private to Standard mode.

If you have any other Best Practices for Provisioning Services, please leave a comment below and share with the community.

Citrix Provisioning Services 7.6 (Repost from CarlWebster.com)

Citrix XenDesktop 7.6, Provisioning Services 7.6 and the XenDesktop Setup Wizard with Write Cache and Personal vDisk Drives

December 24, 2014

Source: carlwebster.com

About Carl Webster

Webster is an independent consultant in the Nashville, TN area and specializes in Citrix, Active Directory and Technical Documentation. Webster has been working with Citrix products for many years starting with Multi-User OS/2 in 1990.

Active Directory, PVS, XenDesktop, XenServer

The original articles I wrote for XenDesktop 7.1 and PVS 7.1, and for XenDesktop 7.5 and PVS 7.1, have proven to be extremely popular. This article will show the same process as the original articles, but use XenDesktop 7.6 and PVS 7.6 and show what differences they bring to the process.

Introduction

A while back, I worked on a project where the customer required the use of a Write Cache drive and a Personal vDisk (PvD) drive with XenDesktop 7.1 using Provisioning Services (PVS) 7.1. Getting information on the process to follow was not easy and, as usual, the Citrix documentation was sorely lacking in details. As with most things involving XenDesktop and/or PVS, there is NO one way or one right way to do anything. This article will give you detailed information on the process I worked out and documented, now updated for XenDesktop 7.6 and PVS 7.6.

Assumptions:

  1. PVS 7.6 is installed, configured and a farm created.
  2. XenDesktop 7.6 is installed and a Site created and configured.
  3. Hosting resources are configured in Studio.
  4. PXE, TFTP and DHCP are configured as needed.

This article is not about the pros and cons of PvD. It is simply about what process can be used to create virtual desktops that require the use of a Write Cache drive and PvD. I will not be discussing the overhead of PvD or the delay it brings to the startup, shutdown and restart processes or the I/O overhead, the storage impact or the storage I/O requirements or what is needed for High Availability or Disaster Recovery needs for PvD.

Lab Setup

All servers in my lab are running Microsoft Windows Server 2012 R2 fully patched. The lab consists of:

  • 1 PVS 7.6 server
  • 1 XenDesktop 7.6 Controller running Studio
  • 1 SQL 2012 SP1 Server
  • 1 Windows 7 SP1 VM

I am using XenServer 6.2 fully patched for my hosting environment. There are separate Storage Repositories for the Virtual Machines (VM), PvD and Write Cache as shown in Figure 1.

Update: This has been tested with XenServer 6.5 with no changes or issues.

Figure 1

The Hosting Resources are configured in Studio as shown in Figure 2.

Figure 2

To start off, in my lab I created my Organization Unit (OU) structure in Active Directory (AD) for my domain, WebstersLab.com, as shown in Figure 3.

Figure 3

One of the reasons to use PvD is to allow users to install applications. In order to do this I created an AD security group, shown in Figure 4, that will contain the AD user accounts and that AD security group will be made a member of the local Administrators security group.

Figure 4

Three AD user accounts were created, shown in Figure 5, for the three different PvD users for this article.

Figure 5

Those three test user accounts were placed in the LocalAdmins AD security group as shown in Figure 6.

Figure 6

Most organizations that use XenDesktop to serve virtual desktops or servers require that Event Logs persist between reboots, or the security team sits in the corner crying. Other items that may need to persist between desktop/VM reboots are antivirus definition files and engine updates. To accomplish this, a Group Policy with Preferences is used.

Why not manually change the file system and registry? Because the XenDesktop Setup Wizard completely ignores all the careful work done by creating folders on the Write Cache drive. When the Write Cache and PvD drives are created, they are empty and will NOT carry over ANY of the manual work done beforehand. So just forget about doing any of the items usually done by pre-creating a Write Cache drive. The Write Cache drive is always created as drive D, and the PvD is created with the drive letter assigned during the wizard.

My Group Policy with Preferences is linked at the OU that will contain the computer accounts created by the XenDesktop Setup Wizard. These are the settings in the policy used for this lab.

  • Computer Configuration\Policies\Administrative Templates\Windows Components\Event Log Service\Application\Control the location of the log file – Enabled with a value of D:\EventLogs\Application.evtx
  • Computer Configuration\Policies\Administrative Templates\Windows Components\Event Log Service\Security\Control the location of the log file – Enabled with a value of D:\EventLogs\Security.evtx
  • Computer Configuration\Policies\Administrative Templates\Windows Components\Event Log Service\System\Control the location of the log file – Enabled with a value of D:\EventLogs\System.evtx
  • Computer Configuration\Preferences\Folder – Action: Update, Path: D:\EventLogs
  • Computer Configuration\Preferences\Control Panel Settings\Local Users and Groups – Action: Update, Group name: Administrators (built-in), Members: ADD, <DomainName>\<Security Group Name>
  • User Configuration\Policies\Administrative Templates\Start Menu and Taskbar\Remove the Action Center icon – Enabled

These settings will:

  • Keep the user from getting popups from the Action Center
  • Create the EventLogs folder on drive D (the Write Cache drive)
  • Redirect the Application, Security and System event logs to the new D:\EventLogs folder
  • Add the domain security group that contains the user accounts who should be local admins to the desktop's local Administrators group

Create the Virtual Machine

Next up is to create a Windows 7 VM to be used as the Master or Golden image. Do just basic configuration of the VM at this time; do not install any applications yet.

Citrix provides a PDF explaining how to optimize a Windows 7 image: http://support.citrix.com/servlet/KbServlet/download/25161-102-648285/XD%20-%20Windows%207%20Optimization%20Guide.pdf

Once the basic VM is built, there are four things that need to be done before joining the VM to the domain.

  1. Fix the WMI error that is in the Application event log. I know it is not a critical error but I am OCD and simply must have error-free event logs. Run the Mr. FixIt (this one actually works) from http://support.microsoft.com/kb/2545227.
  2. Install the hotfix for using a VMXNet3 network card in ESXi. Request and install the hotfix from http://support.microsoft.com/kb/2550978.
  3. From an elevated command prompt, run WinRM QuickConfig. This allows the desktops to work with Citrix Director.
  4. Disable Task Offload by creating the following registry value:
    1. Key: HKLM\System\CurrentControlSet\Services\TCPIP\Parameters
    2. Name: "DisableTaskOffload" (DWORD)
    3. Value: 1
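The steps above collapse into a single command from an elevated prompt:

```batch
reg add "HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f
```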

The Write Cache drive will become drive D when it is created so before installing any software change the CD drive letter from D to another letter. I use Z.

The VM is ready to join the domain. After joining the domain, shutdown the VM.

Now two hard drives need to be added to the VM: one for the Write Cache drive and the other for the PvD drive. NOTHING will be done to these drives; they are just stub holders so Windows knows there should be two additional drives. The Write Cache and PvD drives must be different sizes or strange things can happen. If they are the same size, it is possible the write cache file and page file will be placed on the PvD drive and not the Write Cache drive. To make your life easier, keep the drives different sizes, with the PvD drive being larger. For this article, I will use a 10GB Write Cache drive and a 20GB PvD drive. Make sure the new drives are created in the proper storage locations as shown in Figures 7 through 9.

Figure 7

Figure 8

Figure 9

Power on the VM, login with a domain account, start Computer Management and click on Disk Management as shown in Figure 10.

Figure 10

Click OK to initialize the two new drives as shown in Figure 11.

Figure 11

The two new drives appear in Disk Management as shown in Figure 12.

Figure 12

Leave the drives unformatted and exit Computer Management.

Install PVS Target Device Software

At this time, any software and updates needed can be installed. After all software and updates are installed, mount the PVS 7.6 ISO to the VM, open My Computer and double-click the CD.

When the PVS installer starts, click Target Device Installation on both screens as shown in Figures 13 and 14.

Figure 13

Figure 14

Follow the Installation Wizard to install the PVS Target Device Software. On the last page of the Installation Wizard, leave Launch Imaging Wizard selected and click Finish as shown in Figure 15.

Figure 15

You can exit the PVS Installer screen and unmount/disconnect the PVS 7.6 ISO from the VM’s CD drive.

Click Next on the Imaging Wizard as shown in Figure 16.

Figure 16

Enter the name or IP address of a PVS Server, select the option for Credentials and click Next as shown in Figure 17.

Figure 17

To Create new vDisk, click Next as shown in Figure 18.

Figure 18

Enter a vDisk name, Store and vDisk type, and click Next as shown in Figure 19.

Figure 19

Select the licensing type and click Next as shown in Figure 20.

Figure 20

Verify only the C drive is selected and click Next as shown in Figure 21.

Figure 21

Enter a Target device name, select the MAC address, select the target device Collection and click Next as shown in Figure 22.

Figure 22

Click Optimize for Provisioning Services as shown in Figure 23.

Figure 23

Verify all checkboxes are selected and click OK as shown in Figure 24.

Figure 24

Depending on the .Net Framework versions installed on the VM, the optimization process could take from less than a second to over an hour.

Once the process has completed click Finish as shown in Figure 25.

Figure 25

The vDisk is created.

Once the vDisk is created, a Reboot popup appears as shown in Figure 26. DO NOT reboot at this time. Depending on your hypervisor, you may need to shutdown to make the next change. The VM needs to be configured to boot from the network first and the hard drive second. If this change can be made while the VM is running, make the change and click Yes. If not, click No, shutdown the VM, make the change and power the VM on to continue.

Figure 26

Before we continue, what did the Imaging Wizard do inside of PVS? First, a vDisk was created as shown in Figure 27.

Figure 27

Second, a Target Device was created, as shown in Figure 28, with the MAC address of the VM, linked to the vDisk just created and the Target Device is configured to boot from its hard disk because the vDisk is empty right now.

Figure 28

Once the VM has been configured to boot from the network first and the hard drive second, either power on the VM or click Yes to reboot the VM as previously shown in Figure 26. When the VM is at the logon screen, logon with the same domain account and the Imaging Wizard process continues as shown in Figure 29.

Figure 29

When the Imaging Wizard process is complete, click Finish, as shown in Figure 30, and shutdown the VM.

Note: If there are any errors, click Log, review the log, correct any issues and rerun the Imaging Wizard.

Figure 30

Configure the vDisk in PVS

What has happened is that the Imaging Wizard has now copied the contents of the VM’s C drive into the vDisk. That means the C drive attached to the VM is no longer needed. Detach the C drive from the VM as shown in Figures 31 and 32. DO NOT DELETE the C drive, just detach it.

Figure 31

Figure 32

Now that the VM has no C drive, how will it boot? In the PVS console, go to the Target Device, right-click and select Properties as shown in Figure 33.

Figure 33

Change the Boot from to vDisk as shown in Figure 34.

Figure 34

The vDisk contains everything that was on the original C drive and the vDisk is still set to Private Image mode. That means everything that is done to the vDisk is the same as making changes on the original C drive. Any changes made now will persist. When the vDisk is changed to Standard Image mode, the vDisk is placed in read-only mode and no changes can be made to it. Before the VM is powered on, an AD Machine Account must be created. Right-click the target device, select Active Directory and then Create Machine Account… as shown in Figure 35.

Figure 35

Select the Organization unit from the dropdown list as shown in Figure 36.

Figure 36

Once the correct Organization unit has been selected, click Create Account as shown in Figure 37.

Figure 37

When the machine account is created, click Close as shown in Figure 38. If there is an error reported, resolve the error and rerun the process.

Figure 38

Power on the VM and logon with domain credentials. Open Computer Management and click on Disk Management. Here you can see the holders for the 10GB Write Cache and 20GB PvD drives and the C drive (which is the vDisk) as shown in Figure 39.

Figure 39

Exit Computer Management.

You can also verify the VM has booted from the vDisk by checking the Virtual Disk Status icon in the Notification Area as shown in Figure 40.

Figure 40

As shown in Figure 41, the Virtual Disk Status shows:

  • The vDisk status is Active,
  • The IP address of the PVS server streaming the vDisk,
  • That the Target Device is booting from the vDisk,
  • The name of the vDisk, and
  • The vDisk is in Read/Write mode.

Figure 41

Exit the Virtual Disk Status.

Install the Virtual Delivery Agent

The XenDesktop 7.6 Virtual Delivery Agent (VDA) needs to be installed. Mount the XenDesktop 7.6 ISO to the CD. Double-click the CD drive and the XenDesktop installation wizard starts. Click Start for XenDesktop as shown in Figure 42.

Note: At this time, PvD is only supported for desktop operating systems. PvD will not work and is not supported for XenApp 7.6.

Figure 42

Select Virtual Delivery Agent for Windows Desktop OS as shown in Figure 43.

Figure 43

Select Create a Master Image and click Next as shown in Figure 44.

Figure 44

Select the appropriate HDX 3D Pro option and click Next as shown in Figure 45.

Figure 45

Verify Citrix Receiver is selected and click Next as shown in Figure 46.

Figure 46

Enter the Fully Qualified Domain Name of a XenDesktop 7.6 Controller, click Test connection and, if the test is successful (a green check mark is displayed), click Add as shown in Figures 47 and 48. Repeat until all XenDesktop 7.6 Controllers are entered. Click Next when all Controllers are added.

Figure 47

Figure 48

Verify all options are selected and click Next as shown in Figure 49.

Figure 49

Select the appropriate firewall rules option and click Next as shown in Figure 50.

Figure 50

Click Install as shown in Figure 51.

Figure 51

The VDA installation starts as shown in Figure 52.

Figure 52

When the VDA installation completes, verify Restart machine is selected and click Finish as shown in Figure 53.

Figure 53

Disconnect/unmount the XenDesktop 7.6 ISO from the VM.

Update Virtual Delivery Agent Software

Citrix updates the VDA software often. At the time this article was released, 23-Dec-2014, there was one Public update to the VDA software (ICAWS760WXnn005 where nn is either 32 or 64 for the bitness of your desktop OS).

To check for recommended available updates, in your browser, go to XenDesktop 7.6 Recommended Updates.

Click on Support, select XenDesktop from the dropdown. Change All Versions to XenDesktop 7.6, click on Software Updates and then Public. See if there is any update for XenDesktop 7.6. If there is, download and install the VDA update.

After the VM restarts, log back in to the desktop with domain credentials.

Update Personal vDisk Software

Citrix updates the Personal vDisk software often. At the time this article was released, 23-Dec-2014, there was no update to the Personal vDisk software.

To check for an available update, in your browser, go to http://www.mycitrix.com and logon with MyCitrix.com credentials.

Click on Downloads, select XenDesktop and Components from the two dropdowns. See if there is any update for XenDesktop 7.6. If there is, download and install the Personal vDisk update.

Log back in to the desktop with domain credentials.

Configure Personal vDisk

By default, PvD uses two drive letters: V and P. V is hidden and is a merged view of the C drive with the PvD drive. If drive V is already used, the drive letter can be changed.

If needed, change the hidden PvD drive letter:

  • Key: HKEY_LOCAL_MACHINE\Software\Citrix\personal vDisk\Config
  • Value: VHDMountPoint [REG_SZ]
  • Set this to the drive letter of your choice. Ensure that ":" is appended to the end of your entry (example: X:)

Both user profile data and application/machine settings are stored in the PvD. By default, this is a 50/50 split if the PvD size is 4GB or larger. The percentage to be allocated for applications and machine settings can be configured by setting the following registry value:

  • Key: HKEY_LOCAL_MACHINE\Software\Citrix\personal vDisk\Config
  • Value: PercentOfPvDForApps
    • By default, this value is set to 50
    • Changing this to 80 will result in the V: drive being allocated 80% of the PvD disk

Note: This value must be changed before the PvD is placed into production.
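Both settings can be applied in one go from elevated PowerShell; the X: mount point and the 80% split are only the examples used above:

```powershell
$pvdConfig = 'HKLM:\Software\Citrix\personal vDisk\Config'
# Move the hidden merged-view drive off V: (trailing colon is required)
Set-ItemProperty -Path $pvdConfig -Name 'VHDMountPoint' -Value 'X:'
# Allocate 80% of the PvD to applications and machine settings (default 50)
Set-ItemProperty -Path $pvdConfig -Name 'PercentOfPvDForApps' -Value 80
```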

Everything is now complete. Before running the PvD Inventory, follow your standard procedure for sealing the image. This process is unique to every environment. For my lab, I have no antivirus software and I am not using WSUS, so I have no registry keys to clear out. Manually run the PvD Inventory: click Start, All Programs, Citrix, Update personal vDisk as shown in Figure 54.

Figure 54

The PvD inventory starts. Leave Shut down the system when update is complete selected as shown in Figure 55.

Figure 55

After the inventory completes, the VM is shutdown.

PVS XenDesktop Setup Wizard

Make a copy of the VM and create a template of the copy. That way the original VM is still available just in case.

When making the template, make sure the template is stored on a storage location that is available when running the XenDesktop Setup Wizard. Change the template to boot from network only.

Since the C drive was detached, that leaves the Write Cache and PvD storage locations. If the template is not on storage accessible to all hosts, the error “<host resource> has no available templates defined that are fully accessible by all hosts” is displayed during the XenDesktop Setup Wizard. In the PVS console, click the vDisk Pool node, right-click the vDisk and select Properties as shown in Figure 56.

Figure 56

Change the Access mode to Standard image and Cache type to Cache on device hard drive as shown in Figure 57.

Note: If you leave the Cache type at the default of Cache on server, when you run the XenDesktop Setup Wizard there will not be an option to configure the Write Cache drive size.

Note: I am using Cache on device hard drive for this article. With PVS 7.6, Cache in device RAM with overflow on hard disk is now the popular option. I highly recommend you read the following two articles by Dan Allen before making a decision on the Cache Type to use:

  1. Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow Feature! – Part One
  2. Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow Feature! – Part Two

Figure 57

Right-click the Site and select XenDesktop Setup Wizard as shown in Figure 58.

Figure 58

Note: If you get an error popup that states “No Standard Image vDisk exists in this Site”, that simply means the vDisk is still in Private Image mode.

Click Next as shown in Figure 59.

Figure 59

Enter the name of a XenDesktop 7.6 Controller and click Next as shown in Figure 60.

Figure 60

Select the host resource from those configured in Citrix Studio and click Next as shown in Figure 61.

Figure 61

Enter the logon credentials for the host resource and click OK as shown in Figure 62.

Figure 62

Select the appropriate template and the VDA version and/or functionality desired and click Next as shown in Figure 63.

Figure 63

Select the vDisk and click Next as shown in Figure 64.

Figure 64

Select whether to Create a new catalog or Use an existing catalog and click Next as shown in Figure 65. If you Create a new catalog, enter a Catalog name and Description.

Note: The wizard creates a Machine Catalog in XenDesktop and a Device Collection in PVS with the Catalog name entered here.

Figure 65

Select Windows Desktop Operating System and click Next as shown in Figure 66.

Figure 66

Since we are using PvD, select The same (static) desktop, also select Save changes and store them on a separate personal vDisk and click Next as shown in Figure 67.

Figure 67

Make the appropriate choices.

For this lab, I am creating 3 VMs (desktops) with 2 vCPUs, 2 GB RAM, a 10 GB write cache disk, a 20 GB PvD disk, and changing the PvD drive to Y. Click Next as shown in Figure 68.

Note: If you do not see the option Local write cache disk that means you left the vDisk at the default of Cache on server. Exit this wizard, correct the vDisk properties and rerun the wizard.

Figure 68

Select Create new accounts to have new AD computer accounts created and click Next as shown in Figure 69.

Figure 69

Select the Domain, OU, Account naming scheme and click Next as shown in Figure 70.

Figure 70

Verify the Summary information, click Finish, as shown in Figure 71, and the wizard will begin creating the following:

  • Virtual Machines
  • AD computer accounts
  • Target Devices
  • Machine Catalog in XenDesktop Studio

Figure 71

When the wizard is complete, click Done as shown in Figure 72.

Figure 72

Looking at the Device Collection in the PVS console (you may need to right-click the Site and select Refresh) shows the three target devices with only one powered on at this time as seen in Figure 73.

Figure 73

Looking in Active Directory Users and Computers shows the new computer accounts as seen in Figure 74.

Figure 74

Create XenDesktop Delivery Group

In Citrix Studio, right-click on the Machine Catalogs node and select Refresh. The new Machine Catalog created by the XenDesktop Setup Wizard is shown in Figure 75.

Figure 75

Currently there is no Delivery Group to deliver the desktops. Right-click the Delivery Groups node in Citrix Studio and select Create Delivery Group as shown in Figure 76.

Figure 76

Click Next as shown in Figure 77.

Figure 77

Select the Machine Catalog and the number of machines to be added from the catalog to this delivery group and click Next as shown in Figure 78.

Figure 78

Select Desktops and click Next as shown in Figure 79.

Figure 79

Click Add… as shown in Figure 80.

Figure 80

Use the Select Users or Groups dialog to add users and click OK as shown in Figure 81.

Figure 81

Click Next as shown in Figure 82.

Figure 82

Select the appropriate StoreFront option and click Next as shown in Figure 83.

Figure 83

Enter a Delivery Group name, Display name, an optional Delivery Group description for users and click Finish as shown in Figure 84.

Figure 84

From here, there are many options that can be configured. For this lab, I edited the Delivery Group and set both Weekdays and Weekend peak hours to 24 hours as shown in Figure 85.

Figure 85

On every XenDesktop project I have been on, the customer has wanted all desktops powered on at all times. To do this, on a Controller start a PowerShell session and enter the following commands as shown in Figure 86:

Add-PSSnapin *Citrix*

Get-BrokerDesktopGroup | Set-BrokerDesktopGroup -PeakBufferSizePercent 100

Note: A reader left a comment on the original article saying this setting does not apply to user-assigned desktops. But I never got more than one desktop (out of the three in my lab) to start until I set PeakBufferSizePercent. As soon as I entered that command, the other two desktops powered on within a few seconds.

Figure 86

Exit the PowerShell session. After a few minutes, all the desktops will power on. The desktops will reboot, I think, two times before they are ready for users to log in. Back in the PVS console, the vDisk will show three connections and all three target devices will be powered on as shown in Figures 87 and 88.
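Conceptually, the effect of PeakBufferSizePercent can be sketched like this. This is a simplification of the broker's actual power-management logic, and the function name is mine, not Citrix's:

```python
import math

def powered_on_buffer(machine_count: int, peak_buffer_size_percent: int) -> int:
    """Simplified model: during peak hours the broker keeps roughly this
    many machines powered on as an idle buffer (rounded up)."""
    return math.ceil(machine_count * peak_buffer_size_percent / 100)

print(powered_on_buffer(3, 100))  # 3 -> all three lab desktops stay powered on
```

Setting the percentage to 100 keeps every machine in the group powered on, which matches the behavior observed above.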

Figure 87

Figure 88

Understanding How Personal vDisk Works

Now let us look at how the Write Cache and PvD drives work.

All three desktops are powered on. I will log in as a different user into each desktop.

All three users are presented with the standard Windows 7 desktop configured during the creation of the master image VM as shown in Figure 89.

Figure 89

Before we take a look at user customization and personalization, let’s see what is on the Write Cache and PvD drives. To see everything, I enabled showing hidden files and protected operating system files. Figures 90 and 91 show the Write Cache drive, which contains the write cache file, page file and the EventLogs folder.

Figure 90

Figure 91

Figure 92 shows there is not much of anything useful to see on the PvD drive.

Figure 92

Back in Citrix Studio, refresh the Delivery Group and you will see there are now Sessions in use with no Unregistered or Disconnected machines as shown in Figure 93.

Figure 93

Double-click the Delivery Group to see detailed information as shown in Figure 94.

Figure 94

The first user is Ms. Know-It-All who probably knows Windows 7 better than the helpdesk team. She configures her desktop to get all the Windows 7 “frilly” stuff out of her way as shown in Figure 95.

Figure 95

The second user is Ms. Tree Hugger who wants a pretty cool picture for her background as shown in Figure 96.

Figure 96

The third user is Ms. Astrophysicist who needs a picture of her Tesla as her background as shown in Figure 97.

Figure 97

Now that each user has customized their desktop, reboot each desktop, log back in to each desktop and verify the user’s customizations persisted.

User Installed Software

What about installing software? User1 installed NotePad++ since she knows more than you do anyways, User2 installed Google Chrome to save the world from Internet Exploder and User3 installed Mathematica so she could do some physics work. The three desktops are shown in Figures 98 through 100.

Figure 98

Figure 99

Figure 100

Now that each user has installed an application, reboot each desktop, log back in to each desktop and verify the user’s installed application persisted. Since we are using PvD to allow users to install applications, where are the applications installed? Looking at User1, we can see that Notepad++ was installed to C:\Program Files\Notepad++ as shown in Figure 101.

Figure 101

User2’s Google Chrome is installed to C:\Program Files\Google\Chrome\Application as shown in Figure 102.

Figure 102

User3’s Mathematica is installed to C:\Program Files\Wolfram Research\Mathematica\10.0 as shown in Figure 103.

Figure 103

The C drive view is a combination of the hidden drive, V by default, and C. When users install applications they will install as usual to the C drive. There is no need to install to the visible PvD drive, P by default.

Updating the Master Image

How is the master image updated if an application needs to be installed that all users need? Simple: in the PVS console create a Maintenance version, update it, test it and then make it available to users. In the PVS console, right-click the vDisk and select Versions as shown in Figure 104.

Figure 104

Click New as shown in Figure 105.

Figure 105

A new Maintenance version of the vDisk is created as shown in Figure 106. Click Done.

Figure 106

In the PVS console, go to the Device Collection the original master target device is in, right-click the target device and click Properties as shown in Figure 107.

Figure 107

Change the Type from Production to Maintenance and click OK as shown in Figure 108.

Note: In a production environment, you would have a dedicated Target Device to use for Maintenance versions of vDisks.

Figure 108

In the hypervisor, start that VM and open the VM’s console. An option to boot into either the Production version or the Maintenance version is shown. Select the Maintenance version as shown in Figure 109.

Figure 109

What has happened is that the target device has been configured to boot from a Maintenance image and during the bootup communication, the PVS server recognized the MAC address and offered the target device the maintenance vDisk to boot from. The maintenance vDisk is in Read/Write mode so changes can be made to the vDisk. Log in to the desktop with domain credentials. I installed Adobe Acrobat Reader as shown in Figure 110.

Note: Whatever software is installed, verify that any license agreements and popups are acknowledged and any other configurations needed are done before sealing the image and running the PvD Inventory. For example, in Acrobat Reader I acknowledged the license agreement and disabled updater.

Figure 110

Before running the PvD Inventory, follow your standard procedure for sealing the image. This process is unique to every environment. For my lab, I have no antivirus software and I am not using WSUS so I have no registry keys to clear out. Manually run the PvD Inventory. Click Start, All Programs, Citrix, Update personal vDisk as shown in Figure 111.

Figure 111

The PvD inventory starts. Leave Shut down the system when update is complete selected as shown in Figure 112.

Figure 112

After the inventory completes, the VM shuts down. Once the VM has shut down, in the PVS console, right-click the vDisk and select Versions as shown in Figure 113.

Figure 113

Select the Maintenance version and click Promote as shown in Figure 114.

Figure 114

PVS 7.6 adds the ability to have a Test version for a vDisk that uses PvD. This was not possible prior to version 7.6.

Select Test and click OK as shown in Figure 115.

Figure 115

The vDisk version is promoted to Test, as shown in Figure 116. Click Done.

Figure 116

In the PVS console, go to the Device Collection the original master target device is in, right-click the target device and click Properties as shown in Figure 117.

Figure 117

Change the Type from Maintenance to Test and click OK as shown in Figure 118.

Note: In a production environment, you would have dedicated Target Devices to use for Test versions of vDisks.

Figure 118

In the hypervisor, start that VM and open the VM’s console. An option to boot into either the Production version or the Test version is shown. Select the Test version as shown in Figure 119.

Figure 119

What has happened is that the target device has been configured to boot from a Test image and during the bootup communication, the PVS server recognized the MAC address and offered the target device the test vDisk to boot from. The test vDisk is in Read-only mode, so changes made in the session are not written to the vDisk. Log in to the desktop with domain credentials.

There are several things to notice with the Test version of the vDisk:

  1. The application that was installed for all users is there (Figure 120),
  2. The vDisk is in Read-only mode (Figure 121), but
  3. The write cache is located on the PVS server (Figure 122) because,
  4. There is no Write Cache drive (Figure 123),
  5. There is no PvD drive attached (also Figure 123), but
  6. The stub holders for the write cache and PvD drives are still there (Figure 124).

Figure 120

Figure 121

Figure 122

Figure 123

Figure 124

Once testing is completed, shut down the VM.

Once the VM has shut down, in the PVS console, right-click the vDisk and select Versions as shown in Figure 125.

Figure 125

Select the Test version and click Promote as shown in Figure 126.

Figure 126

Select Immediate and click OK as shown in Figure 127.

Figure 127

The updated vDisk is now available for use as shown in Figure 128. Click Done.

Figure 128

Verify the Master Image Update

Restart the desktops for them to start using the updated vDisk. The desktops will automatically reboot after a few minutes. This is normal. Wait until this reboot is complete before allowing the users access to the desktop. Log in to each desktop and verify the new application is available and the user’s original customizations and installed applications persisted after the update. The three desktops are shown in Figures 129 through 131.

Figure 129

Figure 130

Figure 131

And there you have it: one way to deploy XenDesktop 7.6 with Personal vDisk using PVS.

Citrix lists four ways to do this process in eDocs, three with PVS and one with MCS: http://support.citrix.com/proddocs/topic/provisioning-7/pvs-inventory-vdisks-pvd.html

I think it is strange they have MCS listed as a process in the PVS documentation, but that is beside the point.

I hope this detailed process explanation will help you in working with PvD with XenDesktop 7.6 and PVS 7.6.

There is a PDF available of this article for $1.99.

Thanks

Webster

CITRIX XENDESKTOP AND PVS: A WRITE CACHE PERFORMANCE STUDY

Thursday, July 10, 2014. Source: Exit | the | Fast | Lane


If you’re unfamiliar, PVS (Citrix Provisioning Server) is a vDisk deployment mechanism available for use within a XenDesktop or XenApp environment that uses streaming for image delivery. Shared read-only vDisks are streamed to virtual or physical targets in which users can access random pooled or static desktop sessions. Random desktops are reset to a pristine state between logoffs while users requiring static desktops have their changes persisted within a Personal vDisk pinned to their own desktop VM. Any changes that occur within the duration of a user session are captured in a write cache. This is where the performance demanding write IOs occur and where PVS offers a great deal of flexibility as to where those writes can occur. Write cache destination options are defined via PVS vDisk access modes which can dramatically change the performance characteristics of your VDI deployment. While PVS does add a degree of complexity to the overall architecture, since its own infrastructure is required, it is worth considering since it can reduce the amount of physical computing horsepower required for your VDI desktop hosts. The following diagram illustrates the relationship of PVS to Machine Creation Services (MCS) in the larger architectural context of XenDesktop. Keep in mind also that PVS is frequently used to deploy XenApp servers as well.

[Diagram: relationship of PVS and MCS in the XenDesktop architecture]

PVS 7.1 supports the following write cache destination options (from the Citrix PVS documentation):

  • Cache on device hard drive – Write cache can exist as a file in NTFS format, located on the target-device’s hard drive. This write cache option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
  • Cache on device hard drive persisted (experimental phase only) – The same as Cache on device hard drive, except cache persists. At this time, this write cache method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later).
  • Cache in device RAM – Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
  • Cache in device RAM with overflow on hard disk – When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first.
  • Cache on a server – Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk IO and network traffic.
  • Cache on server persistent – This cache option allows for the saving of changes between reboots. Using this option, after rebooting, a target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image.

Many of these were available in previous versions of PVS, including cache to RAM, but what makes v7.1 more interesting is the ability to cache to RAM with overflow to HDD. This provides the best of both worlds: extreme RAM-based IO performance without the risk, since you can now overflow to HDD if the RAM cache fills. Previously you had to be very careful to ensure your RAM cache didn’t fill completely, as that could result in catastrophe. Granted, if the need to overflow does occur, affected user VMs will be at the mercy of your available HDD performance capabilities, but this is still better than the alternative (BSOD).
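The overflow policy can be illustrated with a toy model. This is only a sketch of the concept, not the actual PVS implementation, and the sizes are made up:

```python
class WriteCacheModel:
    """Toy model of 'cache in device RAM with overflow on hard disk':
    writes land in RAM until the RAM cache fills, then spill to disk."""

    def __init__(self, ram_limit_mb: int):
        self.ram_limit_mb = ram_limit_mb
        self.ram_used_mb = 0
        self.disk_used_mb = 0

    def write(self, size_mb: int) -> None:
        ram_free = self.ram_limit_mb - self.ram_used_mb
        to_ram = min(size_mb, ram_free)
        self.ram_used_mb += to_ram
        # Overflow goes to disk instead of failing (the pre-7.1 risk)
        self.disk_used_mb += size_mb - to_ram

cache = WriteCacheModel(ram_limit_mb=256)
cache.write(200)  # fits entirely in RAM
cache.write(100)  # 56 MB fits, 44 MB overflows to disk
print(cache.ram_used_mb, cache.disk_used_mb)  # 256 44
```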

Results

Even when caching directly to HDD, PVS shows lower IOPS/user numbers than MCS does on the same hardware. We decided to take things a step further by testing a number of different caching options. We ran tests on both Hyper-V and ESXi using our standard 3 user VM profiles against LoginVSI’s low, medium and high workloads. For reference, below are the standard user VM profiles we use in all Dell Wyse Datacenter enterprise solutions:

Profile Name | vCPUs per Virtual Desktop | Nominal RAM (GB) per Virtual Desktop | Use Case
Standard | 1 | 2 | Task Worker
Enhanced | 2 | 3 | Knowledge Worker
Professional | 2 | 4 | Power User

We tested three write caching options across all user and workload types: cache on device HDD, RAM + Overflow (256MB) and RAM + Overflow (512MB). Doubling the amount of RAM cache on more intensive workloads paid off big, netting a reduction of host IOPS to nearly 0. That’s almost 100% of user-generated IO absorbed completely by RAM. We didn’t capture the IOPS generated in RAM here using PVS, but as the fastest medium available in the server and from previous work done with other in-RAM technologies, I can tell you that 1600MHz RAM is capable of tens of thousands of IOPS per host. We also tested thin vs thick provisioning using our high end profile when caching to HDD, just for grins. Ironically, thin provisioning outperformed thick for ESXi; the opposite proved true for Hyper-V. To achieve these impressive IOPS numbers on ESXi it is important to enable intermediate buffering (see links at the bottom). The more impressive RAM + overflow results are highlighted below. Note: IOPS per user below indicates IOPS generation as observed at the disk layer of the compute host. This does not mean these sessions generated close to no IOPS.

Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage GB | Avg IOPS/User | Avg Net KBps/User
ESXi | Device HDD only | Standard | 170 | 95% | 1.2 | 5 | 109
ESXi | 256MB RAM + Overflow | Standard | 170 | 76% | 1.5 | 0.4 | 113
ESXi | 512MB RAM + Overflow | Standard | 170 | 77% | 1.5 | 0.3 | 124
ESXi | Device HDD only | Enhanced | 110 | 86% | 2.1 | 8 | 275
ESXi | 256MB RAM + Overflow | Enhanced | 110 | 72% | 2.2 | 1.2 | 284
ESXi | 512MB RAM + Overflow | Enhanced | 110 | 73% | 2.2 | 0.2 | 286
ESXi | HDD only, thin provisioned | Professional | 90 | 75% | 2.5 | 9.1 | 250
ESXi | HDD only, thick provisioned | Professional | 90 | 79% | 2.6 | 11.7 | 272
ESXi | 256MB RAM + Overflow | Professional | 90 | 61% | 2.6 | 1.9 | 255
ESXi | 512MB RAM + Overflow | Professional | 90 | 64% | 2.7 | 0.3 | 272

For Hyper-V we observed a similar story and, at the recommendation of Citrix, did not enable intermediate buffering. This is important! Citrix strongly recommends not using intermediate buffering on Hyper-V as it degrades performance. Most other numbers are well in line with the ESXi results, save for the cache to HDD numbers being slightly higher.

Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage GB | Avg IOPS/User | Avg Net KBps/User
Hyper-V | Device HDD only | Standard | 170 | 92% | 1.3 | 5.2 | 121
Hyper-V | 256MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.3 | 104
Hyper-V | 512MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.2 | 110
Hyper-V | Device HDD only | Enhanced | 110 | 85% | 1.7 | 9.3 | 323
Hyper-V | 256MB RAM + Overflow | Enhanced | 110 | 80% | 2 | 0.8 | 275
Hyper-V | 512MB RAM + Overflow | Enhanced | 110 | 81% | 2.1 | 0.4 | 273
Hyper-V | HDD only, thin provisioned | Professional | 90 | 80% | 2.2 | 12.3 | 306
Hyper-V | HDD only, thick provisioned | Professional | 90 | 80% | 2.2 | 10.5 | 308
Hyper-V | 256MB RAM + Overflow | Professional | 90 | 80% | 2.5 | 2.0 | 294
Hyper-V | 512MB RAM + Overflow | Professional | 90 | 79% | 2.7 | 1.4 | 294

Implications

So what does it all mean? If you’re already a PVS customer this is a no-brainer: upgrade to v7.1 and turn on “cache in device RAM with overflow to hard disk” now. Your storage subsystems will thank you. The benefits are clear in both ESXi and Hyper-V alike. If you’re deploying XenDesktop soon and debating MCS vs PVS, this is a very strong mark in the “pro” column for PVS. The fact of life in VDI is that we always run out of CPU first, but that doesn’t mean we get to ignore or undersize for IO performance, as that’s important too. Enabling RAM to absorb the vast majority of user write cache IO allows us to stretch our HDD subsystems even further, since their burdens are diminished. Cut your local disk costs by 2/3 or stretch those shared arrays 2 or 3x. PVS cache in RAM + overflow allows you to design your storage around capacity requirements with less need to overprovision spindles just to meet IO demands (resulting in wasted capacity).

References:

DWD Enterprise Reference Architecture

http://support.citrix.com/proddocs/topic/provisioning-7/pvs-technology-overview-write-cache-intro.html

When to Enable Intermediate Buffering for Local Hard Drive Cache

New Cisco Validated Design for XenDesktop on VNX for 5000 users (Reblog from virtualgeek.typepad.com)


This new 5000-user CVD joins the VSPEX CVDs for both VMware View and Citrix XenDesktop in smaller increments – and also CVDs for general purpose cloud use cases based on Hyper-V and VMware.

BTW – you can of course use this to scale up even further in building blocks.

Mike Brennan, the Cisco guy who was one of the folks leading this effort, comments on his findings through the experience in his blog. It’s pretty amazing: 5000 users, up and running in 30 minutes. The EMC VNX7500 in the test was used in a “unified way” – using block storage for UCS boot and as part of a large pool for PVS boot vDisks, but also using NFS for PVS Write Caching. EMC FAST Cache was used liberally to ensure a nice low-latency envelope all the way up to 39 UCS B230 M2 blades for the full 5000 user ramp-up test.

Download the PDF from the source post (warning – it’s a pretty hefty 20MB doc – like all CVDs it is very detailed, which is part of the charm!)


Count The Ways – Flash as Local Storage to an ESXi Host

Posted: 21 Jul 2014   By: Joel Grace

When performance trumps all other considerations, flash technology is a critical component. By deploying Fusion ioMemory, a VM can achieve near-native storage performance. This is known as pass-through (or direct) I/O.

The process of achieving direct I/O involves passing the PCIe device through to the VM, where the guest OS sees the underlying hardware as its own physical device. The ioMemory device is then formatted with a file system by the OS, rather than presented as a Virtual Machine File System (VMFS) datastore. This provides the lowest latency and the highest IOPS and throughput. Multiple ioMemory devices can also be combined to scale to the demands of the application.

Another option is to use ioMemory as a local VMFS datastore. This solution provides high VM performance while maintaining the ability to utilize features like thin provisioning, snapshots, VM portability and Storage vMotion. With this configuration, the ioMemory can be shared by VMs on the same ESXi host, and specific virtual machine disks (VMDKs) stored there for application acceleration.

Either of these options can be used for each of the following design examples.

Benefits of Direct I/O:

  • Raw hardware performance of flash within a VM with Direct I/O
  • Provides the ability to use RAID across ioMemory cards to drive higher performance within the VM
  • Use of any file system to manage the flash storage

Considerations of Direct I/O:

  • ESXi host may need to be rebooted and the CPU VT flag enabled
  • Fusion-io VSL driver will need to be installed in the guest VM to manage the device
  • Once assigned to a VM, the PCI device cannot be shared with any other VMs

Benefits of Local Datastore:

  • High performance of flash storage for VM VMDKs
  • Maintains VMware functions like snapshots and Storage vMotion

Considerations of Local Datastore:

  • Not all VMDKs for a given VM have to reside on local flash; use shared storage for OS VMDKs and flash for application data VMDKs

SQL/SIOS

Many enterprise applications provide their own high availability (HA) features when deployed in bare metal environments. These elements can be used inside VMs to provide an additional layer of protection to an application, beyond that of VMware HA.

Two great SQL examples of this are Microsoft’s AlwaysOn Availability Groups and SteelEye DataKeeper. Fusion-io customers leverage these technologies in bare metal environments to run all-flash databases without sacrificing high availability. The same is true for virtual environments.

By utilizing shared-nothing cluster aware application HA, VMs can still benefit from the flexibility provided by virtualization (hardware abstraction, mobility, etc.), but also take advantage of local flash storage resources for maximum performance.

Benefits:

  • Maximum application performance
  • Maximum application availability
  • Maintains software-defined datacenter

Operational Considerations:

  • 100% virtualization is a main goal, but performance is critical
  • Does the virtualized application have additional HA features?
  • SAN/NAS based datastore can be used for Storage vMotion if a host needs to be taken offline for maintenance

CITRIX

The Citrix XenDesktop and XenApp application suites also present interesting use cases for local flash in VMware environments. Oftentimes these applications are deployed in a stateless fashion via Citrix Provisioning Services, where several desktop clones or XenApp servers boot from centralized read-only golden images. Citrix Provisioning Services stores all data changes during the user’s session in a user-defined write cache location. When a user logs off or the XenApp server is rebooted, this data is flushed clean. The write cache location can be stored across the network on the PVS servers, or on local storage devices. By storing this data on a local Fusion-io datastore on the ESXi host, access time to active user data is drastically reduced, making for a better Citrix user experience and higher VM density.

Benefits:

  • Maximum application performance
  • Reduced network load between VMs and Citrix PVS Server
  • Avoids slow performance when SAN is under heavy IO pressure
  • More responsive applications for better user experience

Operational Considerations:

  • Citrix Personal vDisks (persistent desktop data) should be directed to the PVS server storage for resiliency.
  • PVS vDisk images can also be stored on ioDrives in the PVS server, further increasing performance while eliminating the dependence on SAN altogether.
  • ioDrive capacity is determined by Citrix write cache sizing best practices, typically a 5GB .vmdk per XenDesktop instance.

70 desktops x 5GB write cache = 350GB total cache size (365GB ioDrive could be used in this case).
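That sizing rule is simple enough to express as a formula; here is a minimal sketch (the 5 GB per-desktop figure is the typical value cited above):

```python
def total_write_cache_gb(desktops: int, cache_per_desktop_gb: int = 5) -> int:
    """Total local flash capacity needed for PVS write cache files."""
    return desktops * cache_per_desktop_gb

print(total_write_cache_gb(70))  # 350 -> a 365 GB ioDrive covers it
```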


VMware users can boost their systems to achieve maximum performance and acceleration using flash memory. Flash memory will maintain maximum application availability during heavy I/O pressure and makes your applications more responsive, providing a better user experience. Flash can also reduce network load between VMs and the Citrix PVS Server.

Joel Grace Sales Engineer
Source: http://www.fusionio.com/blog/count-the-ways.
Copyright © 2014 SanDisk Corporation. All rights reserved.

Best Practices for Configuring Provisioning Services Server on a Network

Source: Citrix KB: Best Practices for Configuring Provisioning Services Server on a Network

  • CTX117374
  • Created onMar 26, 2014
  • Updated onOct 21, 2014
Article Topic : Configuration, Performance

Information

This article explains the best practices when configuring Citrix Provisioning Server on a network. Use these best practices when troubleshooting issues such as slow performance, image build failures, lost connections to the streaming server, or excessive retries from the target device.

Disabling Spanning Tree or Enabling PortFast

With Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol, the ports are placed into a Blocked state while the switch transmits Bridge Protocol Data Units (BPDUs) and listens to ensure the BPDUs are not in a loopback configuration.

The amount of time it takes to complete this convergence process depends on the size of the switched network, which might allow the Pre-boot Execution Environment (PXE) to time out.

To resolve this issue, disable STP on edge-ports connected to clients or enable PortFast or Fast Link depending on the managed switch brand. Refer to the following table:

Switch Manufacturer    Fast Link Option Name
Cisco                  PortFast or STP Fast Link
Dell                   Spanning Tree FastLink
Foundry                Fast Port
3COM                   Fast Start
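As an illustrative sketch (not taken from the Citrix article), enabling PortFast on a Cisco switch edge port connected to a PVS target device might look like the following; the interface name is an example and should match your environment:

```
switch# configure terminal
switch(config)# interface GigabitEthernet0/1
switch(config-if)# spanning-tree portfast
switch(config-if)# end
```

PortFast should only be applied to edge ports connected to end devices, never to switch-to-switch uplinks, because it bypasses the STP listening and learning states.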

Large Send Offload

  • The TCP Large Send Offload option allows the TCP layer to build a TCP message up to 64 KB long and send it in one call down the stack through IP and the Ethernet device driver. The adapter then re-segments the message into multiple TCP frames to transmit on the wire. The TCP packets sent on the wire are either 1500-byte frames for a Maximum Transmission Unit (MTU) of 1500 or up to 9000-byte frames for an MTU of 9000 (jumbo frames).
  • Re-segmenting and storing packets to send in large frames causes latency and timeouts to the Provisioning Server. This should be disabled on all Provisioning Servers and clients.
  • To disable Large Send Offload, open the Network Interface Card (NIC) properties and select the Advanced tab.
    Some NICs do not offer this setting in the properties page. In this case, you must make a registry change to disable Large Send Offload. To disable it, add the following entries to the registry. Caution! Refer to the Disclaimer at the end of this article before using Registry Editor.
  • Target Device
    Note: For Provisioning Server 6.0 and beyond, the BNNS driver is no longer used for Windows 7 and 2008, so this registry key is not applicable. However, BNNS is still used for Windows XP and 2003.
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNNS\Parameters
    Key: “EnableOffload” (dword)
    Value: “0”
  • Provisioning Server and Target Device
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters
    Key: “DisableTaskOffload” (dword)
    Value: “1”
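A minimal sketch of how the two values above could be applied in one step, as a .reg file merged on the machine in question (the BNNS key applies only to target devices running the BNNS driver, per the note above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNNS\Parameters]
"EnableOffload"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters]
"DisableTaskOffload"=dword:00000001
```

Back up the registry before merging, as the Disclaimer at the end of this article advises.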

Auto Negotiation

Auto Negotiation can cause long starting times and PXE timeouts, especially when starting multiple target devices. Citrix recommends hard coding all Provisioning Server ports (server and client) on the NIC and on the switch.

Stream Service Isolation

New advancements in network infrastructure, such as 10 Gb networking, may not require the stream service to be isolated from other traffic. If security is a primary concern, then Citrix recommends isolating or segmenting the PVS stream traffic from other production traffic. However, in many cases, isolating the stream traffic leads to a more complicated networking configuration and can actually decrease network performance. For more information on whether the streaming traffic should be isolated, refer to the following article:

Is Isolating the PVS Streaming Traffic Really a Best Practice?

Firewall and Server to Server Communication Ports

Open the following ports in both directions:

  • UDP 6892 and 6904 (For Soap to Soap communication – MAPI and IPC)
  • UDP 6905 (For Soap to Stream Process Manager communication)
  • UDP 6894 (For Soap to Stream Service communication)
  • UDP 6898 (For Soap to Mgmt Daemon communication)
  • UDP 6895 (For Inventory to Inventory communication)
  • UDP 6903 (For Notifier to Notifier Communication)
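As a hedged sketch (not part of the original article), the port list above could be turned into Windows Firewall rules generated programmatically. The rule names below are illustrative; the commands open each UDP port in both directions, as the article requires:

```python
# Sketch: generate 'netsh advfirewall' commands for the PVS server-to-server
# UDP ports listed above. Rule names are illustrative, not Citrix-defined.
PVS_UDP_PORTS = {
    6892: "Soap to Soap (MAPI)",
    6904: "Soap to Soap (IPC)",
    6905: "Soap to Stream Process Manager",
    6894: "Soap to Stream Service",
    6898: "Soap to Mgmt Daemon",
    6895: "Inventory to Inventory",
    6903: "Notifier to Notifier",
}

def netsh_rules(ports=PVS_UDP_PORTS):
    """Return one netsh command per port per direction (in and out)."""
    cmds = []
    for port, desc in sorted(ports.items()):
        for direction in ("in", "out"):
            cmds.append(
                "netsh advfirewall firewall add rule "
                f'name="PVS {desc} ({direction})" dir={direction} '
                f"action=allow protocol=UDP localport={port}"
            )
    return cmds

if __name__ == "__main__":
    for cmd in netsh_rules():
        print(cmd)
```

Running the script prints the commands rather than executing them, so they can be reviewed before being applied on each Provisioning Server.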

Disclaimer

Caution! Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you edit it.