App Layering Recipe for NVIDIA GPU

Article ID: CTX289918


Description

Overview

vSphere supports several ways to use NVIDIA graphics cards to provide GPU functionality to virtual machines.  These include:

  • Virtual Shared Graphics Acceleration (vSGA)
  • Virtual Dedicated Graphics Acceleration (vDGA)
  • Virtual Shared Pass-Through Graphics Acceleration (vGPU)

These will be explained in more detail below.

vSGA

This provides the ability to share NVIDIA GPUs among many virtual desktops.  An NVIDIA driver is installed on the hypervisor, and the desktops use a proprietary VMware-developed driver to access the shared GPU.  This option supports only up to DirectX 9 and OpenGL 2.1.  The main advantage of vSGA is that virtual machines can still be migrated when using this technology.

vDGA

This is a hardware pass-through mode where the GPU is not shared but is accessed directly by the virtual machine.  This mode uses the real NVIDIA graphics driver, which attaches directly to the GPU from the VM.  This option is very expensive if used for virtual desktops because each GRID card can support only a very limited number of desktops.  It is more viable for shared Citrix Virtual Apps Session Hosts, as the GPU can then be shared by all users on the session host.  This option supports the latest versions of DirectX and OpenGL and should deliver the graphical performance of a high-end graphics workstation.

vGPU

vGPU has many of the benefits of vDGA but can also share the NVIDIA GPUs.  An NVIDIA VIB driver is installed on the hypervisor, and an NVIDIA driver is installed on the virtual machine.  vGPU supports DirectX 11 and 12, Direct2D, OpenCL 1.2, and OpenGL 4.6.  See the NVIDIA GPU documentation for more details.

NVIDIA supports different GPU profiles for each type of GRID card.  The profiles change the size of the frame buffer from 512 MB to 8 GB, which in turn determines the number of shared GPU sessions a card will support.  Different cards support different numbers of sessions, from 2 to 64 per card.  For example, a card with 16 GB of frame buffer per physical GPU, carved into 2 GB profiles, would support up to 8 vGPU sessions on that GPU.  See the NVIDIA GPU Reference for more information.


Instructions

How to set up NVIDIA GPU cards on vSphere hosts is outside the scope of this article. At a high level, the card(s) must be installed in the host, a “Virtual GPU Manager” software driver must be installed on the host, and the Graphics Device setting on each GPU must be set to “Shared Direct” mode for vGPU.  More information on this process is available here: https://kb.vmware.com/s/article/2033434.
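
If you prefer to verify or change the host graphics mode from a script, the sketch below shows one possible approach with pyvmomi. It is a minimal sketch, not part of the original recipe: the vCenter address, host name, and credentials are placeholder assumptions, and it requires a vSphere version that exposes the host graphics manager API (6.5 or later).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical connection details -- replace with your own vCenter, host, and credentials.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the ESXi host that holds the NVIDIA card.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    view.Destroy()

    gfx = host.configManager.graphicsManager

    # List the GPUs the host sees and their current graphics type.
    for info in gfx.graphicsInfo:
        print(info.deviceName, info.graphicsType)

    # Set the host default to Shared Direct (vGPU). The change applies to the GPUs
    # after the host graphics services restart or the host reboots.
    config = vim.host.GraphicsConfig()
    config.hostDefaultGraphicsType = "sharedDirect"
    config.sharedPassthruAssignmentPolicy = "performance"
    gfx.UpdateGraphicsConfig(config)

    Disconnect(si)
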
Before installing the NVIDIA drivers:

  • Create a new virtual machine for testing.
    1. Edit the settings within vCenter to add a GPU by selecting "Add new device" and then "Shared PCI device."
    2. Select the desired GRID profile (a scripted example of this step is shown after this list).
  • Configure Remote Desktop on the virtual machine, because the VMware Remote Console will no longer function after the NVIDIA drivers are installed. Edit the remote settings in the System control panel to “allow remote connections to this computer”, and ensure the Network Level Authentication (NLA) checkbox is unchecked (a registry-based example is also shown after this list).
  • Make a snapshot of the virtual machine prior to installing the drivers.  
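
The “Shared PCI device” from the first bullet can also be added from a script instead of the vCenter UI. The following is a minimal pyvmomi sketch under stated assumptions: the vCenter address, credentials, VM name ("NVIDIA-Test-VM"), and the vGPU profile string ("grid_p4-2q") are placeholders; use the profile names your GRID card actually exposes, and keep the VM powered off while reconfiguring.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical connection details -- replace with your own environment.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the test virtual machine by name (placeholder name).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "NVIDIA-Test-VM")
    view.Destroy()

    # Build a Shared PCI device backed by a vGPU profile (placeholder profile name).
    backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(vgpu="grid_p4-2q")
    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=device)

    # Reconfigure the VM to attach the vGPU device.
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

    Disconnect(si)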
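
The Remote Desktop changes from the second bullet can likewise be applied inside the VM from an elevated prompt. This sketch uses Python's winreg module to set the standard fDenyTSConnections and UserAuthentication registry values and to enable the built-in Remote Desktop firewall rule group; run it as Administrator on the test VM.

    import subprocess
    import winreg

    # Allow incoming Remote Desktop connections ("allow remote connections to this computer").
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Control\Terminal Server",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "fDenyTSConnections", 0, winreg.REG_DWORD, 0)

    # Disable Network Level Authentication (the NLA checkbox).
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "UserAuthentication", 0, winreg.REG_DWORD, 0)

    # Enable the built-in "Remote Desktop" firewall rule group.
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "set", "rule",
         "group=remote desktop", "new", "enable=Yes"],
        check=True)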

Recipe for NVIDIA Driver Installation

To prepare for the creation of new App Layering layers for NVIDIA:

  1. Create a new template: clone the virtual machine made to test NVIDIA, then remove and delete its hard drive, because App Layering Connector templates do not use a hard drive.
  2. Create a new connector: create an App Layering vSphere Connector using the template.  Use Offload Compositing; a cache is not required because this Connector will not be used that often.
  3. Create a new Platform Layer.

Platform Layer

Create a new Platform Layer.  A separate Platform Layer will likely be required if you need to support layered images both with and without the NVIDIA drivers.

  1. Boot the packaging machine using the new Connector.
  2. Install the NVIDIA Windows drivers.
  3. Under c:\program files\unidesk\uniservice\userexclusions, create a file called NVIDIAExcludes.txt (see the sketch after this list).
  4. Add these 2 lines to that file:

              C:\Program Files\NVIDIA Corporation\
              C:\Program Files (x86)\NVIDIA Corporation\

  5. Reboot.
  6. Reconnect using RDP.
  7. Run Shutdown for Finalize.
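
As a convenience, steps 3 and 4 can be done with a short script run on the packaging machine. This is a minimal sketch assuming the default Unidesk installation path shown in step 3; run it elevated so it can write under Program Files.

    import os

    # Default App Layering user exclusions folder (from step 3).
    exclusions_dir = r"C:\Program Files\Unidesk\Uniservice\UserExclusions"
    exclusions_file = os.path.join(exclusions_dir, "NVIDIAExcludes.txt")

    # The two NVIDIA paths App Layering should exclude (from step 4).
    lines = [
        "C:\\Program Files\\NVIDIA Corporation\\",
        "C:\\Program Files (x86)\\NVIDIA Corporation\\",
    ]

    os.makedirs(exclusions_dir, exist_ok=True)
    with open(exclusions_file, "w") as f:
        f.write("\n".join(lines) + "\n")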

Publish an Image

Publish a Master Image using an MCS or PVS Connector that includes the new Platform Layer. For the MCS Connector, make sure to use the same VMware template used for the packaging connector. This will ensure all of the hardware for the NVIDIA card is configured. For PVS, reference the same machine template in the XenDesktop Setup wizard when creating the targets.

Additional Information

https://support.citrix.com/article/CTX241448