PVS image process gets BSOD on boot up


Article ID: CTX339880

Description

Using the PVS Upgrade Wizard to upgrade the PVS drivers failed. Because of this, we used reverse imaging to remove the PVS drivers and install the 1912 CU4 drivers. On reboot, during creation of a new vDisk, a BSOD was encountered. The error text, written in black across the BSOD with characters dropped so that odd words appear, read: "ERROR: BNIStack faile. netork stack col not e initialie" (that is, "BNIStack failed. Network stack could not be initialized").

Resolution

The solution in this case was to create a NEW VM and target device for the imaging process. 
1.  While booted into the new image, reset the machine account in AD and rejoin the machine to the domain.
2.  In the Provisioning Services Imaging Wizard, instead of selecting an existing target device, add a new one; the "Add Target Device" dialog will appear: 
[Image: Add Target Device dialog]

Problem Cause

In this case, the problem was a mismatch of the VM parameters on the maintenance VM being used in the process. The maintenance VM was the one that the administrator had historically used for imaging and updating images. It was likely created before the current update (CU2) was applied.

You can verify whether this is the issue affecting your imaging process:
From an SSH prompt on a Citrix Hypervisor host, collect the UUID of an old VM that you might be using. Remember when typing these commands that xe is case sensitive. 
xe vm-list name-label=<VM Name>
This will return the UUID of the VM.
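
If you prefer to script this, xe can return just the UUID via its minimal output mode, so you can capture it in a shell variable for the next command (the variable name here is illustrative):
OLD_UUID=$(xe vm-list name-label=<VM Name> params=uuid --minimal)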

Then, query for the VM's platform parameters like this: 
xe vm-param-get param-name=platform uuid=<uuid from above> 

Here is a sample output from a lab machine: 
xe vm-list name-label=WIN-Group-TD         
uuid ( RO)           : 6e3b4afc-7266-2f1d-b969-6b58e249f803
     name-label ( RW): WIN-Group-TD
    power-state ( RO): running

Then query for the platform parameters using that UUID:
xe vm-param-get param-name=platform uuid=6e3b4afc-7266-2f1d-b969-6b58e249f803
timeoffset: -7443; device-model: qemu-upstream-compat; videoram: 8; hpet: true; acpi_laptop_slate: 1; secureboot: false; viridian_apic_assist: true; apic: true; device_id: 0002; cores-per-socket: 2; viridian_crash_ctl: true; pae: true; vga: std; nx: true; viridian_time_ref_count: true; viridian_stimer: true; viridian: true; acpi: 1; viridian_reference_tsc: true

Then perform the same steps for a NEW VM. Copy the results out to a text file so that you can carefully compare the outputs. 
Keep an eye out for "secureboot: false"; if it is missing from the old VM's parameters, that mismatch might be the cause of the BSOD when using the old VM.
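
To make the comparison easier, you can redirect each output to a file, split the semicolon-separated list onto separate lines, and diff the two files (the file paths below are illustrative):
xe vm-param-get param-name=platform uuid=<old VM uuid> | tr ';' '\n' | sort > /tmp/oldvm.txt
xe vm-param-get param-name=platform uuid=<new VM uuid> | tr ';' '\n' | sort > /tmp/newvm.txt
diff /tmp/oldvm.txt /tmp/newvm.txt
Any line that appears in only one file is a parameter present on one VM but not the other.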

If there is any difference, you might find that you need to create new VMs for imaging tasks, or possibly even for streaming if the old target devices are not working.

Issue/Introduction

If your Citrix Hypervisor hosts have been upgraded, the VM parameters on old VMs might be missing some key values, which can cause a BSOD on boot, especially during the imaging process.