Wednesday, January 23, 2013

VMware DirectPath I/O - adding passthrough PCI devices to VM

VMware DirectPath I/O allows a guest OS to directly access physical PCI and PCIe devices connected to the host. There are several things to check before proceeding to configuration:
  • a maximum of 6 PCI devices can be attached to a VM
  • the VM hardware version must be 7 or later
  • Intel VT-d or AMD IOMMU must be enabled
  • PCI devices are connected and marked as available 
In vSphere Client go to Configuration - Advanced Settings (Hardware) and check whether the device is Active (a green icon). If the device is not displayed, go to Edit and select it. In some cases the host will need a reboot - the device will then have an orange icon.

In /etc/vmware/esx.conf the modification is recorded as:
/device/000:000:27.0/owner = "passthru"
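For a quick sanity check you can grep esx.conf for devices owned by passthru. The snippet below runs against a sample fragment (/tmp/esx.conf.sample is illustrative only; on an ESXi host you would grep /etc/vmware/esx.conf directly):

```shell
# Demo on a sample fragment; on an ESXi host, grep /etc/vmware/esx.conf instead.
cat > /tmp/esx.conf.sample <<'EOF'
/device/000:000:27.0/owner = "passthru"
/device/000:000:26.0/owner = "vmkernel"
EOF
# Only devices marked for passthrough should match:
grep '"passthru"' /tmp/esx.conf.sample
# -> /device/000:000:27.0/owner = "passthru"
```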

Go to VM - Edit Settings and add the device.
A memory reservation equal to the configured memory of the VM will be created automatically. However, the reservation is not removed when the device is removed from the VM, so be sure to clean up. If there is no memory reservation, powering on the VM will fail with an error.

Another way to add and remove passthrough devices is PowerCLI. The Add-PassthroughDevice cmdlet does not create the memory reservation, so it has to be done in a second step:
Get-VMHost HostName | Get-PassthroughDevice | Where {$_.State -eq "Active"} | Add-PassthroughDevice -VM VMName
foreach ($vm in (Get-VM VMName)) {Get-VMResourceConfiguration $vm | Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB}

Removal of PCI device and memory reservation cleanup:
Get-VM VMName | Get-PassthroughDevice | Remove-PassthroughDevice -Confirm:$false
Get-VM VMName | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB 0

A VM using DirectPath I/O does not support the following features:
  • snapshots
  • suspend and resume
  • HA
  • FT
  • DRS (the VM can be part of DRS cluster, but it cannot be migrated across hosts)
  • hot adding and removal of devices


Rob Migliore said...

When using the add-passthroughdevice command as you instruct, I get the following error when I turn the VM on:

Power On virtual machine:The systemId does not match the current system or the deviceId, and the vendorId does not match the device currently at 006:00.0

It works fine if I add the device via the GUI, so I inspected the .vmx files for 3 different cases:

VM E3D6 (host .62), added via script:
ectswin7-e3D6/ectswin7-e3D6.vmx:pciPassthru0.deviceId = "1757"
ectswin7-e3D6/ectswin7-e3D6.vmx:pciPassthru0.vendorId = "10de"
ectswin7-e3D6/ectswin7-e3D6.vmx:pciPassthru0.systemId = "51150b2d-9a43-e109-7c74-78e7d163ee98"

VM E7 (host .59), added via GUI:
ectswin7-e7/ectswin7-e7.vmx:pciPassthru0.deviceId = "6dd"
ectswin7-e7/ectswin7-e7.vmx:pciPassthru0.vendorId = "10de"
ectswin7-e7/ectswin7-e7.vmx:pciPassthru0.systemId = "515dad5b-c1c3-d9d2-1da4-78e7d16480f0"

VM E8 (host .62), added via GUI:
ectswin7-e8/ectswin7-e8.vmx:pciPassthru0.deviceId = "6dd"
ectswin7-e8/ectswin7-e8.vmx:pciPassthru0.vendorId = "10de"
ectswin7-e8/ectswin7-e8.vmx:pciPassthru0.systemId = "51150b2d-9a43-e109-7c74-78e7d163ee98"

It seems to get the wrong deviceId when adding via the script. Have you seen this?

Rob Migliore said...

BTW, I'm using ESXi 5.1.0 patched through Mar 31 2013.

I can see my Quadro 4000 card in lspci -v, but don't see any devices with deviceId 1757

00:06:00.0 VGA compatible controller: nVidia Corporation GF100 [Quadro 4000]
Class 0300: 10de:06dd
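A side note on those two IDs: 1757 is simply the decimal form of hex 06dd, which fits the symptom of one code path writing the deviceId in decimal while the VMX expects hex. A quick check of the arithmetic:

```python
# lspci reports the device as 10de:06dd (hex); the script-added VMX
# contains 1757, which is the same number in decimal.
assert int("06dd", 16) == 1757
print(hex(1757))  # -> 0x6dd
```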

none said...

Hey Rob,

indeed: this bug still seems to be around with the latest release of ESXi 5.5 / vCenter 5.5 (as of 26.10.2014).

Also, I get different deviceIds when using the .Net Client against the ESXi host directly versus against vCenter.

The vSphere Web Client also produces garbage in my tests. The best way of doing it seems to be editing the VMX file, then unregistering and re-registering the VM.

There are also differences when looking up the device ID in the client and the CLI. The CLI seems to use 0x-prefixed hex notation (0x######), while the client (and perhaps the API) uses bare hex IDs (e.g. 68b8).

Another strange thing I noticed: if one PCI device has multiple passthrough components (e.g. a VGA card that also has an audio chip), it shows up as one device with 2 child devices in the passthrough menu. The ID of the second device (in my case the audio component) ends up with a negative deviceID (e.g. -256612), which seems kind of odd - maybe another bug?

Adding the component using different VM hardware compatibility versions (e.g. 8 vs 10) also seems to produce different outcomes.

Oh boy - passthrough is a mess from what I've seen so far. Out of 20 attempts it will work maybe 3 times - but once it works on a VM, it works pretty stably.
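The VMX-file edit mentioned above can be sketched as follows, using a throwaway file and the IDs from Rob's example (paths and values are illustrative; on a real host you would edit the actual .vmx on the datastore and then unregister and re-register the VM):

```shell
# Illustrative only: patch a decimal deviceId to the hex value from lspci.
VMX=/tmp/demo.vmx
cat > "$VMX" <<'EOF'
pciPassthru0.deviceId = "1757"
pciPassthru0.vendorId = "10de"
EOF
# 1757 (decimal) -> 6dd (hex), matching lspci's 10de:06dd
sed -i 's/deviceId = "1757"/deviceId = "6dd"/' "$VMX"
grep deviceId "$VMX"   # -> pciPassthru0.deviceId = "6dd"
```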

Unknown said...

I had the exact same issues as RSZ mentioned, with the device ID being written in decimal instead of hex, which made the machines unable to boot. The UUID was also messed up. I found this out by comparing the VMX file of a PowerCLI-added PCI device with that of a point-and-click-added one.

Through PowerCLI I'm configuring View VMs for GRID K1 PCI passthrough, and it works when manually defining the device ID and using esxcli to get the correct VMHost UUID.

Thanks for putting this information online.

Adam Rankin said...

Big thanks for this post, it led me down the right path.

This issue still occurred in ESXi 6.0.0 (Build 3620759). After some digging I saw that the Web UI was not correctly populating the device ID field (via VM->Edit Settings->VM Options->Advanced->Edit Configuration->pciPassthru0.deviceId). Manually correcting it to the output of lspci -v allowed the machine to boot.

Hope this helps.

billy said...

Yeah, same problem here on 6 - I had to manually edit the deviceId to match what I saw in the client software (not the web client), then I could power on the VM.