Bug 1260180 - PCI passthrough leads to hang of KVM virtual machine
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Alex Williamson
Virtualization Bugs
Depends On:
Reported: 2015-09-04 12:23 EDT by Robert McSwain
Modified: 2016-05-29 16:09 EDT (History)
9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-01-07 16:12:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Robert McSwain 2015-09-04 12:23:08 EDT
What problem/issue/behavior are you having trouble with?  What do you expect to see?
One of our customers requires the use of a National Instruments PCIe-GPIB card. 

Until now, this card was used on HP Z-series workstations under RHEL 5.8. 
We are currently converting to KVM virtualization, and the new base is a RHEL 5.8 VM. PCIe devices are passed through to the VM.
This passthrough does not work with the GPIB card: the test program (gpibtsw) freezes the entire VM. The RHEL 7 hypervisor keeps running, so we are able to kill the VM and start it again, but inspecting the logs gives no insight into the problem.

We're using only two PCI devices in passthrough mode:

- a self-developed card that interfaces with our tester hardware
- the NI GPIB card

Our card works fine; only the NI card (or rather, the self-test program for that card) is able to bring the VM into a hung state.

I tried to get more details from NI (National Instruments) about what the self-test program does, but without success so far.

Files will be provided in a future update.

Where are you experiencing the behavior?  What environment?
HP Z640 workstation with RHEL 7.0 and 7.1 as the hypervisor and RHEL 5.8 as the guest.

When does the behavior occur? Frequently?  Repeatedly?   At certain times?
Comment 6 Robert McSwain 2015-09-23 17:47:48 EDT
Using the serial console, this now causes the VM to crash rather than simply hang, with the following output on the console:

Unable to handle kernel paging request at ffffffffd8ad2760 RIP: 
 [<ffffffff88756f92>] :ni488k:ni488k-unversioned0000063+0x32/0x1a0
PGD 203067 PUD 115e48067 PMD 0 
Oops: 0000 [1] SMP 
last sysfs file: /power/state
CPU 5 
Modules linked in: ni488lock(PU) ni488k(PU) nipalk(PU) nikal(PU) nls_utf8 t82622(PU) nfs hidp rfcomm l2cap bluetooth nfsd exportfs nfs_acl auth_rpcgss ipv6 xfrm_nalgo crypto_api autofs4 lockd sunrpc dm_mirror dm_multipath scsi_dh video backlight sbs power_meter hwmon i2c_ec dell_wmi wmi button battery asus_acpi acpi_memhotplug ac parport_pc lp parport floppy ftdi_sio i2c_piix4 usbserial i2c_core ide_cd virtio_net pcspkr serio_raw PcieCID1100(PU) tpm_tis cdrom tpm virtio_balloon tpm_bios dm_raid45 dm_message dm_region_hash dm_log dm_mod dm_mem_cache ata_piix libata sd_mod scsi_mod virtio_blk virtio_pci virtio_ring virtio ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 9687, comm: gpibtsw Tainted: P     ---- 2.6.18-308.el5 #1
RIP: 0010:[<ffffffff88756f92>]  [<ffffffff88756f92>] :ni488k:ni488k-unversioned0000063+0x32/0x1a0
RSP: 0018:ffff81060e5f5b28  EFLAGS: 00010246
RAX: 000000000a063970 RBX: ffff810631a96130 RCX: 0000000000000000
RDX: ffff810631a9618c RSI: ffff8105ea390180 RDI: ffff810631a96130
RBP: ffff810631a9618c R08: 0000000000000000 R09: ffff810631a9618c
R10: 0000000000000000 R11: 0000000000000046 R12: 0000000000000046
R13: ffff810631a96130 R14: ffff8105ea390180 R15: ffff8105ea390180
FS:  0000000000000000(0000) GS:ffff810115f9e4c0(0063) knlGS:00000000f7f856c0
CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
CR2: ffffffffd8ad2760 CR3: 00000005e91b2000 CR4: 00000000000006a0
Process gpibtsw (pid: 9687, threadinfo ffff81060e5f4000, task ffff81063ca1d0c0)
Stack:  ffffffff88745e78 ffffffff886284e5 ffff81060e5f5b58 ffffffff8863265e
 0000000000000001 ffff810631a9618c ffff810631a9618c 0000000000000046
 0000000000000046 ffff810631a96130 ffff810631a960e0 ffffffff8875babe
Call Trace:
 [<ffffffff886284e5>] :nipalk:nipalk-unversioned0002022+0x75/0x90
 [<ffffffff8863265e>] :nipalk:nipalk-unversioned0002224+0x8e/0xf0
 [<ffffffff8875babe>] :ni488k:ni488k-unversioned0000180+0x11e/0x150
 [<ffffffff885ac416>] :nipalk:nipalk-unversioned0000052+0xc6/0xe0
 [<ffffffff885d2d11>] :nipalk:nipalk-unversioned0001102+0x151/0x1c0
 [<ffffffff885d494d>] :nipalk:nipalk-unversioned0001111+0x34d/0x1140
 [<ffffffff8863192a>] :nipalk:nipalk-unversioned0002198+0x2a/0xa0
 [<ffffffff885ad910>] :nipalk:nipalk-unversioned0000011+0x220/0x240
 [<ffffffff88645b15>] :nipalk:nipalk_exported9+0x185/0x1e0
 [<ffffffff886450c5>] :nipalk:_ZNK22tMemBlockReferenceBase10getPointerEPi+0x25/0x50
 [<ffffffff885c1823>] :nipalk:nipalk-unversioned0000917+0x113/0x200
 [<ffffffff8862829c>] :nipalk:_Z15ioControlHelperPvjS_j+0x2c/0x180
 [<ffffffff88669f0a>] :nipalk:_ZNV14tSyncAtomicU32mmEi+0x1a/0x40
 [<ffffffff8862829c>] :nipalk:_Z15ioControlHelperPvjS_j+0x2c/0x180
 [<ffffffff88628736>] :nipalk:nipalk-unversioned0002020+0x1b6/0x270
 [<ffffffff885851f2>] :nikal:nNIKAL100_ioctl+0x32/0x3a
 [<ffffffff88585224>] :nikal:nNIKAL100_compatIoctl+0x13/0x17
 [<ffffffff800ff5f4>] compat_sys_ioctl+0xc5/0x2b1
 [<ffffffff800614b5>] sysenter_do_call+0x1e/0x76

Code: 4c 8b 24 c5 e0 5b 7b 88 4d 85 e4 75 11 c7 02 84 3b ff ff e9 
RIP  [<ffffffff88756f92>] :ni488k:ni488k-unversioned0000063+0x32/0x1a0
 RSP <ffff81060e5f5b28>
Comment 7 Alex Williamson 2015-09-24 16:52:13 EDT
So we're getting a guest kernel crash caused by a page fault in the proprietary driver for a device that we've never claimed to support for device assignment.  Do we have any sort of relationship with National Instruments to debug why their driver is failing?  Does the customer?

The only shot-in-the-dark guess I can make is that possibly the driver is not expecting to find the device in the PCI topology we have in the virtual machine.  The Q35 machine type for the VM is more similar, but is still tech preview for 7.2 and does not yet allow configuration of the emulated PCIe devices necessary to bring the topology more in line with a modern physical system.

In the meantime, the only thing I can suggest is to create a pci-bridge in the VM and place the assigned device behind the bridge, in case the driver is looking for an upstream device from the VM's perspective. To do that, add a new controller in the domain XML:

    <controller type='pci' index='1' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x0'/>
    </controller>

Then modify the <hostdev> entry for the assigned device so that the VM-defined bus number matches the index of this new controller. For example:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </hostdev>

The <address> within the <source> tags identifies the physical device; do not change it. The second <address> defines the address of the <hostdev> as seen by the guest; change its 'bus' attribute to match the 'index' attribute of the pci-bridge <controller>, as shown above.
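Putting the two fragments together, the relevant portion of the domain XML would look like the sketch below. The host address 0000:03:00.0 and the guest-side addresses are the example values from above, not the customer's actual configuration; adjust them to match the real device:

```xml
<devices>
  <!-- pci-bridge providing guest bus 0x01 -->
  <controller type='pci' index='1' model='pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x0'/>
  </controller>
  <!-- assigned NI GPIB card, placed behind the bridge -->
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <!-- physical device on the host; do not change -->
      <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </source>
    <!-- guest-visible address; bus matches the controller's index -->
    <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
  </hostdev>
</devices>
```

The domain XML can be edited with `virsh edit <domain>`; the guest must be shut down and restarted for the new bus topology to take effect.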

As I said, this is a long shot, but it's really our only chance since we have no visibility into the driver.  If it doesn't work, the only next steps are CLOSED CANTFIX or some debug assistance from National Instruments.
Comment 9 Alex Williamson 2016-01-07 16:12:45 EST
Customer case has been closed, closing.
