Bug 1369795

Summary: QMP should report a more specific error message when hotplugging more than 32 VFs into a guest
Product: Red Hat Enterprise Linux 7
Component: qemu-kvm-rhev
Version: 7.3
Status: CLOSED ERRATA
Severity: low
Priority: low
Reporter: Yanan Fu <yfu>
Assignee: Eric Auger <eric.auger>
QA Contact: Yanan Fu <yfu>
CC: alex.williamson, chayang, jinzhao, juzhang, knoel, mrezanin, virt-maint
Target Milestone: rc
Hardware: x86_64
OS: Unspecified
Fixed In Version: qemu-kvm-rhev-2.8.0-5.el7
Type: Bug
Last Closed: 2017-08-01 23:34:44 UTC

Description Yanan Fu 2016-08-24 12:24:03 UTC
Description of problem:
On RHEL 7, at most 32 VFIO devices (VFs) can be hotplugged into a guest.

When I hotplug the 33rd VF into the guest via HMP, it prints:
(qemu)vfio: Maximum supported vfio devices (32) already attached
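
For reference, the HMP hotplug command that triggers this message has the following form (a sketch; the host BDF, id, bus, and slot address are illustrative, taken from the QMP examples below):

(qemu) device_add vfio-pci,host=04:03.1,id=vf-03.1,bus=bridge2,addr=0x2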

But when I do the same thing via QMP, it only reports:
{"error": {"class": "GenericError", "desc": "Device initialization failed"}}
I think this should be improved.


Version-Release number of selected component (if applicable):
qemu: qemu-kvm-rhev-2.6.0-22.el7.x86_64
kernel: kernel-3.10.0-495.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up the test environment for SR-IOV testing:
  Host BIOS: SR-IOV enabled, VT-d enabled
  Host kernel: intel_iommu=on / amd_iommu=on
  NIC with SR-IOV capability

2. Generate more than 32 VFs and bind them to vfio-pci (a sketch follows these steps).
3. boot one guest with "-chardev socket,id=qmp_monitor,path=/var/tmp/qmpmonitor,server,nowait -mon chardev=qmp_monitor,mode=control"
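
A minimal sketch of step 2, assuming a PF at 0000:04:00.0 that supports at least 33 VFs and a kernel exposing the driver_override interface; the BDF and VF count are illustrative and need to be adjusted for the actual NIC:

# Load vfio-pci and create the VFs on the PF (BDF is illustrative)
modprobe vfio-pci
echo 33 > /sys/bus/pci/devices/0000:04:00.0/sriov_numvfs

# Rebind every VF from its default host driver to vfio-pci
for vf in /sys/bus/pci/devices/0000:04:00.0/virtfn*; do
    bdf=$(basename "$(readlink "$vf")")
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    echo "$bdf" > /sys/bus/pci/drivers_probe
done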

Connect to the QMP socket:
# nc -U /var/tmp/qmpmonitor
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 6, "major": 2}, "package": " (qemu-kvm-rhev-2.6.0-22.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
... 

{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:05.2","id":"vf-05.2","bus":"bridge1","addr":"0x14"}} --->repeat, add 32 vfs

...
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.1","id":"vf-03.1","bus":"bridge2","addr":"0x2"}} --->the 33th
{"error": {"class": "GenericError", "desc": "Device initialization failed"}}


Actual results:
QMP reports only "Device initialization failed" when adding more than 32 VFs.

Expected results:
QMP should report more specific information, such as the "Maximum supported vfio devices (32) already attached" message that HMP prints.

Additional info:

Comment 2 Eric Auger 2016-09-20 21:14:50 UTC
This needs to wait for the conversion of vfio-pci to the QOM realize interface to land upstream; see [PATCH v2 0/12] Convert VFIO-PCI to realize. With realize, setup failures are propagated through an Error ** parameter and become the error description in the QMP reply, instead of being printed with error_report(), which ends up on the HMP monitor or stderr where a QMP client never sees it. Once that series lands, fixing this BZ becomes straightforward.

Comment 3 Miroslav Rezanina 2017-02-20 10:06:12 UTC
Fix included in qemu-kvm-rhev-2.8.0-5.el7

Comment 5 Yanan Fu 2017-03-13 12:30:40 UTC
Test steps:
1. Set up the test environment for SR-IOV testing (as in the Description).
2. Generate more than 32 VFs and bind them to vfio-pci.
3. Boot a guest:
/usr/libexec/qemu-kvm -chardev socket,id=qmp_monitor,path=/var/tmp/qmpmonitor,server,nowait -mon chardev=qmp_monitor,mode=control -monitor stdio -device pci-bridge,id=bridge1,chassis_nr=1 -device pci-bridge,id=bridge2,chassis_nr=2

4. Connect to the QMP socket:
# nc -U /var/tmp/qmpmonitor
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 6, "major": 2}, "package": " (qemu-kvm-rhev-2.6.0-22.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
... 
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:02.7","id":"vf-02.7","bus":"bridge1","addr":"0x0"}}
...
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:02.6","id":"vf-02.6","bus":"bridge1","addr":"0x1F"}} -->the 32th
...

-------------------Reproduce--------------------
Test version:
kernel: kernel-3.10.0-595.el7.x86_64
qemu: qemu-kvm-rhev-2.8.0-4.el7

After step 4 above, try to add the 33rd VF via QMP.
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.0","id":"vf-03.0","bus":"bridge2","addr":"0x0"}} --->the 33th
Ncat: Connection reset by peer.

And in HMP, QEMU crashes with a segmentation fault:
(qemu) vfio: Maximum supported vfio devices (32) already attached

Segmentation fault

dmesg shows:
[ 5582.909145] qemu-kvm[45299]: segfault at 4 ip 00007f9a4ccc8710 sp 00007fff4706c8a8 error 4 in libc-2.17.so[7f9a4cb7b000+1b8000]

I think this result can be considered the same issue as the one in the Description (with the additional problem that QEMU segfaults).

-------------------Verification-----------------
Test version:
kernel: kernel-3.10.0-595.el7.x86_64
qemu: qemu-kvm-rhev-2.8.0-5.el7


Try to add the 33rd VF via QMP after step 4 above:

{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.0","id":"vf-03.0","bus":"bridge2","addr":"0x0"}} --->the 33th
{"error": {"class": "GenericError", "desc": "Maximum supported vfio devices (32) already attached"}}

No errors in dmesg.

According to the test results above, I think this BZ can be moved to VERIFIED.

Comment 7 errata-xmlrpc 2017-08-01 23:34:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392
