Bug 1369795 - QMP should report more specific information when hotplugging more than 32 VFs to a guest
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Assignee: Eric Auger
QA Contact: Yanan Fu
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2016-08-24 12:24 UTC by Yanan Fu
Modified: 2017-08-02 03:29 UTC (History)
7 users

Fixed In Version: qemu-kvm-rhev-2.8.0-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 23:34:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:2392 normal SHIPPED_LIVE Important: qemu-kvm-rhev security, bug fix, and enhancement update 2017-08-01 20:04:36 UTC

Description Yanan Fu 2016-08-24 12:24:03 UTC
Description of problem:
On RHEL 7, the limit on the number of VFs that can be hotplugged to a guest is 32.

When I hotplug the 33rd VF to the guest with HMP, it reports:
(qemu) vfio: Maximum supported vfio devices (32) already attached

But when I use QMP to do the same thing, it only reports:
{"error": {"class": "GenericError", "desc": "Device initialization failed"}}
I think this should be improved.


Version-Release number of selected component (if applicable):
qemu: qemu-kvm-rhev-2.6.0-22.el7.x86_64
kernel: kernel-3.10.0-495.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up the test environment for SR-IOV testing.
  Host BIOS: SR-IOV enabled, VT-d enabled
  Host kernel: intel_iommu=on / amd_iommu=on
  NIC has SR-IOV capability

2. Generate more than 32 VFs and bind them to vfio-pci.
3. Boot a guest with "-chardev socket,id=qmp_monitor,path=/var/tmp/qmpmonitor,server,nowait -mon chardev=qmp_monitor,mode=control"

Connect to the QMP socket:
# nc -U /var/tmp/qmpmonitor
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 6, "major": 2}, "package": " (qemu-kvm-rhev-2.6.0-22.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
... 

{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:05.2","id":"vf-05.2","bus":"bridge1","addr":"0x14"}} --->repeat, add 32 vfs

...
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.1","id":"vf-03.1","bus":"bridge2","addr":"0x2"}} --->the 33th
{"error": {"class": "GenericError", "desc": "Device initialization failed"}}


Actual results:
QMP reports only "Device initialization failed" when adding more than 32 VFs.

Expected results:
QMP should report more specific information, such as the "Maximum supported vfio devices (32) already attached" message that HMP prints.

Additional info:

Comment 2 Eric Auger 2016-09-20 21:14:50 UTC
This needs to wait for the migration of vfio-pci to realize to land upstream.
See [PATCH v2 0/12] Convert VFIO-PCI to realize. Once that lands, fixing this BZ becomes straightforward.
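
The point of the realize conversion is that device initialization failures are handed back to the caller in an error object, which QMP can then forward verbatim, instead of being printed directly to the monitor. A rough Python analogy of that pattern (the actual change is in QEMU's C code; the names here are illustrative, not real QEMU APIs):

```python
MAX_VFIO_DEVICES = 32  # the limit discussed in this bug


def attach_vf_old(attached_count):
    """Old style: print the specific message, return only a bare failure code.

    The QMP layer sees just the -1 and can only report the generic
    "Device initialization failed" error.
    """
    if attached_count >= MAX_VFIO_DEVICES:
        print("vfio: Maximum supported vfio devices (32) already attached")
        return -1
    return 0


def attach_vf_new(attached_count, errp):
    """Realize style: place the specific message into an error object.

    The caller owns errp and forwards its contents to the QMP client,
    so the client sees the real reason for the failure.
    """
    if attached_count >= MAX_VFIO_DEVICES:
        errp.append("Maximum supported vfio devices (32) already attached")
        return False
    return True
```

In QEMU itself this corresponds to filling an Error ** out-parameter in the device's realize callback rather than printing from the legacy init path.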

Comment 3 Miroslav Rezanina 2017-02-20 10:06:12 UTC
Fix included in qemu-kvm-rhev-2.8.0-5.el7

Comment 5 Yanan Fu 2017-03-13 12:30:40 UTC
Test steps:
1. Set up the test environment for SR-IOV testing.
2. Generate more than 32 VFs and bind them to vfio-pci.
3. Boot a guest:
/usr/libexec/qemu-kvm -chardev socket,id=qmp_monitor,path=/var/tmp/qmpmonitor,server,nowait -mon chardev=qmp_monitor,mode=control -monitor stdio -device pci-bridge,id=bridge1,chassis_nr=1 -device pci-bridge,id=bridge2,chassis_nr=2

4. Connect to the QMP socket:
# nc -U /var/tmp/qmpmonitor
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 6, "major": 2}, "package": " (qemu-kvm-rhev-2.6.0-22.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
... 
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:02.7","id":"vf-02.7","bus":"bridge1","addr":"0x0"}}
...
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:02.6","id":"vf-02.6","bus":"bridge1","addr":"0x1F"}} -->the 32th
...

-------------------Reproduce--------------------
Test version:
kernel: kernel-3.10.0-595.el7.x86_64
qemu: qemu-kvm-rhev-2.8.0-4.el7

After step 4 above, try to add the 33rd VF with QMP.
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.0","id":"vf-03.0","bus":"bridge2","addr":"0x0"}} ---> the 33rd
Ncat: Connection reset by peer.

And in HMP, I get a segmentation fault:
(qemu) vfio: Maximum supported vfio devices (32) already attached

Segmentation fault

dmesg shows the error:
[ 5582.909145] qemu-kvm[45299]: segfault at 4 ip 00007f9a4ccc8710 sp 00007fff4706c8a8 error 4 in libc-2.17.so[7f9a4cb7b000+1b8000]

I think this result can be considered the same issue as the one in the Description.

-------------------Verification-----------------
Test version:
kernel: kernel-3.10.0-595.el7.x86_64
qemu: qemu-kvm-rhev-2.8.0-5.el7


Try to add the 33rd VF with QMP after step 4 above:

{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"04:03.0","id":"vf-03.0","bus":"bridge2","addr":"0x0"}} ---> the 33rd
{"error": {"class": "GenericError", "desc": "Maximum supported vfio devices (32) already attached"}}

No errors in dmesg.

According to the test results above, I think this BZ can be moved to VERIFIED.

Comment 7 errata-xmlrpc 2017-08-01 23:34:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392

