Bug 1175113 - pci-bridge should behave the same when adding devices from cli or at hotplug time
Summary: pci-bridge should behave the same when adding devices from cli or at hotplug time
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Target Milestone: rc
Target Release: ---
Assignee: Marcel Apfelbaum
QA Contact: yduan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-17 08:04 UTC by Sibiao Luo
Modified: 2017-08-02 03:22 UTC (History)
19 users (show)

Fixed In Version: qemu-kvm-rhev-2.9.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 23:27:12 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
dmesg_info (35.78 KB, text/plain)
2017-04-24 07:18 UTC, yduan
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:2392 0 normal SHIPPED_LIVE Important: qemu-kvm-rhev security, bug fix, and enhancement update 2017-08-01 20:04:36 UTC

Description Sibiao Luo 2014-12-17 08:04:06 UTC
Description of problem:
Launch a KVM guest with a device attached to the pci-bridge at slot 0; qemu quits with the warning "Unsupported PCI slot 0 for standard hotplug controller. Valid slots are between 1 and 31."
However, hot-plugging a device into slot 0 of the same pci-bridge succeeds.

Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-217.el7.x86_64
qemu-kvm-rhev-2.1.2-17.el7.x86_64
guest info:
rhel6.6-z, 2.6.32-504.6.1.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Launch a KVM guest with a device attached to the pci-bridge at slot 0.
e.g:/usr/libexec/qemu-kvm -machine type=pc,dump-guest-core=off -S -cpu SandyBridge -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -device pci-bridge,bus=pci.0,id=bridge1,chassis_nr=1,addr=0x3...-drive file=/home/my-data-disk.raw,if=none,id=drive0,format=raw,cache=none,aio=native -device virtio-blk-pci,bus=bridge1,addr=0x0,drive=drive0,id=disk0

2. Launch a KVM guest with a pci-bridge.
e.g:/usr/libexec/qemu-kvm -machine type=pc,dump-guest-core=off -S -cpu SandyBridge -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -device pci-bridge,bus=pci.0,id=bridge1,chassis_nr=1,addr=0x3

3. Hot-plug the device into slot 0 of the pci-bridge:
{"execute": "__com.redhat_drive_add", "arguments": {"id":"drive0", "file": "/home/my-data-disk.raw", "format": "raw" }}
{"return": {}}
{"execute": "device_add", "arguments": {"bus": "bridge1", "driver": "virtio-blk-pci", "drive": "drive0", "id": "disk0", "addr": "0x0" }}
{"return": {}}
{"execute": "query-block"}
{"return": [{..., {"io-status": "ok", "device": "ide1-cd0", "locked": false, "removable": true, "tray_open": false, "type": "unknown"}, {"device": "floppy0", "locked": false, "removable": true, "tray_open": false, "type": "unknown"}, {"device": "sd0", "locked": false, "removable": true, "tray_open": false, "type": "unknown"},...}]}

4. Check the guest and the qemu monitor:
# ls -lh /dev/vd*
# fdisk -l

Actual results:
After step 1, qemu quits with the warning "Unsupported PCI slot 0 for standard hotplug controller. Valid slots are between 1 and 31.":
QEMU 2.1.2 monitor - type 'help' for more information
(qemu) qemu-kvm: -device virtio-blk-pci,bus=bridge1,addr=0x0,drive=drive0,id=disk0: Unsupported PCI slot 0 for standard hotplug controller. Valid slots are between 1 and 31.
qemu-kvm: -device virtio-blk-pci,bus=bridge1,addr=0x0,drive=drive0,id=disk0: Device 'virtio-blk-pci' could not be initialized

After step 3, the device hot-plugs successfully.
After step 4, the disk is detected correctly in the guest.

Expected results:
Hot-plugging a device into slot 0 of the pci-bridge should fail, just as it does on the command line.

Additional info:

Comment 1 Sibiao Luo 2014-12-17 08:06:04 UTC
(In reply to Sibiao Luo from comment #0)
> 3.hot-plug the device to the pci-bridge specified to slot 0
> {"execute": "__com.redhat_drive_add", "arguments": {"id":"drive0", "file":
> "/home/my-data-disk.raw", "format": "raw" }}
> {"return": {}}
> {"execute": "device_add", "arguments": {"bus": "bridge1", "driver":
> "virtio-blk-pci", "drive": "drive0", "id": "disk0", "addr": "0x0" }}
> {"return": {}}
> {"execute": "query-block"}
> {"return": [{..., {"io-status": "ok", "device": "ide1-cd0", "locked": false,
> "removable": true, "tray_open": false, "type": "unknown"}, {"device":
> "floppy0", "locked": false, "removable": true, "tray_open": false, "type":
> "unknown"}, {"device": "sd0", "locked": false, "removable": true,
> "tray_open": false, "type": "unknown"},...}]}
{"return": [{...,{"io-status": "ok", "device": "drive0", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/home/my-data-disk.raw", "format": "raw", "actual-size": 10737422336, "dirty-flag": false}, "iops_wr": 0, "ro": false, "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "file": "/home/my-data-disk.raw", "encryption_key_missing": false}, "type": "unknown"}]}

Comment 2 Amos Kong 2014-12-17 14:36:15 UTC
When the same device is added to the same pci-bridge in these two ways, different hotplug handlers are called.

1) add device in qemu commandline:
    
   shpc_device_hotplug_cb()

2) hotplug by device_add in monitor:

   piix4_device_plug_cb()

Comment 3 Amos Kong 2014-12-17 15:25:25 UTC
We use the standard hot-plug controller (SHPC) to add command-line PCI devices; slot 0 is reserved by the SHPC.

After the guest starts up, ACPI handles PCI hotplug, and slot 0 is available for ACPI PCI hotplug.

So it's not a BUG.

Comment 4 Marcel Apfelbaum 2014-12-24 11:51:40 UTC
This is not a BUG, but the bridge should behave the same for hotplug and the command line.

Comment 8 Marcel Apfelbaum 2015-06-23 14:31:35 UTC
We will have a look for 7.3.

Comment 11 Laine Stump 2016-05-31 18:29:16 UTC
(In reply to Marcel Apfelbaum from comment #4)
> This is not a BUG, but the bridge should behave the same for hotplug/command
> line.

Are you saying that pci-bridge should allow plugging devices into slot 0 in both cases? If so, then isn't the fact that it doesn't allow it a bug? Or are you saying that use of slot 0 on a pci-bridge should never be allowed? Either way, your two sentences seem to contradict each other.

Note that libvirt only allows use of slots 1-31 on a pci-bridge. This may be slightly wasteful of resources, but consistency is more important (and I must confess that after seeing that a device couldn't be connected to slot 0 on the command line, it never even occurred to me that it might work with a device_add command).

Comment 12 Marcel Apfelbaum 2016-06-01 12:00:38 UTC
(In reply to Laine Stump from comment #11)
> (In reply to Marcel Apfelbaum from comment #4)
> > This is not a BUG, but the bridge should behave the same for hotplug/command
> > line.
> 
> Are you saying that pci-bridge should allow plugging devices into slot 0 in
> both cases?

No.

> If so, then isn't the fact that it doesn't allow it a bug?

No, slot 0 is in use during machine boot by the SHPC (hot-plug controller)

> Or
> are you saying that use of slot 0 on a pci-bridge should never be allowed?

Yes.

> Either way, your two sentences seem to contradict each other.
> 
I am sorry if I wasn't clear.

> Note that libvirt only allows uses of slots 1 - 31 on a pci-bridge. This may
> be slightly wasteful of resources, but consistency is more important (and I
> must confess that after experiencing that a device couldn't be connected to
> slot 0 on the commandline, it never even occurred to me that it might work
> with an device_add command.)

Once the machine is booted, the guest firmware/OS can choose what kind
of PCI hotplug it will use:
 - if ACPI hotplug takes over, all the available slots are "fair play".
 - if the guest OS/firmware uses the SHPC controller, slot 0 is occupied.


I agree that consistency is more important. Using only slots 1-31
is an acceptable solution.

Thanks,
Marcel

Comment 14 Marcel Apfelbaum 2016-12-05 11:18:00 UTC
I posted a patch upstream:
   https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg00377.html
that solves the problem by disabling the SHPC component for newer
machine types.
Please see the upstream discussion that follows, on the question of
whether the SHPC component is used by other architectures besides x86.
The answer seems to be that the SHPC is not needed by default; I will
try to push this upstream.

Thanks,
Marcel
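For illustration, the approach in comment 14 can be sketched at the command line. This is a hypothetical invocation, not taken from this bug: it assumes the per-bridge `shpc` device property discussed upstream, and the image path and IDs are placeholders.

```shell
# Sketch (assumption, not from this BZ): with the shpc property from the
# upstream patch, SHPC can be disabled per bridge, so slot 0 is no longer
# reserved and a device can be cold-plugged at addr=0x0.
/usr/libexec/qemu-kvm \
    -machine type=pc \
    -m 4096 -smp 4 \
    -device pci-bridge,bus=pci.0,id=bridge1,chassis_nr=1,addr=0x3,shpc=off \
    -drive file=/home/my-data-disk.raw,if=none,id=drive0,format=raw \
    -device virtio-blk-pci,bus=bridge1,addr=0x0,drive=drive0,id=disk0
```

With `shpc=off`, ACPI hotplug is the only mechanism in play, which removes the command-line/hotplug inconsistency this bug describes.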

Comment 15 Laine Stump 2016-12-06 17:54:47 UTC
I should add that (as discussed offline) even once qemu turns off shpc by default, libvirt will continue to forbid use of slot 0 on pci-bridge until it's able to query the default value of either the shpc option, or the range of usable slots for each PCI controller device, for individual machinetypes. At that point libvirt will begin taking advantage of the extra slot when appropriate.

(The alternative would have been to expose yet another option to libvirt config, which I prefer to avoid since it usually just adds to user confusion)

Comment 16 Amnon Ilan 2017-01-12 16:20:17 UTC
V2 was posted upstream and is expected to be merged by MST soon.

Comment 17 yduan 2017-04-24 07:16:46 UTC
  When verifying this bug with the latest qemu-kvm-rhev, hot-plugging a virtio-blk device into pci-bridge slot 0 failed, with the error "[  127.974047] virtio-pci: probe of 0000:01:00.0 failed with error -12" in the guest dmesg.
  The full dmesg info is attached.

Version-Release number of selected component (if applicable):
Host:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-648.el7.x86_64
qemu-kvm-rhev-2.9.0-1.el7.x86_64
Guest:
# uname -r
3.10.0-648.el7.x86_64

Hi Marcel,

  Do we need to open a new bug?
  Thanks in advance!

yduan

Comment 18 yduan 2017-04-24 07:18:10 UTC
Created attachment 1273522 [details]
dmesg_info

Comment 20 Marcel Apfelbaum 2017-05-08 09:15:26 UTC
(In reply to yduan from comment #18)
> Created attachment 1273522 [details]
> dmesg_info

The issue seen here is actually:
 - https://bugzilla.redhat.com/show_bug.cgi?id=1434706
No need for a new BZ.

To check whether slot 0 is hot-pluggable, add a bridge with a device on another slot; then it should work.
For more details please see the mentioned BZ.

Thanks,
Marcel
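The check Marcel describes in comment 20 can be sketched as follows. This is an illustrative sequence, not the exact verification commands from this bug; the image paths and IDs are placeholders.

```shell
# Sketch of the slot-0 check from comment 20 (paths/IDs are placeholders).
# 1) Start the guest with the bridge already populated on a slot other
#    than 0, so the bridge is initialized by the guest:
/usr/libexec/qemu-kvm -m 4096 \
    -device pci-bridge,bus=pci.0,id=bridge1,chassis_nr=1,addr=0x3 \
    -drive file=/home/disk1.raw,if=none,id=drive1,format=raw \
    -device virtio-blk-pci,bus=bridge1,addr=0x1,drive=drive1,id=disk1 \
    -qmp stdio
# 2) After the guest boots, hot-plug into slot 0 via QMP:
#    {"execute": "__com.redhat_drive_add", "arguments":
#      {"id": "drive0", "file": "/home/disk0.raw", "format": "raw"}}
#    {"execute": "device_add", "arguments":
#      {"bus": "bridge1", "driver": "virtio-blk-pci",
#       "drive": "drive0", "id": "disk0", "addr": "0x0"}}
```

Per comment 20, hot-plug into slot 0 is expected to succeed in this configuration; see BZ 1434706 for why the empty-bridge case fails.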

Comment 21 yduan 2017-05-08 12:30:14 UTC
Reproduced with qemu-kvm-rhev-2.8.0-6.el7.x86_64.

Verified with qemu-kvm-rhev-2.9.0-1.el7.x86_64 according to Comment 20.

Comment 23 errata-xmlrpc 2017-08-01 23:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392
