Bug 1471667 - RHEL7.4: libvirtError: internal error: unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: x86_64 Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Assigned To: Libvirt Maintainers
QA Contact: Virtualization Bugs
Keywords: Regression, Reopened
Blocks: 1412074
Reported: 2017-07-17 04:23 EDT by Michael Burman
Modified: 2017-07-18 07:23 EDT

Cloned To: 1472286
Last Closed: 2017-07-18 07:21:42 EDT
Type: Bug


Attachments
Logs (3.51 MB, application/x-gzip), 2017-07-17 04:23 EDT, Michael Burman
dom xml (7.58 KB, text/plain), 2017-07-18 02:00 EDT, Michael Burman
Description Michael Burman 2017-07-17 04:23:53 EDT
Created attachment 1299690 [details]
Logs

Description of problem:
libvirtError: internal error: unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging

Can't hot-unplug a vNIC on a running VM. This appears to be a bug on the libvirt side (though not 100% certain).


2017-07-17 10:07:57,155+0300 INFO  (jsonrpc/6) [vdsm.api] START hotunplugNic(params={'nic': {'nicModel': 'pv', 'macAddr': '00:1a:4a:16:91:d2', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filterParameters': [], 
'filter': 'vdsm-no-mac-spoofing', 'specParams': {'inbound': {}, 'outbound': {}}, 'deviceId': '3aa529e9-35b8-4d7b-8d79-328bf27c7fde', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
 'slot': '0x03'}, 'device': 'bridge', 'type': 'interface'}, 'vmId': 'de251773-3006-4ed7-9a11-e74854b8e8fd'}) from=::ffff:10.35.162.63,33952, flow_id=4d6215a4 (api:46)
2017-07-17 10:07:57,157+0300 INFO  (jsonrpc/6) [virt.vm] (vmId='de251773-3006-4ed7-9a11-e74854b8e8fd') Hotunplug NIC xml: <?xml version='1.0' encoding='UTF-8'?>
<interface type="bridge">
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
    <mac address="00:1a:4a:16:91:d2" />
    <model type="virtio" />
    <source bridge="ovirtmgmt" />
    <filterref filter="vdsm-no-mac-spoofing" />
    <link state="up" />
    <bandwidth />
</interface>
 (vm:2850)
2017-07-17 10:07:57,173+0300 ERROR (jsonrpc/6) [virt.vm] (vmId='de251773-3006-4ed7-9a11-e74854b8e8fd') Hotunplug failed (vm:2882)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in hotunplugNic
    self._dom.detachDevice(nicXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 92, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 125, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 586, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1147, in detachDevice
    if ret == -1: raise libvirtError ('virDomainDetachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging
2017-07-17 10:07:57,183+0300 INFO  (jsonrpc/6) [vdsm.api] FINISH hotunplugNic return={'status': {'message': "internal error: unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging", 
'code': 50}} from=::ffff:10.35.162.63,33952, flow_id=4d6215a4 (api:52)
2017-07-17 10:07:57,183+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugNic failed (error 50) in 0.03 seconds (__init__:583)


Version-Release number of selected component (if applicable):
libvirt-client-3.2.0-14.el7.x86_64
3.10.0-693.el7.x86_64
vdsm-4.20.1-200.git4b74487.el7.centos.x86_64
qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64
qemu-kvm-common-ev-2.6.0-28.el7.10.1.x86_64

How reproducible:
100% on some setups and hardware, such as:
HP ProLiant DL170e G6 
Dell Inc. PowerEdge R210 II


Steps to Reproduce:
1. Start a VM with a vNIC on RHV-M
2. Try to hot-unplug the vNIC

Actual results:
Hot-unplug fails with the libvirt error shown above.

Expected results:
The vNIC should be hot-unplugged successfully.
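For anyone reproducing outside of RHV, the same libvirt code path can be driven directly with virsh; the domain name and MAC address below are placeholders, not values from this bug's setup:

```shell
# Hot-unplug a vNIC from a running guest by MAC address
# (domain name "myguest" and the MAC are example values)
virsh detach-interface myguest bridge --mac 00:1a:4a:16:91:d2 --live
```

On an affected domain this fails with the same "Bus 'pci.0' does not support hotplugging" error reported above.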
Comment 4 Pavel Hrdina 2017-07-17 10:36:44 EDT
Hi, could you please provide the domain XML?  It would also be good to test it with qemu-kvm-rhev-2.9.0, which is the one used in RHEL-7.4.  From the packages it seems that the host OS is CentOS with an old QEMU.
Comment 7 Michael Burman 2017-07-18 01:59:55 EDT
(In reply to Pavel Hrdina from comment #4)
> Hi, could you please provide the domain XML?  It would be also good to test
> it with qemu-kvm-rhev-2.9.0 which is the one used in RHEL-7.4.  From the
> packages it seems that the host OS is CentOS with old QEMU.

Hi,

We are actually running the latest RHEL 7.4 -
Linux puma25.scl.lab.tlv.redhat.com 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux

NAME="Red Hat Enterprise Linux Server"
VERSION="7.4 (Maipo)"
ID="rhel"

And our latest version on all QE production servers is qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64 (which comes from the oVirt repo as a dependency of vdsm)
Comment 8 Michael Burman 2017-07-18 02:00 EDT
Created attachment 1300263 [details]
dom xml
Comment 9 Michael Burman 2017-07-18 02:33:19 EDT
Tested with qemu-kvm-rhev-2.9.0-17.el7a.x86_64
qemu-kvm-common-rhev-2.9.0-17.el7a.x86_64

And got the same result.
Comment 10 Pavel Hrdina 2017-07-18 04:12:29 EDT
So I was able to narrow it down to the missing <acpi/> feature in your XML, which means that ACPI hotplug is not available and QEMU will use native PCI hotplug, which is not supported for the host bus (pci.0).  I don't think there is any regression; this never worked.

If it worked with some older version of libvirt and/or QEMU could you please try to provide versions of the combination that worked and also the domain XML.
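For reference, the feature Pavel refers to is declared under the <features> element of libvirt's domain XML; a minimal fragment (surrounding elements elided) looks like:

```xml
<domain type='kvm'>
  <features>
    <acpi/>
  </features>
</domain>
```

Without <acpi/>, QEMU cannot use ACPI-based hotplug and falls back to native PCI hotplug, which the pci.0 root bus does not support.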
Comment 11 Dan Kenigsberg 2017-07-18 04:59:57 EDT
It may be an ovirt-side regression, then. Michael, can you also check whether Engine passes acpiEnable=True on hosts where you don't see this behaviour?
Comment 12 Michael Burman 2017-07-18 06:15:28 EDT
(In reply to Pavel Hrdina from comment #10)
> So I was able to narrow it down to missing <acpi/> feature in your XML which
> means that ACPI hotplug is not available and QEMU will use native PCI hot
> plug which is not supported for host-bus (pci.0 bus).  I don't think that
> there is any regression, this never worked.
> 
> If it worked with some older version of libvirt and/or QEMU could you please
> try to provide versions of the combination that worked and also the domain
> XML.

This always worked for us on all previous qemu-kvm versions, which means the regression is on our oVirt side, as Danken now suspects.

When the engine passes - FINISH, FullListVDSCommand, return: [{acpiEnable=true
I don't see this behaviour and hot-unplug works as expected.

When the engine passes - FINISH, FullListVDSCommand, return: [{acpiEnable=false
I see this behaviour and hot-unplug fails with the libvirt error.

So the regression is on our side.
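For illustration only (this is not vdsm or oVirt code), a check like the following is enough to tell the two cases in comment 12 apart from a dumped domain XML:

```python
import xml.etree.ElementTree as ET

def acpi_enabled(domain_xml):
    """True if the domain XML declares <acpi/> under <features>,
    i.e. ACPI hotplug is available to the guest."""
    root = ET.fromstring(domain_xml)
    return root.find("./features/acpi") is not None

# Minimal stand-ins for the two engine configurations:
print(acpi_enabled("<domain><features><acpi/></features></domain>"))  # True
print(acpi_enabled("<domain><features/></domain>"))                   # False
```

Running this against the attached dom xml (or `virsh dumpxml <domain>`) shows whether the VM was started with ACPI enabled.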
Comment 13 Francesco Romani 2017-07-18 06:23:16 EDT
(In reply to Michael Burman from comment #12)
> So the regression is on our side.

It is.
It is a (of course unwanted) byproduct of commit a877434796eeb5af51368f6acdf8ed7c8bf33906. Working on a fix; feel free to assign the bug to me.
Comment 14 Jaroslav Suchanek 2017-07-18 06:51:52 EDT
Per comment 12 closing this as there is nothing to fix in libvirt.
Comment 15 Michael Burman 2017-07-18 07:06:43 EDT
(In reply to Jaroslav Suchanek from comment #14)
> Per comment 12 closing this as there is nothing to fix in libvirt.

Why closing it??
It is a BUG, but not on libvirt
Comment 18 Dan Kenigsberg 2017-07-18 07:21:02 EDT
Bugzilla does not allow me to move this bug to oVirt or RHV (due to the missing oVirt team field). Would you agree to reopen it on oVirt, Michael?
Comment 19 Michael Burman 2017-07-18 07:21:42 EDT
This report has been cloned to ovirt-engine; see BZ 1472286 to track this issue.

It's a bug on the oVirt side; closing the libvirt report.
Comment 20 Michael Burman 2017-07-18 07:23:00 EDT
(In reply to Dan Kenigsberg from comment #18)
> Bugzilla does not allow me to move this bug to oVirt or RHV (due to missing
> ovirt team field). Would you agree to reopen in on oVirt, Michael?

Yes, I had the same issue; I have cloned it to ovirt-engine.
