Bug 1557330 - [SR-IOV] - Can't start VM with SR-IOV vNIC (depends on libvirt bug 1558655)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.2.0
Hardware: x86_64  OS: Linux
Priority: unspecified  Severity: high
Target Milestone: ovirt-4.2.3
Target Release: ---
Assigned To: Michal Skrivanek
QA Contact: Michael Burman
Keywords: Regression
Depends On: 1556828
Blocks:
Reported: 2018-03-16 08:40 EDT by Michal Skrivanek
Modified: 2018-05-15 13:55 EDT
CC List: 14 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1556828
Environment:
Last Closed: 2018-05-15 13:54:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1489 None None None 2018-05-15 13:55 EDT

Description Michal Skrivanek 2018-03-16 08:40:17 EDT
cloning back to RHV for possible dependency update once fixed in libvirt

+++ This bug was initially created as a clone of Bug #1556828 +++

Description of problem:
[SR-IOV] - Can't start VM with SR-IOV vNIC

2018-03-15 11:25:40,890+0200 ERROR (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') The vm start process failed (vm:940)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 869, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2829, in _run
    dom = self._connection.defineXML(domxml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3676, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: non unique alias detected: ua-04c2decd-4e33-4023-84de-a2205c777af7
2018-03-15 11:25:40,891+0200 INFO  (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') Changed state to Down: XML error: non unique alias detected: ua-04c2decd-4e33-4023-84de-a2205c777af7 (code=1) (vm:1677)

Version-Release number of selected component (if applicable):
4.2.2.2-0.1.el7
vdsm-4.20.20-1.el7ev.x86_64
libvirt-3.9.0-13.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Try to start VM with sr-iov vNIC

Actual results:
Failed to run

Expected results:
Should work

--- Additional comment from Michael Burman on 2018-03-15 11:30:14 CET ---

The issue also reproduces with the newer libvirt build, libvirt-3.9.0-14.el7.x86_64

2018-03-15 12:27:16,562+0200 ERROR (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') The vm start process failed (vm:940)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 869, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2829, in _run
    dom = self._connection.defineXML(domxml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3676, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: non unique alias detected: ua-04c2decd-4e33-4023-84de-a2205c777af7
2018-03-15 12:27:16,566+0200 INFO  (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') Changed state to Down: XML error: non unique alias detected: ua-04c2decd-4e33-4023-84de-a2205c777af7 (code=1) (vm:1677)

--- Additional comment from Michal Skrivanek on 2018-03-16 09:22:59 CET ---

can you try with a build from https://bugzilla.redhat.com/show_bug.cgi?id=1554962#c3 ?

--- Additional comment from Michael Burman on 2018-03-16 09:51:41 CET ---

(In reply to Michal Skrivanek from comment #2)
> can you try with a build from
> https://bugzilla.redhat.com/show_bug.cgi?id=1554962#c3 ?

Hi Michal,
OK, I did what you asked and tested with https://bugzilla.redhat.com/show_bug.cgi?id=1554962#c3, and I get the same error and result -

 2018-03-16 10:48:34,210+0200 ERROR (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') The vm start process failed (vm:940)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 869, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2829, in _run
    dom = self._connection.defineXML(domxml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3676, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: non unique alias detected: ua-92597e9b-5bc2-41a9-ad1d-2d603ecaedfa
2018-03-16 10:48:34,214+0200 INFO  (vm/f9bd0e85) [virt.vm] (vmId='f9bd0e85-54e5-46ec-9ae7-5a61861120a9') Changed state to Down: XML error: non unique alias detected: ua-92597e9b-5bc2-41a9-ad1d-2d603ecaedfa (code=1) (vm:1677)

[root@puma22 ~]# rpm -qa | grep libvirt
libvirt-daemon-driver-interface-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-logical-3.9.0-15.el7ua.x86_64
libvirt-daemon-kvm-3.9.0-15.el7ua.x86_64
libvirt-python-3.9.0-1.el7.x86_64
libvirt-daemon-3.9.0-15.el7ua.x86_64
libvirt-daemon-config-nwfilter-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-scsi-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-nwfilter-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-iscsi-3.9.0-15.el7ua.x86_64
libvirt-client-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-network-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-secret-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-lxc-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-rbd-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-3.9.0-15.el7ua.x86_64
libvirt-lock-sanlock-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-core-3.9.0-15.el7ua.x86_64
libvirt-daemon-config-network-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-mpath-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-qemu-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-disk-3.9.0-15.el7ua.x86_64
libvirt-3.9.0-15.el7ua.x86_64
libvirt-libs-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-nodedev-3.9.0-15.el7ua.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-15.el7ua.x86_64

from - https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=15543641

--- Additional comment from Michal Privoznik on 2018-03-16 12:40:56 CET ---

That won't help. This is a genuine libvirt bug. The problem is that when libvirt parses a domain definition containing <interface type='hostdev'/>, it creates a second entry in the internal domain representation, just as if it were a plain <hostdev/>. So later, when user alias validation runs, it finds two devices with the same alias and throws an error. Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2018-March/msg00935.html
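For illustration, a minimal standalone sketch of the failing call (not vdsm code; the define_vm helper and the qemu:///system URI are assumptions): vdsm hands the generated domain XML to virDomainDefineXML() through the libvirt-python binding, and on affected builds the duplicate-alias validation rejects it.

import libvirt

def define_vm(domxml):
    # domxml: full domain XML string containing an <interface type='hostdev'>
    # element that carries a user alias (<alias name='ua-...'/>), as in the
    # XML pasted later in this bug.
    conn = libvirt.open('qemu:///system')
    try:
        return conn.defineXML(domxml)
    except libvirt.libvirtError as e:
        # On affected libvirt builds (before the fix) this raises:
        #   XML error: non unique alias detected: ua-...
        # because the hostdev-type interface is also counted as a plain
        # <hostdev> device, so alias validation sees the same alias twice.
        print(e)
        raise
    finally:
        conn.close()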
Comment 2 Dan Kenigsberg 2018-03-27 06:24:35 EDT
Underlying libvirt bug fixed in libvirt-3.9.0-14.el7_5.2
Comment 3 Michael Burman 2018-03-27 09:42:39 EDT
Verified on - vdsm-4.20.23-1.el7ev.x86_64 and libvirt-daemon-3.9.0-14.el7_5.2.x86_64


libvirt-libs-3.9.0-14.el7_5.2.x86_64
libvirt-client-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.2.x86_64
libvirt-python-3.9.0-1.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-kvm-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-network-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.2.x86_64
libvirt-lock-sanlock-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-interface-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.2.x86_64
libvirt-daemon-driver-secret-3.9.0-14.el7_5.2.x86_64

4.2.2.5-0.1.el7

kernel-3.10.0-862.el7.x86_64
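Domain XML from the verified run, showing the two SR-IOV hostdev vNICs, each with its own distinct user alias, which now define and start successfully: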

 <interface type='hostdev'>
      <mac address='00:1a:4a:16:20:ba'/>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
      </source>
      <alias name='ua-72590144-2e88-40ef-9865-c349a3354958'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='hostdev'>
      <mac address='00:1a:4a:16:20:bc'/>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x1'/>
      </source>
      <alias name='ua-bec4005c-8f59-46ce-b103-f3df8cbd550f'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
Comment 4 RHV Bugzilla Automation and Verification Bot 2018-04-13 05:53:47 EDT
INFO: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[No external trackers attached]

For more info please contact: rhv-devops@redhat.com
Comment 7 errata-xmlrpc 2018-05-15 13:54:02 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489
