Bug 1469235

Summary: [Blocked on platform bug 1532183] Hotplug failed because libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk7' could not be initialized
Product: [oVirt] vdsm
Component: General
Version: 4.19.20
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Reporter: Raz Tamir <ratamir>
Assignee: Daniel Erez <derez>
QA Contact: Avihai <aefrat>
Docs Contact:
CC: aefrat, alistair.ross, amureini, buettner, bugs, derez, ebenahar, eblake, ehabkost, gveitmic, jbryant, jsuchane, justin.crook, kwolf, libvirt-maint, lveyde, mayur.indalkar, mkalinin, nsoffer, pkrempa, ratamir, rtamir, tnisan
Target Milestone: ovirt-4.2.2
Target Release: 4.20.19
Keywords: Automation
Flags: rule-engine: ovirt-4.2+
Whiteboard:
Fixed In Version: vdsm v4.20.19
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1526197 (view as bug list)
Environment:
Last Closed: 2018-03-29 10:58:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1532183, 1540872
Bug Blocks: 1526197
Attachments:
Description             Flags
engine and vdsm logs    none
qemu                    none

Description Raz Tamir 2017-07-10 17:00:45 UTC
Description of problem:
On a VM with 4 disks (A, B, C, D), where A is the bootable disk with the OS: after starting the VM, if we hot-unplug the 3 disks in the order B, C, D and then try to hotplug disk C back, the operation fails with:
VDSM host_mixed_2 command HotPlugDiskVDS failed: Internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized

engine.log:

2017-07-10 19:47:12,806+03 INFO  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] Running command: HotPlugDiskToVmCommand internal: false. Entities affected :  ID: 848657b3-1dc1-4c12-873c-b819e3415840 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2017-07-10 19:47:12,819+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] START, HotPlugDiskVDSCommand(HostName = host_mixed_2, HotPlugDiskVDSParameters:{runAsync='true', hostId='ce31742f-1f79-4d5a-84c8-103e194029e2', vmId='848657b3-1dc1-4c12-873c-b819e3415840', diskId='573314a3-45d9-49a2-80eb-9d62c8bbdf2c', addressMap='null'}), log id: 69315296
2017-07-10 19:47:13,858+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler4) [25f64cfb] START, FullListVDSCommand(HostName = host_mixed_2, FullListVDSCommandParameters:{runAsync='true', hostId='ce31742f-1f79-4d5a-84c8-103e194029e2', vmIds='[848657b3-1dc1-4c12-873c-b819e3415840]'}), log id: 6bc6800d
2017-07-10 19:47:14,205+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler4) [25f64cfb] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, vmId=848657b3-1dc1-4c12-873c-b819e3415840, guestDiskMapping={bf334992-dd70-48dd-8={name=/dev/vdb}, b19764da-eb8f-4c41-a={name=/dev/vda}, 573314a3-45d9-49a2-8={name=/dev/vdc}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}, 0QEMU_QEMU_HARDDISK_1ed10d71-7cc5-42aa-b={name=/dev/sda}}, transparentHugePages=true, timeOffset=0, cpuType=Conroe, smp=1, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@2a124cc2, smartcardEnable=false, custom={device_1a6f0061-74c1-4c9d-a7d8-f6500b6cc79b=VmDevice:{id='VmDeviceId:{deviceId='1a6f0061-74c1-4c9d-a7d8-f6500b6cc79b', vmId='848657b3-1dc1-4c12-873c-b819e3415840'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_1a6f0061-74c1-4c9d-a7d8-f6500b6cc79bdevice_65c98fdd-f9be-48ea-92af-3658a0938762=VmDevice:{id='VmDeviceId:{deviceId='65c98fdd-f9be-48ea-92af-3658a0938762', vmId='848657b3-1dc1-4c12-873c-b819e3415840'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_1a6f0061-74c1-4c9d-a7d8-f6500b6cc79bdevice_65c98fdd-f9be-48ea-92af-3658a0938762device_6bcf3112-5ed9-4d38-acc4-1992ecbfc49adevice_03d1e32c-cd35-4887-9c5c-8e6196e3d65c=VmDevice:{id='VmDeviceId:{deviceId='03d1e32c-cd35-4887-9c5c-8e6196e3d65c', vmId='848657b3-1dc1-4c12-873c-b819e3415840'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_1a6f0061-74c1-4c9d-a7d8-f6500b6cc79bdevice_65c98fdd-f9be-48ea-92af-3658a0938762device_6bcf3112-5ed9-4d38-acc4-1992ecbfc49a=VmDevice:{id='VmDeviceId:{deviceId='6bcf3112-5ed9-4d38-acc4-1992ecbfc49a', vmId='848657b3-1dc1-4c12-873c-b819e3415840'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=1024, smpCoresPerSocket=1, vmName=vm_iscsi_1_iscsi_1018421468, nice=0, status=Up, maxMemSize=4096, bootMenuEnable=false, pid=32744, smpThreadsPerCore=1, memGuaranteedSize=1024, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@12735a0c, display=qxl, maxVCpus=16, clientIp=10.35.4.157, statusTime=6031012890, maxMemSlots=16}], log id: 6bc6800d
2017-07-10 19:47:14,224+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler4) [25f64cfb] Received a spice Device without an address when processing VM 848657b3-1dc1-4c12-873c-b819e3415840 devices, skipping device: {device=spice, specParams={fileTransferEnable=true, displayNetwork=ovirtmgmt, copyPasteEnable=true, displayIp=10.35.82.68, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir}, type=graphics, deviceId=f501cb72-48ef-481a-b689-530ebf1bc483, tlsPort=5900}
2017-07-10 19:47:15,381+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] Failed in 'HotPlugDiskVDS' method
2017-07-10 19:47:15,395+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_2 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized
2017-07-10 19:47:15,395+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized]]'
2017-07-10 19:47:15,395+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] HostName = host_mixed_2
2017-07-10 19:47:15,396+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] Command 'HotPlugDiskVDSCommand(HostName = host_mixed_2, HotPlugDiskVDSParameters:{runAsync='true', hostId='ce31742f-1f79-4d5a-84c8-103e194029e2', vmId='848657b3-1dc1-4c12-873c-b819e3415840', diskId='573314a3-45d9-49a2-80eb-9d62c8bbdf2c', addressMap='null'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized, code = 45
2017-07-10 19:47:15,396+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] FINISH, HotPlugDiskVDSCommand, log id: 69315296
2017-07-10 19:47:15,396+03 ERROR [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] Command 'org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized, code = 45 (Failed with error FailedToPlugDisk and code 45)
2017-07-10 19:47:15,409+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) [9185722b-577f-4462-86a8-e3a264f92d8e] EVENT_ID: USER_FAILED_HOTPLUG_DISK(2,001), Correlation ID: 9185722b-577f-4462-86a8-e3a264f92d8e, Call Stack: null, Custom Event ID: -1, Message: Failed to plug disk disk_to_plug_iscsi_6 to VM vm_iscsi_1_iscsi_1018421468 (User: admin@internal-authz).

vdsm.log:

2017-07-10 19:47:13,687+0300 INFO  (jsonrpc/7) [vds] prepared volume path: /rhev/data-center/290170d6-2703-47a4-acd1-252655cba202/10868c43-fea0-4c2a-8da7-3b21d7c80e68/images/573314a3-45d9-49a2-80eb-9d62c8bbdf2c/3f8d5490-6517-4543-94e0-dc98b3cd12fe (clientIF:374)
2017-07-10 19:47:13,689+0300 INFO  (jsonrpc/7) [vdsm.api] START getVolumeSize(sdUUID=u'10868c43-fea0-4c2a-8da7-3b21d7c80e68', spUUID=u'290170d6-2703-47a4-acd1-252655cba202', imgUUID=u'573314a3-45d9-49a2-80eb-9d62c8bbdf2c', volUUID=u'3f8d5490-6517-4543-94e0-dc98b3cd12fe', options=None) from=::ffff:10.35.161.131,49826, flow_id=9185722b-577f-4462-86a8-e3a264f92d8e, task_id=b982d92c-5097-48bc-bf6b-dd41d0d1d323 (api:46)
2017-07-10 19:47:13,690+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH getVolumeSize return={'truesize': '1073741824', 'apparentsize': '1073741824'} from=::ffff:10.35.161.131,49826, flow_id=9185722b-577f-4462-86a8-e3a264f92d8e, task_id=b982d92c-5097-48bc-bf6b-dd41d0d1d323 (api:52)
2017-07-10 19:47:13,701+0300 INFO  (jsonrpc/7) [virt.vm] (vmId='848657b3-1dc1-4c12-873c-b819e3415840') Hotplug disk xml: <?xml version='1.0' encoding='UTF-8'?>
<disk address="" device="disk" snapshot="no" type="block">
    <source dev="/rhev/data-center/290170d6-2703-47a4-acd1-252655cba202/10868c43-fea0-4c2a-8da7-3b21d7c80e68/images/573314a3-45d9-49a2-80eb-9d62c8bbdf2c/3f8d5490-6517-4543-94e0-dc98b3cd12fe" />
    <target bus="virtio" dev="vdd" />
    <serial>573314a3-45d9-49a2-80eb-9d62c8bbdf2c</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2" />
</disk>
 (vm:2996)
2017-07-10 19:47:13,752+0300 ERROR (jsonrpc/7) [virt.vm] (vmId='848657b3-1dc1-4c12-873c-b819e3415840') Hotplug failed (vm:3004)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 3002, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 540, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized



Version-Release number of selected component (if applicable):
rhevm-4.1.4-0.2.el7.noarch
vdsm-4.19.21-1.el7ev.x86_64
libvirt-client-3.2.0-14.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 1 disk (A) + OS
2. Attach 3 new disks (B, C and D) - active
3. Start the VM - wait for the OS to finish the boot sequence
4. Hot-unplug the disks in this order: B -> C -> D
5. Hotplug disk C again

Actual results:
VDSM host_mixed_2 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized

Expected results:
Disk C is hotplugged back successfully.

Additional info:

Comment 1 Daniel Erez 2017-07-11 08:46:37 UTC
@Raz - can you please attach the full logs, including libvirt and qemu? Also, which OS did you use?

Comment 2 Raz Tamir 2017-07-11 09:14:16 UTC
Created attachment 1296139 [details]
engine and vdsm logs

Hi Daniel,

Full logs attached; the libvirt log doesn't exist anymore.

I used el7.3, but it also happens on el7.4.

Comment 3 Raz Tamir 2017-07-11 09:14:36 UTC
Created attachment 1296140 [details]
qemu

Comment 4 Daniel Erez 2017-07-11 13:20:11 UTC
According to the qemu log [1], it seems there's a permission issue. Can you please reproduce the scenario again and attach the libvirt log, so we can ask the libvirt developers for advice on the libvirtError [2]?

[1] Could not open '/rhev/data-center/290170d6-2703-47a4-acd1-252655cba202/10868c43-fea0-4c2a-8da7-3b21d7c80e68/images/573314a3-45d9-49a2-80eb-9d62c8bbdf2c/3f8d5490-6517-4543-94e0-dc98b3cd12fe': Operation not permitted

[2] libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized

Comment 5 Raz Tamir 2017-07-12 07:39:50 UTC
Hi Daniel,

Is there any issue reproducing this in your environment?
Let me know if you need my help

Comment 6 Allon Mureinik 2017-07-17 13:38:13 UTC
Also, to help triage the problem - can we retry this on RHEL7.3 please?

Comment 7 Allon Mureinik 2017-07-18 12:33:34 UTC
(In reply to Allon Mureinik from comment #6)
> Also, to help triage the problem - can we retry this on RHEL7.3 please?
Missed comment 2, my apologies.

Comment 8 Daniel Erez 2017-07-24 16:19:21 UTC
(In reply to Raz Tamir from comment #5)
> Hi Daniel,
> 
> Is there any issue to reproduce this on your environment?
> Let me know if you need my help

I couldn't reproduce it on el7.3 or el7.4. Was it reproduced in any other environment? In a specific configuration (block/file/disk interface)?

Comment 9 Raz Tamir 2017-07-25 09:16:35 UTC
Reproduced in my environment.
Daniel, you can fetch all logs needed from my environment

Comment 10 Daniel Erez 2017-08-06 14:01:57 UTC
IIUC, it seems that a similar issue has been encountered with specific versions of qemu-img/qemu-kvm:

https://ask.openstack.org/en/question/56961/returning-exception-internal-error-unable-to-execute-qemu-command-__comredhat_drive_add-device-drive-virtio-disk1-could-not-be-initialized-to-caller/

@Kevin - is it a known issue?

Comment 11 Kevin Wolf 2017-08-07 09:19:43 UTC
I am not aware of the issue. It looks very much like a problem outside of qemu,
because the kernel seems to return -EPERM for the open() syscall. You can confirm
this with strace.

I seem to remember that we had such problems related to libvirt not applying the
correct SELinux labels to the image, so SELinux is where I'd look first.
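
For reference, one way to confirm the -EPERM at the syscall level, as a sketch (the PID and the process match pattern below are placeholders, not values from this bug):

    # find the PID of the guest's qemu-kvm process, then watch its open() calls for EPERM
    pgrep -af 'qemu-kvm.*<vm-name>'
    strace -f -e trace=open,openat -p <qemu-pid> 2>&1 | grep EPERM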

Comment 20 Daniel Erez 2017-11-13 13:43:05 UTC
(In reply to Kevin Wolf from comment #11)
> I am not aware of the issue. It looks very much like a problem outside of
> qemu,
> because the kernel seems to return -EPERM for the open() syscall. You can
> confirm
> this with strace.
> 
> I seem to remember that we had such problems related to libvirt not applying
> the
> correct SELinux labels to the image, so SELinux is where I'd look first.

@Eric - is there any known issue with libvirt applying SELinux labels?
Seems we're sporadically getting a libvirtError on disk attach [1].
See also event failure in comment 18 [2].

[1]
  File "/usr/share/vdsm/virt/vm.py", line 3002, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 540, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not be initialized

[2]
type=VIRT_RESOURCE msg=audit(1510241246.816:4418878): pid=3405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=disk reason=attach vm="backup-rhv" uuid=1bfe3e00-0b33-41f6-b413-dc64b9eda5b8 old-disk="?" new-disk="/var/lib/vdsm/transient/ac8a04d6-bdcf-4516-8956-21f495a8ffda-a1539e2d-cc80-4c48-becf-ba81d86f012f.8Sj5f9" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'
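
A sketch of how to pull failed records like the one above out of the audit log (adjust the time range as needed):

    # list failed libvirt resource attach/detach audit events
    ausearch -m VIRT_RESOURCE --success no -ts recent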

Comment 21 Nir Soffer 2017-11-13 15:40:48 UTC
Raz, can you reproduce this with selinux permissive mode?

If you can, please attach the output of:

    ausearch -m AVC
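
For example, a sketch of the requested check (commands assume root on the affected host):

    setenforce 0        # switch SELinux to permissive mode for the test (not persistent)
    getenforce          # should now report "Permissive"
    # ...reproduce the hotplug failure, then collect any AVC denials:
    ausearch -m AVC -ts recent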

Comment 22 Nir Soffer 2017-11-13 15:41:46 UTC
Jason, can you provide the info requested in comment 21?

Comment 23 Eric Blake 2017-11-13 15:56:25 UTC
Widening the query in comment #20 to libvirt-maint

Comment 25 Raz Tamir 2017-11-14 14:02:24 UTC
Nir,

My input still relevant here?
I see comment #24 also provided that info

Comment 26 Nir Soffer 2017-11-28 09:09:34 UTC
(In reply to Raz Tamir from comment #25)
> My input still relevant here?
> I see comment #24 also provided that info

Yes, I want to know if you can reproduce in permissive mode.

Comment 27 Daniel Erez 2017-11-28 12:03:52 UTC
(In reply to Nir Soffer from comment #26)
> (In reply to Raz Tamir from comment #25)
> > My input still relevant here?
> > I see comment #24 also provided that info
> 
> Yes, I want to know if you can reproduce in permissive mode.

Returning the needinfo.

Comment 28 Jaroslav Suchanek 2017-11-28 13:04:39 UTC
Might be fixed by this:
https://bugzilla.redhat.com/show_bug.cgi?id=1506072

Peter, can you have a look please? Thanks.

Comment 30 Raz Tamir 2017-12-08 11:50:45 UTC
(In reply to Nir Soffer from comment #21)
> Raz, can you reproduce this with selinux permissive mode?
> 
> If you can, please attach the output of:
> 
>     ausearch -m AVC

Nir,

I've set SELinux to permissive mode on the vdsm hosts and reproduced the bug.
However, executing 'ausearch -m AVC' returned no matches:
[root@storage-ge5-vdsm3 ~]# ausearch -m AVC
<no matches>

How to proceed from this point?

Comment 31 Nir Soffer 2017-12-08 12:52:33 UTC
Maybe this is a duplicate of bug 1506157?

Raz, can you test again with current vdsm master, or latest vdsm 4.1 (4.19.42)?

We require now libvirt-daemon >= 3.2.0-14.el7_4.5 - if this is a duplicate, it 
should be fixed now.

If this still happens, please return the needinfo for Peter removed in comment 30
(by mistake?)

Comment 32 Raz Tamir 2017-12-08 14:30:41 UTC
Issue reproduced on:
rhvm-4.2.0-0.6.el7
vdsm-4.20.9-1.el7ev.x86_64
libvirt-3.2.0-14.el7_4.4.x86_64

vdsm.log:
2017-12-08 16:19:25,500+0200 ERROR (jsonrpc/5) [virt.vm] (vmId='629c80df-8c85-4c43-90de-319540132829') Hotplug failed (vm:3632)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3630, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 540, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized

engine.log:
2017-12-08 16:19:26,554+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] Failed in 'HotPlugDiskVDS' method
2017-12-08 16:19:26,566+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized
2017-12-08 16:19:26,566+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized]]'
2017-12-08 16:19:26,567+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] HostName = host_mixed_1
2017-12-08 16:19:26,567+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] Command 'HotPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='20a94a01-73e4-431f-a796-cb2238512967', vmId='629c80df-8c85-4c43-90de-319540132829', diskId='8513de32-c652-48ee-b47f-01719cf39e43', addressMap='[bus=0, controller=0, unit=3, type=drive, target=0]'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized, code = 45
2017-12-08 16:19:26,567+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] FINISH, HotPlugDiskVDSCommand, log id: 3e7236ec
2017-12-08 16:19:26,568+02 ERROR [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] Command 'org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized, code = 45 (Failed with error FailedToPlugDisk and code 45)
2017-12-08 16:19:26,586+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-39464) [a3fbe6ef-96a1-4d9a-aeb1-0f21a21c8a17] EVENT_ID: USER_FAILED_HOTPLUG_DISK(2,001), Failed to plug disk test_Disk2 to VM test (User: admin@internal-authz).



I don't see libvirt-3.2.0-14.el7_4.5 available even on u/s 4.2

Re-adding the needinfo according to comment #31.

Comment 34 Raz Tamir 2017-12-14 09:48:29 UTC
(In reply to Nir Soffer from comment #31)
> Maybe this is a duplicate of bug 1506157?
> 
> Raz, can you test again with current vdsm master, or latest vdsm 4.1
> (4.19.42)?
> 
> We require now libvirt-daemon >= 3.2.0-14.el7_4.5 - if this is a duplicate,
> it 
> should be fixed now.
> 
> If this still happens, please return the needinfo for Peter removed in
> comment 30
> (by mistake?)

Tested again with libvirt-3.2.0-14.el7_4.5 installed and the issue still exists

Comment 36 Alistair Ross 2017-12-18 03:33:55 UTC
This issue is affecting our ability to back up any servers other than those on the same host as RHVM, so it is high priority for us. 

We use Commvault 11SP9 which makes RHV API calls to make the snapshots.

Comment 37 Justin Crook 2017-12-19 04:14:42 UTC
This issue is also affecting our ability to back up any servers other than those on the same host - we are currently fully updated to 4.1.8.

We use Commvault 11SP9+ which makes RHV API calls to make the snapshots.

This has now stopped our major DR project as we cannot obtain the required backups of VMs.
Through Commvault we have been advised that this is expected to be fixed in 4.1.9. Is there an ETA on a release date or a hotfix that can be installed yet?

Comment 39 Eduardo Habkost 2017-12-26 23:02:55 UTC
I just found out that the audit logs are broken due to a libvirt bug. See https://github.com/ehabkost/libvirt/commit/fa7b97da69595ec4b8992ccaacfe5a7347436d6a

Comment 40 Eduardo Habkost 2017-12-27 18:00:25 UTC
I need help from somebody familiar with VDSM to reproduce the bug. Is there anybody available this week who is able to reproduce the bug and can give me access to the host where it can be seen?

Comment 42 Nir Soffer 2017-12-27 19:50:36 UTC
Raz, can we setup a test system for Eduardo? see comment 40.

Comment 43 Eduardo Habkost 2017-12-28 14:31:16 UTC
I confirm that this is a bug in libvirt's namespace handling.  It's possible to work around the bug by adding "namespaces = [ ]" to /etc/libvirt/qemu.conf.

Comment 44 Nir Soffer 2017-12-28 16:42:46 UTC
(In reply to Eduardo Habkost from comment #43)
> I confirm that this is a bug in libvirt's namespace handling.  It's possible to
> work around the bug by adding "namespaces = [ ]" to /etc/libvirt/qemu.conf.

How does it change libvirt/qemu behavior? What functionality is lost when using
this workaround?

Comment 45 Eduardo Habkost 2017-12-28 17:25:24 UTC
(In reply to Nir Soffer from comment #44)
> (In reply to Eduardo Habkost from comment #43)
> > I confirm that this is a bug in libvirt's namespace handling.  It's possible to
> > work around the bug by adding "namespaces = [ ]" to /etc/libvirt/qemu.conf.
> 
> How does it change libvirt/qemu behavior? What functionality is lost when
> using
> this workaround?

The libvirt namespace feature creates a separate /dev directory for QEMU to use.  The bug is in the code that handles symlinks: it assumes that an existing symlink will always point to the same target, but in the case of LVM this assumption is broken: if devices are deactivated and reactivated in a different order, the existing symlink inside QEMU's /dev directory needs to be updated to match the new path.

The feature is an additional security layer, but no functionality should be lost if disabling it.  However, the namespace feature might also help avoid races between udev and libvirt when managing devices, so some testing is recommended if changing this setting in production.
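
As a sketch of how the stale entry can be observed (the PID and LV path below are placeholders, not values from this bug; requires root):

    # major:minor of the LV device as the host sees it
    stat -L -c '%t:%T %n' /dev/<vg-name>/<lv-name>
    # the same path as seen inside the QEMU process's private mount namespace (its own /dev)
    nsenter -t <qemu-pid> -m stat -L -c '%t:%T %n' /dev/<vg-name>/<lv-name>
    # a mismatch means the node/symlink in the per-VM /dev was not updated after reactivation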

Comment 46 Eduardo Habkost 2017-12-28 17:47:51 UTC
(In reply to Eduardo Habkost from comment #45)
> The feature is an additional security layer, but no functionality should be
> lost if disabling it.  However, the namespace feature might also help avoid
> races between udev and libvirt when managing devices, so some testing is
> recommended if changing this setting in production.

For reference, this is the bug addressed by the namespace feature in libvirt:
https://bugzilla.redhat.com/show_bug.cgi?id=1404952

Comment 47 Eduardo Habkost 2017-12-28 18:30:41 UTC
Experimental fix at:
https://github.com/ehabkost/libvirt/commit/89f1a08b9518148f6a86600c0ded6f52886e44b4

Comment 48 Nir Soffer 2017-12-28 18:53:33 UTC
(In reply to Eduardo Habkost from comment #46)
> For reference, this is the bug addressed by the namespace feature in libvirt:
> https://bugzilla.redhat.com/show_bug.cgi?id=1404952

Vdsm already worked around this issue since 2014 by not specifying owner and
group in vdsm udev rules, using chown to set the owner and group:
https://gerrit.ovirt.org/33875

So we should be safe to disable libvirt namespaces, but we never tested this 
configuration in 7.4.

Comment 49 Raz Tamir 2017-12-31 09:21:49 UTC
(In reply to Eduardo Habkost from comment #43)
> I confirm that this is a bug in libvirt's namespace handling.  It's possible to
> work around the bug by adding "namespaces = [ ]" to /etc/libvirt/qemu.conf.

The suggested W/A didn't work for me:
1) Added namespaces = [ ] to /etc/libvirt/qemu.conf
2) # systemctl restart vdsmd
3) Performed the steps to reproduce from the original bug.

Issue reproduced:

engine.log:
2017-12-31 11:15:11,928+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] Failed in 'HotPlugDiskVDS' method
2017-12-31 11:15:11,945+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized
2017-12-31 11:15:11,946+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized]]'
2017-12-31 11:15:11,946+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] HostName = host_mixed_1
2017-12-31 11:15:11,946+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] Command 'HotPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='9a41eab7-a49c-4a8e-9283-217a2d25cf94', vmId='650d5642-bc06-478e-9865-8a37551c9770', diskId='df1f057e-6450-4f82-924a-5bc2d744d3cc', addressMap='[bus=0, controller=0, unit=3, type=drive, target=0]'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized, code = 45
2017-12-31 11:15:11,946+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] FINISH, HotPlugDiskVDSCommand, log id: 60bb7b14
2017-12-31 11:15:11,947+02 ERROR [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] Command 'org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized, code = 45 (Failed with error FailedToPlugDisk and code 45)
2017-12-31 11:15:11,964+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] EVENT_ID: USER_FAILED_HOTPLUG_DISK(2,001), Failed to plug disk test_Disk2 to VM test (User: admin@internal-authz).
2017-12-31 11:15:11,965+02 INFO  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-347874) [dc627a88-bb35-4037-bf1b-130049e1c078] Lock freed to object 'EngineLock:{exclusiveLocks='[df1f057e-6450-4f82-924a-5bc2d744d3cc=DISK]', sharedLocks='[650d5642-bc06-478e-9865-8a37551c9770=VM]'}'


vdsm.log:

2017-12-31 11:15:11,448+0200 DEBUG (jsonrpc/4) [storage.TaskManager.Task] (Task='f3e0fb64-35c3-4741-85b2-c4be9c6f565a') ref 0 aborting False (task:1002)
2017-12-31 11:15:11,454+0200 INFO  (jsonrpc/4) [virt.vm] (vmId='650d5642-bc06-478e-9865-8a37551c9770') Hotplug disk xml: <?xml version='1.0' encoding='UTF-8'?>
<disk device="disk" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="3" />
    <source dev="/rhev/data-center/mnt/blockSD/1524036d-2ce0-47ff-9f10-f985f96c0d1a/images/df1f057e-6450-4f82-924a-5bc2d744d3cc/18e8602f-abc5-4b2e-89a5-a50bac84b5b1" />
    <target bus="scsi" dev="sdd" />
    <serial>df1f057e-6450-4f82-924a-5bc2d744d3cc</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
</disk>
 (vm:3638)
2017-12-31 11:15:11,515+0200 ERROR (jsonrpc/4) [virt.vm] (vmId='650d5642-bc06-478e-9865-8a37551c9770') Hotplug failed (vm:3646)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3644, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 540, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-3' could not be initialized


In qemu.log I see:

Could not open '/rhev/data-center/mnt/blockSD/1524036d-2ce0-47ff-9f10-f985f96c0d1a/images/df1f057e-6450-4f82-924a-5bc2d744d3cc/18e8602f-abc5-4b2e-89a5-a50bac84b5b1': Operation not permitted

This is the image I'm trying to hotplug.

Comment 50 Raz Tamir 2017-12-31 09:41:52 UTC
Ignore the last comment.

I forgot to restart the libvirtd service as well (thanks masayag).

The W/A worked.

Comment 51 Daniel Erez 2018-01-07 13:55:07 UTC
*** Bug 1531155 has been marked as a duplicate of this bug. ***

Comment 52 Allon Mureinik 2018-01-22 13:27:47 UTC
A proper fix depends on a libvirt fix that is not yet released. Pushing out to 4.1.10.

Comment 53 mayur 2018-02-09 17:25:40 UTC
Hit the same issue.

nova-compute.log--->

instance: 010391e3-dc9d-419c-94b6-848239fb29eb] Failed to attach volume at mountpoint: /dev/vdc
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb] Traceback (most recent call last):
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1252, in attach_volume
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     guest.attach_device(conf, persistent=True, live=live)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 309, in attach_device
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     rv = execute(f, *args, **kwargs)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     six.reraise(c, e, tb)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     rv = meth(*args, **kwargs)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 564, in attachDeviceFlags
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2018-02-09 17:58:15.855 4716 ERROR nova.virt.libvirt.driver [instance: 010391e3-dc9d-419c-94b6-848239fb29eb] libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized




qemu.log---------->

Could not open '/dev/disk/by-path/ip-10.182.174.189:3260-iscsi-iqn.2017-02.com.veritas:target07-lun-3': Operation not permitted



I also tried the W/A mentioned in comment #43, but it is still not working.

Comment 54 Eduardo Habkost 2018-02-09 17:32:08 UTC
(In reply to mayur from comment #53)
> I also tried the W/A mentioned in comment #43, but it is still not working.

Were libvirtd and the VMs restarted?  The workaround requires restarting libvirt and restarting the VMs after changing qemu.conf.
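
As a sketch of applying the workaround end to end (edit qemu.conf by hand instead of appending if the option is already set):

    # disable libvirt's per-VM /dev namespace (workaround only; see comment 45 for the trade-off)
    echo 'namespaces = [ ]' >> /etc/libvirt/qemu.conf
    systemctl restart libvirtd
    # the setting only applies to newly started QEMU processes, so the affected VMs
    # must be shut down and started again (restarting vdsmd alone is not enough)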

Comment 55 mayur 2018-02-10 06:58:32 UTC
(In reply to Eduardo Habkost from comment #54)

Thanks Eduardo. It worked. :-)
Earlier I had restarted only the libvirt service but did not restart the VMs.

Now I have restarted both and it worked.

But one point of confusion -
I was able to attach the volume the first time I tried. This issue came up when I detached it and tried to attach it again. What may be the reason for this?

Comment 56 Eduardo Habkost 2018-02-12 13:57:59 UTC
(In reply to mayur from comment #55)
> But one point of confusion -
> I was able to attach the volume the first time I tried. This issue came up
> when I detached it and tried to attach it again. What may be the reason for
> this?

The bug is heavily dependent on the ordering of disk attach/detach operations in the host.  It happens when the same LVM volume is reattached to a VM, but only if the major/minor number of the underlying device-mapper file changes when the volume is reactivated+reattached.
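
A sketch of the trigger condition (VG/LV names are placeholders; run only against a test volume):

    stat -L -c 'before: %t:%T' /dev/<vg>/<lv>                    # record major:minor before
    lvchange -an <vg>/<lv>                                       # deactivate (as a hot-unplug would)
    lvchange -an <vg>/<other-lv>; lvchange -ay <vg>/<other-lv>   # reactivate volumes in a different order
    lvchange -ay <vg>/<lv>                                       # reactivate the original LV
    stat -L -c 'after:  %t:%T' /dev/<vg>/<lv>                    # if major:minor changed, reattach hits the bug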

Comment 57 Avihai 2018-02-25 12:16:59 UTC
Verified on vdsm-4.20.18-1.el7ev.x86_64.

Comment 58 Sandro Bonazzola 2018-03-29 10:58:53 UTC
This bug is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

Comment 60 Alistair Ross 2018-07-03 23:53:48 UTC
We are still using the namespaces workaround in /etc/libvirt/qemu.conf in 4.2.3, as this bug is stated as fixed in 4.2.2, should we remove this setting from the file now?

Comment 61 Germano Veit Michel 2018-07-04 22:41:00 UTC
(In reply to Alistair Ross from comment #60)
> We are still using the namespaces workaround in /etc/libvirt/qemu.conf in
> 4.2.3, as this bug is stated as fixed in 4.2.2, should we remove this
> setting from the file now?

Hi Alistair,

Yes, the workaround is not needed anymore.