Created attachment 1756191 [details]
logs

Description of problem:
I saw the same error as in Bug 1905108 in 2 of our tier1 automation test cases during a hotplug disk operation:

2021-02-07 06:19:18,975+0200 INFO  (jsonrpc/2) [vdsm.api] START getVolumeSize(sdUUID='307c61d6-b13d-43e6-9bf7-d14ee1b3d2b2', spUUID='04eb3426-965b-489c-a91a-c040e70ae2c5', imgUUID='9492e6a7-d165-4578-8ae9-0fb2f7e0ea44', volUUID='d52acc99-ca3a-4fa1-b366-029bd7d59829', options=None) from=::ffff:10.46.16.252,60738, flow_id=diskattachments_create_f62fd331-a268, task_id=4c123bf5-6a48-456b-a561-973a16b989cd (api:48)
2021-02-07 06:19:18,975+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH getVolumeSize return={'apparentsize': '1073741824', 'truesize': '1073741824'} from=::ffff:10.46.16.252,60738, flow_id=diskattachments_create_f62fd331-a268, task_id=4c123bf5-6a48-456b-a561-973a16b989cd (api:54)
2021-02-07 06:19:18,976+0200 INFO  (jsonrpc/2) [virt.vm] (vmId='4c6e5813-efe6-4542-b7fa-a960fc5b8489') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk device="disk" snapshot="no" type="block">
    <address bus="0" controller="0" target="0" type="drive" unit="0" />
    <source dev="/rhev/data-center/mnt/blockSD/307c61d6-b13d-43e6-9bf7-d14ee1b3d2b2/images/9492e6a7-d165-4578-8ae9-0fb2f7e0ea44/d52acc99-ca3a-4fa1-b366-029bd7d59829">
        <seclabel model="dac" relabel="no" type="none" />
    </source>
    <target bus="scsi" dev="sdb" />
    <serial>9492e6a7-d165-4578-8ae9-0fb2f7e0ea44</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2" />
    <alias name="ua-9492e6a7-d165-4578-8ae9-0fb2f7e0ea44" />
</disk>
 (vm:3686)
2021-02-07 06:19:18,994+0200 ERROR (jsonrpc/2) [virt.vm] (vmId='4c6e5813-efe6-4542-b7fa-a960fc5b8489') Hotplug failed (vm:3694)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3692, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 606, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address

Version-Release number of selected component (if applicable):
rhv-4.4.5-4:
vdsm-4.40.50.4-1.el8ev.x86_64
libvirt-6.6.0-13.module+el8.3.1+9548+0a8fede5.x86_64
ovirt-engine-4.4.5.4-0.6.el8ev.noarch

How reproducible:
Couldn't reproduce it; saw it in 2 automation test cases.

Steps to Reproduce:
1. Create a VM from a template and run the VM
2. Try to attach disks to the VM

Actual results:
Failed to hotplug the disk

Expected results:
The operation should succeed

Additional info:
Attaching engine + vdsm + qemu logs (there were no libvirt logs)
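For reference, the failing call in the traceback is libvirt's attachDevice, invoked by VDSM's hotplugDisk with the disk XML shown in the log. Below is a minimal standalone sketch (not the VDSM code path) of that call using the libvirt-python bindings; the domain name "testvm" and the source device path are hypothetical placeholders. It surfaces the same libvirtError when the <address> element clashes with a device the domain already has:

import libvirt

# Hypothetical disk XML; the fixed <address> is what clashes when an
# already-used drive address is handed out again.
DISK_XML = """
<disk device="disk" type="block">
  <address bus="0" controller="0" target="0" type="drive" unit="0"/>
  <source dev="/dev/mapper/example-lv"/>
  <target bus="scsi" dev="sdb"/>
  <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
</disk>
"""

def hotplug(dom, xml):
    try:
        # Same libvirt API that VDSM's hotplugDisk ultimately calls.
        dom.attachDevice(xml)
    except libvirt.libvirtError as e:
        # With a conflicting address this prints:
        # "Requested operation is not valid: Domain already contains a disk with that address"
        print("Hotplug failed:", e)

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("testvm")   # hypothetical running domain
    hotplug(dom, DISK_XML)
    conn.close()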
Not a regression. There is a timing issue that could always happen, but now that the automation performs attach-disk operations more frequently, the probability of hitting it is higher.

It could be that the devices monitoring is executed right after a disk is plugged, while the reported domain XML does not yet contain the plugged device; as a result we unplug that device, and when we attach the next disk it gets the same address that was assigned to the previous one (see the sketch below).
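A minimal sketch of that race (illustrative only, with hypothetical helper names, not VDSM's actual monitoring code): if address assignment works from a device list built from a stale domain XML snapshot, the just-plugged disk is missing from it and the next hotplug is handed the same drive address, which libvirt then rejects.

# Illustrative sketch only; names are hypothetical.
def next_free_drive_address(known_disks):
    # Pick the lowest unit number not used by any disk we currently know about.
    used = {d["unit"] for d in known_disks}
    unit = 0
    while unit in used:
        unit += 1
    return {"bus": 0, "controller": 0, "target": 0, "unit": unit}

# Disks believed to be attached, built from the reported domain XML.
known_disks = [{"name": "sda", "unit": 0}]

# Disk A is hotplugged and gets unit=1 ...
addr_a = next_free_drive_address(known_disks)

# ... but the monitoring pass runs against a stale domain XML that does not
# list disk A yet, so disk A is treated as unplugged and never recorded.
stale_known_disks = [{"name": "sda", "unit": 0}]

# Disk B is then hotplugged and is assigned the very same address, and libvirt
# fails with "Domain already contains a disk with that address".
addr_b = next_free_drive_address(stale_known_disks)
assert addr_a == addr_b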
All our hotplug automation tests have run several times since this was fixed (4.4.6.4) with no failures. This can be considered verified.
This bugzilla is included in the oVirt 4.4.6 release, published on May 4th 2021. Since the problem described in this bug report should be resolved in the oVirt 4.4.6 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
*** Bug 1965213 has been marked as a duplicate of this bug. ***