Created attachment 1448372 [details]
logs

Description of problem:
AddVmTemplate's copyImage task finishes with "'taskResult': 'success'" on vdsm, and for some reason its corresponding AddVMFromTemplate command fails on the engine.

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.4-0.1.el7.noarch
vdsm-4.20.28-1.el7ev.x86_64

How reproducible:
Happened once

Steps to Reproduce:
1. Create a template from a VM with a disk (both the original VM disk and the new template disk on iSCSI)

Actual results:
copyImage succeeds on vdsm:

2018-06-01 19:37:14,050+0300 INFO (jsonrpc/7) [vdsm.api] START copyImage(sdUUID='966a2519-b967-46e9-b2c1-82a0ae881524', spUUID='de30d396-750e-407c-8de5-433271701978', vmUUID='', srcImgUUID='18633425-afcb-4c8a-a906-f20646fd8670', srcVolUUID='732d40b0-8257-4458-9783-1ba139210434', dstImgUUID='b72a3671-79a3-4559-85a6-8a62ef3c2a2c', dstVolUUID='3f893668-6b91-4c5c-b5b5-7fadda1ea264', description='{"DiskAlias":"vm_TestCase5993_0119061498_Disk_0","DiskDescription":"rhel7.4_rhv4.2_guest_disk (2954e57)"}', dstSdUUID='966a2519-b967-46e9-b2c1-82a0ae881524', volType=6, volFormat=4, preallocate=2, postZero='false', force='false', discard=True) from=::ffff:10.35.161.127,40768, flow_id=templates_create_aeef78e5-1054-421c, task_id=6334aae8-f493-4105-ab5c-7715d26f9241 (api:46)

2018-06-01 19:38:36,677+0300 INFO (jsonrpc/5) [vdsm.api] FINISH getAllTasksStatuses return={'allTasksStatus': {'6334aae8-f493-4105-ab5c-7715d26f9241': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '6334aae8-f493-4105-ab5c-7715d26f9241'}}} from=::ffff:10.35.161.127,40458, task_id=4ba7425a-79e4-4560-abcc-92752f57ed96 (api:52)

2018-06-01 19:38:36,677+0300 DEBUG (jsonrpc/5) [storage.TaskManager.Task] (Task='4ba7425a-79e4-4560-abcc-92752f57ed96') finished: {'allTasksStatus': {'6334aae8-f493-4105-ab5c-7715d26f9241': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '6334aae8-f493-4105-ab5c-7715d26f9241'}}} (task:1201)

And for some reason, the AddVmTemplate command ends with a failure on the engine:

2018-06-01 19:38:39,458+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [templates_create_aeef78e5-1054-421c] Command 'AddVmTemplate' id: '05d06571-1866-4245-b013-53c5e9e1c3d1' child commands '[f6cc573f-e9c6-49c5-9c23-5d69798428eb]' executions were completed, status 'FAILED'

Expected results:
I don't see any reason for the AddVmTemplate command to fail.

Additional info: logs
Elad, did you manage to reproduce this? Is the VM based on another template? Please add full steps to reproduce.
(In reply to Eyal Shenitzky from comment #1)
> Elad, did you manage to reproduce this?
No, reproduced only once

> Is the VM based on another template?
Yes

> Please add full steps to reproduce.
1. Create a VM from a template with a disk on iSCSI
2. Create a template from the VM with a disk (both the original VM disk and the new template disk on iSCSI)
Was the VM created from the template as a clone?
No, Thin:

2018-06-01 19:36:49,371+03 INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (default task-16) [vms_create_ab8392b6-3234-49f4] Running command: CreateSnapshotFromTemplateCommand internal: true. Entities affected : ID: 3410ef17-5630-4ea1-88a7-45283982e922 Type: Storage

2018-06-01 19:36:49,431+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-16) [vms_create_ab8392b6-3234-49f4] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='de30d396-750e-407c-8de5-433271701978', ignoreFailoverLimit='false', storageDomainId='966a2519-b967-46e9-b2c1-82a0ae881524', imageGroupId='18633425-afcb-4c8a-a906-f20646fd8670', imageSizeInBytes='10737418240', volumeFormat='COW', newImageId='732d40b0-8257-4458-9783-1ba139210434', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='b4275e6a-5bc1-46c0-be9c-6074afd08289', sourceImageGroupId='9b132c5f-3b96-4a7c-8bcf-25a2e6852ef2'}), log id: 46762362
Elad, I tried several times but failed to reproduce the bug with the steps you mentioned. Feel free to reopen the bug if it reproduces again.
Created attachment 1452097 [details]
logs1

Occurred once again, template creation ends with a failure:

2018-06-15 05:00:50,725+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [templates_create_da4430f9-2080-4a69] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.

2018-06-15 05:00:50,729+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [templates_create_da4430f9-2080-4a69] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='fc1db19c-43ce-4095-8d0b-4f8581c80f84', status='Down', exitStatus='Normal'}), log id: 47221e8d

2018-06-15 05:00:50,730+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [templates_create_da4430f9-2080-4a69] FINISH, SetVmStatusVDSCommand, log id: 47221e8d

2018-06-15 05:00:50,735+03 INFO [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [templates_create_da4430f9-2080-4a69] Lock freed to object 'EngineLock:{exclusiveLocks='[2163ff89-18bd-470f-8a25-39bfaeb7d595=DISK, templ_TestCase5683_1504593381=TEMPLATE_NAME, e7381003-6276-4a15-92d3-f2a0432472b0=TEMPLATE]', sharedLocks='[fc1db19c-43ce-4095-8d0b-4f8581c80f84=VM]'}'

2018-06-15 05:00:50,741+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [templates_create_da4430f9-2080-4a69] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template templ_TestCase5683_1504593381 or its disks from VM golden_env_mixed_virtio_1_0.
While the copyImage task finishes successfully on vdsm:

2018-06-15 04:59:34,614+0300 DEBUG (jsonrpc/3) [storage.TaskManager.Task] (Task='2bdafb44-75d1-4721-8b8b-4d773809f458') moving from state preparing -> state preparing (task:602)

2018-06-15 04:59:34,614+0300 INFO (jsonrpc/3) [vdsm.api] START copyImage(sdUUID='d6f06a03-9be1-4616-8c51-6d18ff70cb6a', spUUID='c15b4cf2-701c-11e8-a7f1-00163e7bd001', vmUUID='', srcImgUUID='b02dfb99-a170-44cf-9c6e-c51eabc4ed73', srcVolUUID='1ff5fc07-11d0-4a9b-b13e-22668f32be00', dstImgUUID='d92657fe-905b-497c-aedb-ab02c40c9258', dstVolUUID='248e8128-419a-4073-a117-399c57782397', description='{"DiskAlias":"golden_mixed_virtio_template","DiskDescription":"rhel7.4_rhv4.2_guest_disk (2954e57)"}', dstSdUUID='229df558-4039-4f97-abe4-33c06b2ee004', volType=6, volFormat=4, preallocate=2, postZero='false', force='false', discard=True) from=::ffff:10.35.162.71,59450, flow_id=templates_create_da4430f9-2080-4a69, task_id=2bdafb44-75d1-4721-8b8b-4d773809f458 (api:46)

2018-06-15 05:00:48,731+0300 DEBUG (jsonrpc/3) [storage.TaskManager.Task] (Task='2dc7056d-eff4-4779-bd3a-1ead01fe90e7') finished: {'allTasksStatus': {'2bdafb44-75d1-4721-8b8b-4d773809f458': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '2bdafb44-75d1-4721-8b8b-4d773809f458'}}} (task:1201)

2018-06-15 05:00:49,492+0300 INFO (jsonrpc/2) [vdsm.api] START clearTask(taskID='2bdafb44-75d1-4721-8b8b-4d773809f458', spUUID=None, options=None) from=::ffff:10.35.162.71,59448, flow_id=templates_create_da4430f9-2080-4a69, task_id=0a9b7534-8b02-4b09-90a6-b626342383bc (api:46)

2018-06-15 05:00:49,492+0300 DEBUG (jsonrpc/2) [storage.TaskManager] Entry. taskID: 2bdafb44-75d1-4721-8b8b-4d773809f458 (taskManager:172)

Steps:
- Create a VM from a template as thin on iSCSI
- Create a template from the VM
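When triaging these runs, the engine correlation id (which vdsm logs as flow_id) is the common key between engine.log and vdsm.log. Below is a minimal triage sketch (a hypothetical helper, not part of oVirt; the regexes are tuned to the log lines quoted in this bug and are an assumption for other versions) that flags flows where vdsm reported 'taskResult': 'success' while the engine marked the command FAILED:

```python
import re

# Hypothetical triage helper: correlate an engine flow_id with its vdsm task
# result to spot the "vdsm says success, engine says FAILED" pattern.
START_RE = re.compile(r"flow_id=(?P<flow>[\w-]+), task_id=(?P<task>[0-9a-f-]{36})")
RESULT_RE = re.compile(r"'taskResult': '(?P<result>\w+)', 'taskID': '(?P<task>[0-9a-f-]{36})'")
ENGINE_RE = re.compile(r"\[(?P<flow>[\w-]+)\] Command '(?P<cmd>\w+)'.*status '(?P<status>\w+)'")


def find_mismatches(vdsm_lines, engine_lines):
    """Return (flow_id, vdsm_task_id) pairs where the engine reported FAILED
    but the corresponding vdsm task reported success."""
    flow_to_task = {}   # flow_id -> vdsm task_id (from START copyImage lines)
    task_result = {}    # task_id -> taskResult (from getAllTasksStatuses lines)
    for line in vdsm_lines:
        m = START_RE.search(line)
        if m:
            flow_to_task[m.group('flow')] = m.group('task')
        m = RESULT_RE.search(line)
        if m:
            task_result[m.group('task')] = m.group('result')
    mismatches = []
    for line in engine_lines:
        m = ENGINE_RE.search(line)
        if m and m.group('status') == 'FAILED':
            task = flow_to_task.get(m.group('flow'))
            if task and task_result.get(task) == 'success':
                mismatches.append((m.group('flow'), task))
    return mismatches
```

Running it over the lines quoted in comment #0 flags the templates_create_aeef78e5-1054-421c flow.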
Reproduced quite a lot in our automation, raising severity
(In reply to Elad from comment #7)
> Reproduced quite a lot in our automation, raising severity

Elad, can you try to provide consistent steps to reproduce? This bug occurs only in the automation runs and doesn't reproduce locally. Please try to investigate the run logs to find some common ground.
The flow we do in our automation for every environment we build is:
- Import a rhel7.4 image from the Glance repository as a template
- Copy the template disk to all storage domains
- Create VMs from this template as thin copies

The additional step relevant here is the template creation from one of the VMs. We are testing on iSCSI, so the new template disk resides on an iSCSI domain.
Adding AutomationBlocker keyword as it fails many of our automation tests
Elad, I couldn't reproduce this bug. Please provide steps to reproduce.
Hi Eyal, comments #2 and #9 cover the steps
Hi, the bug is not 100% reproducible but happens sometimes. I've faced it in one automation run while building the template from the high_performance_vm. The logs for the failure are attached in failure_templ_build.tar.gz. The automation test rhevmtests/compute/sla/high_performance_vm/high_performance_vm_test.py (TestHighPerformanceVMSanity03)
Created attachment 1476085 [details] please see the failure in engine.log line 239
Created attachment 1501376 [details]
Another reproduction logs of this issue

Another reproduction of this issue, seen in:
ovirt-engine 4.2.7.4-0.1.el7ev
VDSM 4.20.43-1
(In reply to Avihai from comment #15)
> Created attachment 1501376 [details]
> Another reproduction logs of this issue
>
> Another reproduction of this issue, seen in:
> ovirt-engine 4.2.7.4-0.1.el7ev
> VDSM 4.20.43-1

Avihai,

Did it reproduce manually? What were the steps? Can you please add the other hosts' vdsm.log?
(In reply to Eyal Shenitzky from comment #16)
> Did it reproduce manually?
Yes, but it only happened once (out of 10 reproduction attempts), so it's VERY hard to reproduce.

> What were the steps?
Same scenario as Bug 1645881:

Start of test:
<name>iscsi_0</name>
<committed>24696061952</committed> => 23G

1. Create VM1 and VM2 - same committed size as expected.
2. Add a 2G qcow sparse disk to VM1 and activate it.
3. Add a 2G raw preallocated disk to VM2 and activate it.
4. Create a template t1 from VM1 with the same disk params (2G qcow sparse).

> Can you please add the other hosts' vdsm.log?
Unfortunately, I ran Jenkins jobs on this stand (storage-ge-04) that deleted the VDSM logs from yesterday, so there are no additional logs other than the existing SPM log :(
Reproduced this issue together with the bug 1645881 SD allocation issue. It reproduced only after running this scenario more than 10 times via automation (TestCase11541) - hard to reproduce. This time I collected all VDSM logs - see attached.
Created attachment 1502003 [details]
4.2.7.4 reproduction with all VDSM hosts logs

4.2.7.4 reproduction with all VDSM hosts logs as requested.

Timeline via engine log EVENT_IDs:

2018-11-05 17:47:13,289+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [58e1912c] EVENT_ID: USER_ADD_VM(34), VM vm_TestCase11541_0517471210 was created by admin@internal-authz.

2018-11-05 17:47:15,371+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [2385cd58] EVENT_ID: NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS(1,012), Network Interface nic1 (VirtIO) was plugged to VM vm_TestCase11541_0517471210. (User: admin@internal-authz)

2018-11-05 17:47:15,388+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [2385cd58] EVENT_ID: NETWORK_ADD_VM_INTERFACE(932), Interface nic1 (VirtIO) was added to VM vm_TestCase11541_0517471210. (User: admin@internal-authz)

2018-11-05 17:47:16,827+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-9) [diskattachments_create_18e3bf5a-4af8] EVENT_ID: USER_ADD_DISK_TO_VM(78), Add-Disk operation of vm_TestCase11541_0517471210_Disk1 was initiated on VM vm_TestCase11541_0517471210 by admin@internal-authz.

2018-11-05 17:47:35,974+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-75) [] EVENT_ID: USER_ADD_DISK_TO_VM_FINISHED_SUCCESS(97), The disk vm_TestCase11541_0517471210_Disk1 was successfully added to VM vm_TestCase11541_0517471210.

2018-11-05 17:47:38,222+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [7742a801] EVENT_ID: USER_ADD_VM(34), VM vm_TestCase11541_0517473742 was created by admin@internal-authz.

2018-11-05 17:47:39,693+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [1505096] EVENT_ID: NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS(1,012), Network Interface nic1 (VirtIO) was plugged to VM vm_TestCase11541_0517473742. (User: admin@internal-authz)

2018-11-05 17:47:39,712+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [1505096] EVENT_ID: NETWORK_ADD_VM_INTERFACE(932), Interface nic1 (VirtIO) was added to VM vm_TestCase11541_0517473742. (User: admin@internal-authz)

2018-11-05 17:47:41,043+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-18) [diskattachments_create_80a75a90-e653] EVENT_ID: USER_ADD_DISK_TO_VM(78), Add-Disk operation of vm_TestCase11541_0517473742_Disk1 was initiated on VM vm_TestCase11541_0517473742 by admin@internal-authz.

2018-11-05 17:47:48,434+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-89) [] EVENT_ID: USER_ADD_DISK_TO_VM_FINISHED_SUCCESS(97), The disk vm_TestCase11541_0517473742_Disk1 was successfully added to VM vm_TestCase11541_0517473742.

2018-11-05 17:47:54,368+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-12) [templates_create_431a52a6-a408-46ba] EVENT_ID: USER_ADD_VM_TEMPLATE(48), Creation of Template templ_type_0517473742 from VM vm_TestCase11541_0517471210 was initiated by admin@internal-authz.

2018-11-05 17:48:59,563+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-51) [templates_create_431a52a6-a408-46ba] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template templ_type_0517473742 or its disks from VM vm_TestCase11541_0517471210.
2018-11-05 17:50:14,536+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [] EVENT_ID: VDS_LOW_MEM(531), Available memory of host host_mixed_1 [946 MB] is under defined threshold [1024 MB]
This bug is VERY HARD to reproduce. I ran multiple automation tests, and only after 21 iterations on 4.2.7.4 did this issue appear! On 4.2.7.3 I ran 50 iterations and the bug did not appear!
reproduced in 4.3 Tier3 TestCase16410.
ovirt-engine-4.3.0-0.5.alpha1.el7.noarch
VDSM: vdsm-4.30.3-1.el7ev.x86_64
Engine and hosts logs attached.

VDSM (hypervisor-lynx17_vdsm.log) log, tracking copyImage:

2018-11-24 07:47:26,032+0200 INFO (jsonrpc/1) [vdsm.api] START copyImage(sdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', spUUID=u'9b485ff8-ef7b-11e8-b74c-001a4a168bf8', vmUUID='', srcImgUUID=u'554b04f4-ebab-4ed5-adfd-ae7e3445d634', srcVolUUID=u'2da937f3-6241-4300-9862-17b053e71356', dstImgUUID=u'751dd2f0-f7de-475e-ad82-a5f607fff41f', dstVolUUID=u'19908605-3dcf-46b5-93b9-7ae2ddd984df', description=u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk1","DiskDescription":""}', dstSdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', volType=6, volFormat=5, preallocate=1, postZero=u'false', force=u'false', discard=True) from=::ffff:10.46.16.248,40002, flow_id=templates_create_f1d35e4d-4325-4907, task_id=6769a811-c7a5-4ce3-9179-1248dc85c0dd (api:48)

2018-11-24 07:47:26,556+0200 DEBUG (jsonrpc/1) [storage.TaskManager] scheduled job copyImage_751dd2f0-f7de-475e-ad82-a5f607fff41f for task 6769a811-c7a5-4ce3-9179-1248dc85c0dd (taskManager:78)

2018-11-24 07:47:26,556+0200 INFO (jsonrpc/1) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.248,40002, flow_id=templates_create_f1d35e4d-4325-4907, task_id=6769a811-c7a5-4ce3-9179-1248dc85c0dd (api:54)

2018-11-24 07:47:26,614+0200 DEBUG (tasks/8) [storage.TaskManager.Task] (Task='6769a811-c7a5-4ce3-9179-1248dc85c0dd') Task.run: running job 0: copyImage_751dd2f0-f7de-475e-ad82-a5f607fff41f: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'554b04f4-ebab-4ed5-adfd-ae7e3445d634', u'2da937f3-6241-4300-9862-17b053e71356', u'751dd2f0-f7de-475e-ad82-a5f607fff41f', u'19908605-3dcf-46b5-93b9-7ae2ddd984df', u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk1","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1,
False, False, True) kwargs: {}) (task:914) 2018-11-24 07:47:26,614+0200 DEBUG (tasks/8) [storage.TaskManager.Task] (Task='6769a811-c7a5-4ce3-9179-1248dc85c0dd') Job.run: running copyImage_751dd2f0-f7de-475e-ad82-a5f607fff41f: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'554b04f4-ebab-4ed5-adfd-ae7e3445d634', u'2da937f3-6241-4300-9862-17b053e71356', u'751dd2f0-f7de-475e-ad82-a5f607fff41f', u'19908605-3dcf-46b5-93b9-7ae2ddd984df', u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk1","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) callback None (task:333) 2018-11-24 07:47:26,727+0200 INFO (jsonrpc/3) [vdsm.api] START copyImage(sdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', spUUID=u'9b485ff8-ef7b-11e8-b74c-001a4a168bf8', vmUUID='', srcImgUUID=u'e49447d0-2e85-4ffd-b727-e476a8f6b446', srcVolUUID=u'faef2ae0-003b-4f1b-9dc4-89172642999f', dstImgUUID=u'61592b12-0fe0-4a92-8244-72a5975d8900', dstVolUUID=u'5f4e3bbe-2c7f-4893-9806-71618995927a', description=u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk2","DiskDescription":""}', dstSdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', volType=6, volFormat=5, preallocate=1, postZero=u'false', force=u'false', discard=True) from=::ffff:10.46.16.248,40002, flow_id=templates_create_f1d35e4d-4325-4907, task_id=67bbace0-e630-47d1-bf13-89e6d7578dc3 (api:48) 2018-11-24 07:47:27,273+0200 DEBUG (jsonrpc/3) [storage.TaskManager] scheduled job copyImage_61592b12-0fe0-4a92-8244-72a5975d8900 for task 67bbace0-e630-47d1-bf13-89e6d7578dc3 (taskManager:78) 2018-11-24 07:47:27,274+0200 INFO (jsonrpc/3) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.248,40002, flow_id=templates_create_f1d35e4d-4325-4907, task_id=67bbace0-e630-47d1-bf13-89e6d7578dc3 (api:54) 2018-11-24 07:47:27,357+0200 DEBUG (tasks/6) [storage.TaskManager.Task] (Task='67bbace0-e630-47d1-bf13-89e6d7578dc3') 
Task.run: running job 0: copyImage_61592b12-0fe0-4a92-8244-72a5975d8900: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'e49447d0-2e85-4ffd-b727-e476a8f6b446', u'faef2ae0-003b-4f1b-9dc4-89172642999f', u'61592b12-0fe0-4a92-8244-72a5975d8900', u'5f4e3bbe-2c7f-4893-9806-71618995927a', u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk2","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) (task:914) 2018-11-24 07:47:27,357+0200 DEBUG (tasks/6) [storage.TaskManager.Task] (Task='67bbace0-e630-47d1-bf13-89e6d7578dc3') Job.run: running copyImage_61592b12-0fe0-4a92-8244-72a5975d8900: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'e49447d0-2e85-4ffd-b727-e476a8f6b446', u'faef2ae0-003b-4f1b-9dc4-89172642999f', u'61592b12-0fe0-4a92-8244-72a5975d8900', u'5f4e3bbe-2c7f-4893-9806-71618995927a', u'{"DiskAlias":"vm_TestCase16410_2407463194_Disk2","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) callback None (task:333) 2018-11-24 07:49:24,042+0200 INFO (jsonrpc/7) [vdsm.api] START copyImage(sdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', spUUID=u'9b485ff8-ef7b-11e8-b74c-001a4a168bf8', vmUUID='', srcImgUUID=u'1707a19a-6951-4811-9e65-e680e637007d', srcVolUUID=u'ae310aed-82ea-480d-ae65-e904465b71a4', dstImgUUID=u'8a9a7960-d868-4304-98c1-5ab728aa5a21', dstVolUUID=u'8a78c8e5-59ec-43b3-a959-d61bf82cf02c', description=u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk2","DiskDescription":""}', dstSdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', volType=6, volFormat=5, preallocate=1, postZero=u'false', force=u'false', discard=True) from=::ffff:10.46.16.248,40002, flow_id=templates_create_cce91458-348d-4c17, task_id=4b234dc3-7b36-437a-9e01-03a3a47ccb41 (api:48) 2018-11-24 
07:49:24,593+0200 DEBUG (jsonrpc/7) [storage.TaskManager] scheduled job copyImage_8a9a7960-d868-4304-98c1-5ab728aa5a21 for task 4b234dc3-7b36-437a-9e01-03a3a47ccb41 (taskManager:78) 2018-11-24 07:49:24,593+0200 INFO (jsonrpc/7) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.248,40002, flow_id=templates_create_cce91458-348d-4c17, task_id=4b234dc3-7b36-437a-9e01-03a3a47ccb41 (api:54) 2018-11-24 07:49:24,647+0200 DEBUG (tasks/2) [storage.TaskManager.Task] (Task='4b234dc3-7b36-437a-9e01-03a3a47ccb41') Task.run: running job 0: copyImage_8a9a7960-d868-4304-98c1-5ab728aa5a21: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'1707a19a-6951-4811-9e65-e680e637007d', u'ae310aed-82ea-480d-ae65-e904465b71a4', u'8a9a7960-d868-4304-98c1-5ab728aa5a21', u'8a78c8e5-59ec-43b3-a959-d61bf82cf02c', u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk2","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) (task:914) 2018-11-24 07:49:24,647+0200 DEBUG (tasks/2) [storage.TaskManager.Task] (Task='4b234dc3-7b36-437a-9e01-03a3a47ccb41') Job.run: running copyImage_8a9a7960-d868-4304-98c1-5ab728aa5a21: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'1707a19a-6951-4811-9e65-e680e637007d', u'ae310aed-82ea-480d-ae65-e904465b71a4', u'8a9a7960-d868-4304-98c1-5ab728aa5a21', u'8a78c8e5-59ec-43b3-a959-d61bf82cf02c', u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk2","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) callback None (task:333) 2018-11-24 07:49:24,752+0200 INFO (jsonrpc/2) [vdsm.api] START copyImage(sdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', spUUID=u'9b485ff8-ef7b-11e8-b74c-001a4a168bf8', vmUUID='', srcImgUUID=u'cc37a283-e642-4c99-8624-b3a5dd5014bd', 
srcVolUUID=u'68b72c33-364f-441b-88b2-de4fb027894d', dstImgUUID=u'2fc85007-3dad-467a-b1f4-7cad7f6786a1', dstVolUUID=u'c38400d0-85ec-4957-8c26-01866647b8de', description=u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk1","DiskDescription":""}', dstSdUUID=u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', volType=6, volFormat=5, preallocate=1, postZero=u'false', force=u'false', discard=True) from=::ffff:10.46.16.248,40002, flow_id=templates_create_cce91458-348d-4c17, task_id=345a8a56-487c-4bd6-be46-8f4cc30fa155 (api:48) 2018-11-24 07:49:25,313+0200 DEBUG (jsonrpc/2) [storage.TaskManager] scheduled job copyImage_2fc85007-3dad-467a-b1f4-7cad7f6786a1 for task 345a8a56-487c-4bd6-be46-8f4cc30fa155 (taskManager:78) 2018-11-24 07:49:25,313+0200 INFO (jsonrpc/2) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.248,40002, flow_id=templates_create_cce91458-348d-4c17, task_id=345a8a56-487c-4bd6-be46-8f4cc30fa155 (api:54) 2018-11-24 07:49:25,365+0200 DEBUG (tasks/6) [storage.TaskManager.Task] (Task='345a8a56-487c-4bd6-be46-8f4cc30fa155') Task.run: running job 0: copyImage_2fc85007-3dad-467a-b1f4-7cad7f6786a1: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'cc37a283-e642-4c99-8624-b3a5dd5014bd', u'68b72c33-364f-441b-88b2-de4fb027894d', u'2fc85007-3dad-467a-b1f4-7cad7f6786a1', u'c38400d0-85ec-4957-8c26-01866647b8de', u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk1","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) (task:914) 2018-11-24 07:49:25,365+0200 DEBUG (tasks/6) [storage.TaskManager.Task] (Task='345a8a56-487c-4bd6-be46-8f4cc30fa155') Job.run: running copyImage_2fc85007-3dad-467a-b1f4-7cad7f6786a1: <bound method StoragePool.copyImage of <vdsm.storage.sp.StoragePool object at 0x7f45803bc610>> (args: (u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', '', u'cc37a283-e642-4c99-8624-b3a5dd5014bd', 
u'68b72c33-364f-441b-88b2-de4fb027894d', u'2fc85007-3dad-467a-b1f4-7cad7f6786a1', u'c38400d0-85ec-4957-8c26-01866647b8de', u'{"DiskAlias":"vm_TestCase16411_2407484016_Disk1","DiskDescription":""}', u'08ea40a2-4d09-4dff-9743-941cc6b7aa65', 6, 5, 1, False, False, True) kwargs: {}) callback None (task:333)
Created attachment 1509517 [details] latest 4.3 reproduction engine and hosts logs
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Created attachment 1528961 [details]
logs from reproduction in rhv-4.2.8-5

Reproduced in an rhv-4.2.8-5 automation run. Logs attached.
Created attachment 1539120 [details]
logs-4.3.1.1-0.1

Still occurs with ovirt-engine-4.3.1.1-0.1:

2019-02-25 17:49:38,725+02 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-48) [templates_create_791656d4-c923-425e] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
Created attachment 1555516 [details] Logs_TestCase25265
Reproduced in an ovirt-engine-4.2.8.6-0.1 automation run. Logs attached.
*** Bug 1700259 has been marked as a duplicate of this bug. ***
Reproduced in an ovirt-engine-4.3.3.6 automation run. Logs attached.
Created attachment 1559622 [details] Test_Import_export_logs
Another reproduction of this issue in TestCase25265 (only seen in the NFS storage flavor; FCP/iSCSI/Gluster look OK).

Engine log:

2019-05-31 21:55:20,776+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [templates_create_73d7a094-467b-4179] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.

2019-05-31 21:55:20,783+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [templates_create_73d7a094-467b-4179] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='d9208c4b-66ca-4232-ac01-66e5ca7514a7', status='Down', exitStatus='Normal'}), log id: 41c851f1

2019-05-31 21:55:20,785+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [templates_create_73d7a094-467b-4179] FINISH, SetVmStatusVDSCommand, return: , log id: 41c851f1

2019-05-31 21:55:20,789+03 INFO [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [templates_create_73d7a094-467b-4179] Lock freed to object 'EngineLock:{exclusiveLocks='[7148bea3-b986-410d-ba1a-c6eea69356cf=DISK, templ_TestCase25265_3121541369=TEMPLATE_NAME, 8737725b-d5f7-4011-89f7-e3d4535fe18d=TEMPLATE]', sharedLocks='[d9208c4b-66ca-4232-ac01-66e5ca7514a7=VM]'}'

2019-05-31 21:55:20,796+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [templates_create_73d7a094-467b-4179] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template templ_TestCase25265_3121541369 or its disks from VM vm_TestCase25265_3121524454.
VDSM (hypervisor-lynx22_vdsm.log) log tracking the templates_create_73d7a094 flow; no issues are seen:

[~/Downloads]$ grep templates_create_73d7a094 hypervisor-*

hypervisor-lynx22_vdsm.log:2019-05-31 21:54:14,533+0300 INFO (jsonrpc/5) [vdsm.api] START copyImage(sdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', spUUID=u'6df3d69f-9483-4244-96c0-a823e5836e99', vmUUID='', srcImgUUID=u'c4538541-2b34-41df-9e21-e37da8231575', srcVolUUID=u'507058c0-5a66-4be6-8e8f-7f856036587c', dstImgUUID=u'3b336e0b-35e2-49cd-a1cc-e44dab161488', dstVolUUID=u'44a4f869-436a-40c7-9d50-17f7048367cd', description=u'{"DiskAlias":"vm_TestCase25265_3121524454_Disk_0","DiskDescription":"rhel8.0_rhv4.3_guest_disk (73a54e1)"}', dstSdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', volType=6, volFormat=4, preallocate=2, postZero=u'false', force=u'false', discard=False) from=::ffff:10.46.16.252,35808, flow_id=templates_create_73d7a094-467b-4179, task_id=7ef629e4-5635-491e-a6ca-3f41b80e4d3c (api:48)

hypervisor-lynx22_vdsm.log:2019-05-31 21:54:14,593+0300 INFO (jsonrpc/5) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.252,35808, flow_id=templates_create_73d7a094-467b-4179, task_id=7ef629e4-5635-491e-a6ca-3f41b80e4d3c (api:54)

hypervisor-lynx22_vdsm.log:2019-05-31 21:55:19,418+0300 INFO (jsonrpc/4) [vdsm.api] START getVolumeInfo(sdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', spUUID=u'6df3d69f-9483-4244-96c0-a823e5836e99', imgUUID=u'3b336e0b-35e2-49cd-a1cc-e44dab161488', volUUID=u'44a4f869-436a-40c7-9d50-17f7048367cd', options=None) from=::ffff:10.46.16.252,35764, flow_id=templates_create_73d7a094-467b-4179, task_id=4595410f-0f05-4b35-8f32-42b53a251834 (api:48)

hypervisor-lynx22_vdsm.log:2019-05-31 21:55:19,461+0300 INFO (jsonrpc/4) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': 'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', 'voltype': 'SHARED', 'description': '{"DiskAlias":"vm_TestCase25265_3121524454_Disk_0","DiskDescription":"rhel8.0_rhv4.3_guest_disk (73a54e1)"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0, 'image': '3b336e0b-35e2-49cd-a1cc-e44dab161488', 'ctime': '1559328855', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '2989293568', 'children': [], 'pool': '', 'capacity': '10737418240', 'uuid': u'44a4f869-436a-40c7-9d50-17f7048367cd', 'truesize': '3001008128', 'type': 'SPARSE', 'lease': {'path': u'/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__5__unused__nfs__0/eb4d249d-70a0-42e7-9eaa-f47c3953d5a5/images/3b336e0b-35e2-49cd-a1cc-e44dab161488/44a4f869-436a-40c7-9d50-17f7048367cd.lease', 'owners': [], 'version': None, 'offset': 0}}} from=::ffff:10.46.16.252,35764, flow_id=templates_create_73d7a094-467b-4179, task_id=4595410f-0f05-4b35-8f32-42b53a251834 (api:54)

hypervisor-lynx22_vdsm.log:2019-05-31 21:55:19,736+0300 INFO (jsonrpc/3) [vdsm.api] START clearTask(taskID=u'7ef629e4-5635-491e-a6ca-3f41b80e4d3c', spUUID=None, options=None) from=::ffff:10.46.16.252,35764, flow_id=templates_create_73d7a094-467b-4179, task_id=20b0a691-6690-4d51-86c1-ef3259d79a3c (api:48)

hypervisor-lynx22_vdsm.log:2019-05-31 21:55:19,743+0300 INFO (jsonrpc/3) [vdsm.api] FINISH clearTask return=None from=::ffff:10.46.16.252,35764, flow_id=templates_create_73d7a094-467b-4179, task_id=20b0a691-6690-4d51-86c1-ef3259d79a3c (api:54)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,537+0300 INFO (jsonrpc/4) [vdsm.api] START prepareImage(sdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', spUUID=u'6df3d69f-9483-4244-96c0-a823e5836e99', imgUUID=u'3b336e0b-35e2-49cd-a1cc-e44dab161488', leafUUID=u'44a4f869-436a-40c7-9d50-17f7048367cd', allowIllegal=True) from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=a16b1e3c-f9e2-41ed-8c7d-ac3ec9fbf565 (api:48)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,557+0300 INFO (jsonrpc/4) [vdsm.api] FINISH prepareImage return={'info': {'path': u'/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__5__unused__nfs__0/eb4d249d-70a0-42e7-9eaa-f47c3953d5a5/images/3b336e0b-35e2-49cd-a1cc-e44dab161488/44a4f869-436a-40c7-9d50-17f7048367cd', 'type': 'file'}, 'path': u'/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__5__unused__nfs__0/eb4d249d-70a0-42e7-9eaa-f47c3953d5a5/images/3b336e0b-35e2-49cd-a1cc-e44dab161488/44a4f869-436a-40c7-9d50-17f7048367cd', 'imgVolumesInfo': [{'domainID': u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__5__unused__nfs__0/eb4d249d-70a0-42e7-9eaa-f47c3953d5a5/images/3b336e0b-35e2-49cd-a1cc-e44dab161488/44a4f869-436a-40c7-9d50-17f7048367cd', 'volumeID': u'44a4f869-436a-40c7-9d50-17f7048367cd', 'leasePath': u'/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__5__unused__nfs__0/eb4d249d-70a0-42e7-9eaa-f47c3953d5a5/images/3b336e0b-35e2-49cd-a1cc-e44dab161488/44a4f869-436a-40c7-9d50-17f7048367cd.lease', 'imageID': u'3b336e0b-35e2-49cd-a1cc-e44dab161488'}]} from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=a16b1e3c-f9e2-41ed-8c7d-ac3ec9fbf565 (api:54)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,562+0300 INFO (jsonrpc/1) [vdsm.api] START getQemuImageInfo(sdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', spUUID=u'6df3d69f-9483-4244-96c0-a823e5836e99', imgUUID=u'3b336e0b-35e2-49cd-a1cc-e44dab161488', volUUID=u'44a4f869-436a-40c7-9d50-17f7048367cd', options=None) from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=c4436d62-7f9b-4c7d-9942-df50fc4fcce4 (api:48)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,696+0300 INFO (jsonrpc/1) [vdsm.api] FINISH getQemuImageInfo return={'info': {'compat': u'1.1', 'clustersize': 65536, 'actualsize': 3001008128, 'virtualsize': 10737418240, 'format': u'qcow2'}} from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=c4436d62-7f9b-4c7d-9942-df50fc4fcce4 (api:54)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,710+0300 INFO (jsonrpc/0) [vdsm.api] START teardownImage(sdUUID=u'eb4d249d-70a0-42e7-9eaa-f47c3953d5a5', spUUID=u'6df3d69f-9483-4244-96c0-a823e5836e99', imgUUID=u'3b336e0b-35e2-49cd-a1cc-e44dab161488', volUUID=None) from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=97d2619c-51c2-4503-98cb-f04c69cabc13 (api:48)

hypervisor-lynx24_vdsm.log:2019-05-31 21:55:19,712+0300 INFO (jsonrpc/0) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.46.16.252,32936, flow_id=templates_create_73d7a094-467b-4179, task_id=97d2619c-51c2-4503-98cb-f04c69cabc13 (api:54)
This issue was seen in the following versions:
ovirt-engine-4.3.4.2-0.1.el7.noarch
vdsm-4.30.17-1.el7ev.x86_64
libvirt-4.5.0-10.el7_6.10.x86_64
qemu-img-rhev-2.12.0-18.el7_6.5.x86_64

Attaching logs.
Created attachment 1576214 [details] 4.3.4.2_rep_TestCase25265_NFS
Please try running the engine with debug log level; it may reveal the issue.
Hi Shir, can you please set the engine log level to debug and run this TC in a loop, to see if we can catch this issue with debug logs?
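The "run it in a loop" request above can be sketched as a small driver that re-runs a test command until it fails, so the DEBUG-level logs of the failing iteration can be collected. This is a generic illustration, not the actual ART invocation; the command passed in is a placeholder.

```python
import subprocess

def run_until_failure(cmd, max_attempts=20):
    """Re-run a test command until it fails, returning the failing
    attempt number (or None if every attempt passed)."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Failing iteration found: this is the point at which the
            # DEBUG-level engine.log / vdsm.log should be collected.
            return attempt
    return None

# Example: 'false' always exits non-zero, so the first attempt fails.
print(run_until_failure(["false"]))  # → 1
```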
Also seen on ovirt-engine 4.3.5.4 on TestCase16408 Tier3 run.
Happened again in Tier 1, TestCase18258 (test_download_image). In this case, the failure occurs while creating a VM. After the VM was created, the test waits for the VM to reach status 'down'. During this polling, I see "Element vm doesn't have attribute status" in the ART log. Attaching the relevant logs.
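The "Element vm doesn't have attribute status" message above suggests the automation read a polling response that lacked the expected attribute. A minimal, hypothetical sketch of status polling that tolerates such transient responses (the fetch function and status names are assumptions, not the ART API):

```python
import time

def wait_for_status(fetch, wanted="down", timeout=60.0, interval=1.0):
    """Poll fetch() until it returns a dict whose 'status' equals
    `wanted`. Responses without a 'status' key (e.g. a transient 404
    rendered as an empty entity) are skipped instead of raising."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        entity = fetch()
        if (entity or {}).get("status") == wanted:
            return True
        time.sleep(interval)
    return False

# Stub fetch: the first reply lacks 'status', the second carries it.
replies = iter([{}, {"status": "down"}])
print(wait_for_status(lambda: next(replies), interval=0.01))  # → True
```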
Created attachment 1592551 [details] logs_TestCase18258
I ran TestCase16408 in a loop (10 iterations). The first 10 attempts passed; on the next run of 10, the first attempt failed.

2019-08-12 11:01:35,108 - MainThread - root - INFO - create new template with {'name': 'vm_TestCase16408_1211011132', 'disks': {'e1b69f7b-89d6-429d-bf36-e9e317d3491e': {'storagedomain': 'iscsi_2', 'sparse': True, 'format': 'cow'}, 'f1c66cf7-fd3f-4b2a-a7f0-d9e78c37b092': {'storagedomain': 'iscsi_2', 'sparse': True, 'format': 'cow'}}, 'vm': 'vm_TestCase16408_1211011132', 'cluster': 'golden_env_mixed_1', 'storagedomain': 'iscsi_2', 'timeout': 900, 'wait': True}
2019-08-12 11:01:35,537 - MainThread - templates - INFO - Using Correlation-Id: templates_create_eff69600-9628-45c9
2019-08-12 11:01:37,193 - MainThread - templates - INFO - New entity was added
2019-08-12 11:01:37,197 - MainThread - templates - INFO - Property 'template->name' has correct value: vm_TestCase16408_1211011132
2019-08-12 11:01:37,198 - MainThread - templates - INFO - Property 'template->cluster->id' has correct value: cfacd546-8fa2-4d7d-bb54-7871558e7de9
2019-08-12 11:02:47,844 - MainThread - templates - ERROR - Response code is not valid, expected is: [200, 201], actual is: 404
2019-08-12 11:02:47,845 - MainThread - templates - ERROR - Element template doesn't have attribute status
2019-08-12 11:02:47,846 - MainThread - root - ERROR - Failed to create new template with {'name': 'vm_testcase16408_1211011132', 'disks': {'e1b69f7b-89d6-429d-bf36-e9e317d3491e': {'storagedomain': 'iscsi_2', 'sparse': true, 'format': 'cow'}, 'f1c66cf7-fd3f-4b2a-a7f0-d9e78c37b092': {'storagedomain': 'iscsi_2', 'sparse': true, 'format': 'cow'}}, 'vm': 'vm_testcase16408_1211011132', 'cluster': 'golden_env_mixed_1', 'storagedomain': 'iscsi_2', 'timeout': 900, 'wait': true}
2019-08-12 11:02:47,961 - MainThread - art.ll_lib.vms - INFO - Powering off VMs: ['vm_TestCase16408_1211011132']
2019-08-12 11:02:48,137 - MainThread - root - INFO - Remove VM
vm_TestCase16408_1211011132 with {'wait': False}
2019-08-12 11:02:48,230 - MainThread - vms - INFO - Using Correlation-Id: vms_delete_683692d9-7983-468c
2019-08-12 11:02:49,838 - MainThread - test_utils - INFO - All ['vm_TestCase16408_1211011132'] are gone.
TemplateException: Unable to create template vm_TestCase16408_1211011132

Errors from engine.log:
2019-08-12 10:26:03,338+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-7) [] EVENT_ID: USER_VDC_LOGIN_FAILED(114), User admin@internal connecting from '10.35.206.42' failed to log in : 'Unable to log in. Verify your login information or contact the system administrator.'.
2019-08-12 10:26:03,342+03 ERROR [org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-1) [] Cannot authenticate user 'admin@internal' connecting from '10.35.206.42': Unable to log in. Verify your login information or contact the system administrator.
2019-08-12 10:26:06,698+03 ERROR [org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-3) [] Cannot authenticate user 'admin@internal' connecting from '10.35.206.42': Unable to log in. Verify your login information or contact the system administrator.
2019-08-12 10:33:25,074+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-9) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:36:26,426+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-9) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:39:40,787+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:42:45,534+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:45:54,948+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:48:52,418+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:51:41,298+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:54:34,310+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-9) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:56:59,733+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 10:59:41,677+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-13) [] Operation Failed: [Cannot add VM. Thin provisioned template disks can not be defined as Raw.]
2019-08-12 11:02:41,712+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [templates_create_eff69600-9628-45c9] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
2019-08-12 11:02:41,735+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [templates_create_eff69600-9628-45c9] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template vm_TestCase16408_1211011132 or its disks from VM vm_TestCase16408_1211011132.

Logs attached
Created attachment 1602784 [details] Logs_bug_1588100
I ran TestCase11987 in a loop (10 times). After 7 attempts, the test case failed.

From the engine log:
2019-08-18 17:32:19,072+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [templates_create_8b05112a-02b7-450c] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
2019-08-18 17:32:19,088+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [templates_create_8b05112a-02b7-450c] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='86dcfb5e-5bef-4e0d-8a8c-cc1ad85301df', status='Down', exitStatus='Normal'}), log id: 567cc4a4
2019-08-18 17:32:19,092+03 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [templates_create_8b05112a-02b7-450c] FINISH, SetVmStatusVDSCommand, return: , log id: 567cc4a4
2019-08-18 17:32:19,100+03 INFO [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [templates_create_8b05112a-02b7-450c] Lock freed to object 'EngineLock:{exclusiveLocks='[templ_TestCase11987_1817310865=TEMPLATE_NAME, 926f67f6-417f-465d-bf5d-d9ed2ecb6b30=TEMPLATE, a01dcac5-6809-4971-b3cd-63c762b1b731=DISK]', sharedLocks='[86dcfb5e-5bef-4e0d-8a8c-cc1ad85301df=VM]'}'
2019-08-18 17:32:19,113+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [templates_create_8b05112a-02b7-450c] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template templ_TestCase11987_1817310865 or its disks from VM vm_TestCase11987_1817303663.

Logs attached
Created attachment 1605493 [details] logs_testCase11987_4.3.6
(In reply to Shir Fishbain from comment #43)
> Created attachment 1605493 [details]
> logs_testCase11987_4.3.6

Please add the server.log. If you don't have it right now, please add it the next time this occurs.
Created attachment 1605746 [details] server_log
We just hit the same issue when running our regression suite. There is a test that runs 5 times, in which we create a template out of a VM with two disks (1: raw, 2: thin). The first 4 runs succeeded and the fifth failed with the same engine error:

2019-09-24 18:42:17,114Z INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [e86fb4dd-8d18-45b0-8714-a95de12342aa] Command 'AddVmTemplate' id: '14f0dcf1-8270-4763-a5d7-041074e8d7fd' child commands '[8965c456-8b50-43ea-9a90-263022e304bb]' executions were completed, status 'FAILED'
2019-09-24 18:42:18,420Z ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [e86fb4dd-8d18-45b0-8714-a95de12342aa] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
2019-09-24 18:42:18,435Z INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [e86fb4dd-8d18-45b0-8714-a95de12342aa] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='a1e79ee3-9a51-47a2-b1bc-fe135d68047d', status='Down', exitStatus='Normal'}), log id: 5fe2a09e
2019-09-24 18:42:18,439Z INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [e86fb4dd-8d18-45b0-8714-a95de12342aa] FINISH, SetVmStatusVDSCommand, return: , log id: 5fe2a09e
2019-09-24 18:42:18,446Z INFO [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [e86fb4dd-8d18-45b0-8714-a95de12342aa] Lock freed to object 'EngineLock:{exclusiveLocks='[ansible_template_from_vm_ansible_regression_vm=TEMPLATE_NAME, 782a4e88-5f18-45d6-b8a7-9ca3124f345f=TEMPLATE, 127fd20a-332e-4657-ad8c-aa508550d865=DISK, d80a8006-40d5-4ffb-bfd8-f1bd0ef4f44e=DISK]', sharedLocks='[a1e79ee3-9a51-47a2-b1bc-fe135d68047d=VM]'}'
2019-09-24 18:42:18,457Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [e86fb4dd-8d18-45b0-8714-a95de12342aa] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template ansible_template_from_vm_ansible_regression_vm or its disks from VM ansible_regression_vm.

Looking at the vdsm log, I can see that both disks were successfully copied.

Start of first disk:
2019-09-24 18:36:00,055+0000 INFO (jsonrpc/5) [vdsm.api] START copyImage(sdUUID=u'9ca6d43d-0990-474d-84e6-aff597031cd2', spUUID=u'07d89a33-43b3-48bb-aa77-e3961644edd9', vmUUID='', srcImgUUID=u'db2aae08-9d00-4dcc-8c37-e9c4d3751503', srcVolUUID=u'76a3b457-d2af-461a-a061-e01e87f7818a', dstImgUUID=u'efbd19e2-2232-47c0-92a2-4aeb5214384b', dstVolUUID=u'767df6fc-1b3c-473b-b71e-cd23b9a055d6', description=u'{"DiskAlias":"basic_regression_data","DiskDescription":""}', dstSdUUID=u'9ca6d43d-0990-474d-84e6-aff597031cd2', volType=6, volFormat=5, preallocate=1, postZero=u'true', force=u'false', discard=False) from=::ffff:10.1.41.3,54792, flow_id=e86fb4dd-8d18-45b0-8714-a95de12342aa, task_id=3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df (api:48)

End of first disk:
2019-09-24 18:42:16,611+0000 INFO (jsonrpc/3) [vdsm.api] FINISH getAllTasksStatuses return={'allTasksStatus': {'3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df'}, '7c50d976-0afd-43b5-b9ce-1a77e1776546': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '7c50d976-0afd-43b5-b9ce-1a77e1776546'}}} from=::ffff:10.1.41.3,53950, task_id=b3a9dda8-8aeb-4504-8c8e-eb74c096c5a7 (api:54)

Start of second disk:
2019-09-24 18:36:03,939+0000 INFO (jsonrpc/6) [vdsm.api] START copyImage(sdUUID=u'9ca6d43d-0990-474d-84e6-aff597031cd2', spUUID=u'07d89a33-43b3-48bb-aa77-e3961644edd9', vmUUID='',
srcImgUUID=u'563265f3-463b-479e-a4e4-ae6c86b22a00', srcVolUUID=u'42135879-913d-4b90-9f73-59aa9c46d94c', dstImgUUID=u'e0ab9960-3190-4165-85a9-66f89e3bbb28', dstVolUUID=u'624012a5-32d1-47ff-a41f-6113a49bacf7', description=u'{"DiskAlias":"rhel76_guest_t1_wad","DiskDescription":""}', dstSdUUID=u'9ca6d43d-0990-474d-84e6-aff597031cd2', volType=6, volFormat=4, preallocate=2, postZero=u'true', force=u'false', discard=False) from=::ffff:10.1.41.3,54792, flow_id=e86fb4dd-8d18-45b0-8714-a95de12342aa, task_id=7c50d976-0afd-43b5-b9ce-1a77e1776546 (api:48)

End of second disk:
2019-09-24 18:42:16,611+0000 INFO (jsonrpc/3) [vdsm.api] FINISH getAllTasksStatuses return={'allTasksStatus': {'3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df'}, '7c50d976-0afd-43b5-b9ce-1a77e1776546': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '7c50d976-0afd-43b5-b9ce-1a77e1776546'}}} from=::ffff:10.1.41.3,53950, task_id=b3a9dda8-8aeb-4504-8c8e-eb74c096c5a7 (api:54)

In addition, I am attaching the server.log as suggested above: server_25-10.log
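The getAllTasksStatuses payloads quoted above all share one shape: an 'allTasksStatus' dict mapping task IDs to per-task status dicts. A small sketch of how such a response can be checked for overall success (the field names are taken from the logs above; everything else is illustrative):

```python
def all_tasks_succeeded(response):
    """Return True only if every task in a vdsm getAllTasksStatuses
    response is finished with taskResult == 'success' and code 0."""
    tasks = response.get("allTasksStatus", {})
    return bool(tasks) and all(
        t.get("taskState") == "finished"
        and t.get("taskResult") == "success"
        and t.get("code") == 0
        for t in tasks.values()
    )

# Shape taken from the log excerpt above (values abbreviated).
resp = {"allTasksStatus": {
    "3e4ddb66-a49c-4be8-9c2e-5e6c9bdfb5df": {
        "code": 0, "message": "1 jobs completed successfully",
        "taskState": "finished", "taskResult": "success"},
    "7c50d976-0afd-43b5-b9ce-1a77e1776546": {
        "code": 0, "message": "1 jobs completed successfully",
        "taskState": "finished", "taskResult": "success"},
}}
print(all_tasks_succeeded(resp))  # → True
```

Both disk-copy tasks in this comment pass this check, which is what makes the subsequent engine-side failure surprising.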
Created attachment 1619033 [details] server log
The issue is seen again, this time in regression TestCase16782/Tier2, with ovirt-engine-4.3.6.6-0.1.el7.noarch / vdsm-4.30.33-1.el7ev.x86_64.

Engine log:
[~/Downloads]$ grep ERROR engine.log
2019-09-27 19:00:15,106+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-2) [templates_create_625c5dae-312f-4bb4] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
2019-09-27 19:00:15,171+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-2) [templates_create_625c5dae-312f-4bb4] EVENT_ID: USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE(1,322), Failed to create Template templ_TestCase16782_2718590796 or its disks from VM golden_env_mixed_virtio_1_0.

VDSM log:
[~/Downloads]$ grep 625c5dae-312f-4bb4 hypervisor-lynx25_vdsm.log
2019-09-27 18:59:09,055+0300 INFO (jsonrpc/2) [vdsm.api] START copyImage(sdUUID=u'ef995e03-d61d-4f28-aa98-c3d24193ef04', spUUID=u'eff761b8-5ff2-4a26-a68d-5d8887dac159', vmUUID='', srcImgUUID=u'4ea0208e-e26c-4246-af89-3f7d4239e6b4', srcVolUUID=u'4761e91a-47c9-4dde-a2be-b33de72d503c', dstImgUUID=u'01928422-9555-41c4-8a02-8020b981ef65', dstVolUUID=u'737518de-8e8d-410d-a64d-e7c5abb22219', description=u'{"DiskAlias":"golden_template_default_rhel8.0","DiskDescription":"rhel8.0_rhv4.3_guest_disk (7d3bff6)"}', dstSdUUID=u'50a1418d-e9cc-4eaf-b845-a5916737fc82', volType=6, volFormat=4, preallocate=2, postZero=u'false', force=u'false', discard=False) from=::ffff:10.46.16.255,53028, flow_id=templates_create_625c5dae-312f-4bb4, task_id=d35422f8-3a25-4f70-a2db-2ed6316f446c (api:48)
2019-09-27 18:59:09,186+0300 INFO (jsonrpc/2) [vdsm.api] FINISH copyImage return=None from=::ffff:10.46.16.255,53028, flow_id=templates_create_625c5dae-312f-4bb4, task_id=d35422f8-3a25-4f70-a2db-2ed6316f446c (api:54)
2019-09-27 19:00:12,741+0300 INFO (jsonrpc/5) [vdsm.api] START getVolumeInfo(sdUUID=u'50a1418d-e9cc-4eaf-b845-a5916737fc82', spUUID=u'eff761b8-5ff2-4a26-a68d-5d8887dac159', imgUUID=u'01928422-9555-41c4-8a02-8020b981ef65', volUUID=u'737518de-8e8d-410d-a64d-e7c5abb22219', options=None) from=::ffff:10.46.16.255,53014, flow_id=templates_create_625c5dae-312f-4bb4, task_id=9fc632ea-634d-4eca-8e40-6a28b65b65c4 (api:48)
2019-09-27 19:00:12,764+0300 INFO (jsonrpc/5) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '50a1418d-e9cc-4eaf-b845-a5916737fc82', 'voltype': 'SHARED', 'description': '{"DiskAlias":"golden_template_default_rhel8.0","DiskDescription":"rhel8.0_rhv4.3_guest_disk (7d3bff6)"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0, 'image': '01928422-9555-41c4-8a02-8020b981ef65', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '2818572288', 'children': [], 'pool': '', 'ctime': '1569599951', 'capacity': '10737418240', 'uuid': u'737518de-8e8d-410d-a64d-e7c5abb22219', 'truesize': '2818572288', 'type': 'SPARSE', 'lease': {'path': '/dev/50a1418d-e9cc-4eaf-b845-a5916737fc82/leases', 'owners': [], 'version': None, 'offset': 110100480}}} from=::ffff:10.46.16.255,53014, flow_id=templates_create_625c5dae-312f-4bb4, task_id=9fc632ea-634d-4eca-8e40-6a28b65b65c4 (api:54)
2019-09-27 19:00:14,047+0300 INFO (jsonrpc/0) [vdsm.api] START clearTask(taskID=u'd35422f8-3a25-4f70-a2db-2ed6316f446c', spUUID=None, options=None) from=::ffff:10.46.16.255,53014, flow_id=templates_create_625c5dae-312f-4bb4, task_id=ad1211c1-c5f3-4125-b6d9-bf8bfd08e081 (api:48)
2019-09-27 19:00:14,067+0300 INFO (jsonrpc/0) [vdsm.api] FINISH clearTask return=None from=::ffff:10.46.16.255,53014, flow_id=templates_create_625c5dae-312f-4bb4, task_id=ad1211c1-c5f3-4125-b6d9-bf8bfd08e081 (api:54)
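The grep shown above correlates the engine and vdsm sides of one operation via the flow_id / correlation-id markers. The same correlation can be sketched in a few lines of Python (the sample lines below are abbreviated from the logs in this comment):

```python
import re

def entries_for_flow(log_text, flow_id):
    """Collect the log lines belonging to one engine/vdsm flow, by
    matching the flow_id= / [correlation-id] markers in each line."""
    pattern = re.compile(re.escape(flow_id))
    return [line for line in log_text.splitlines() if pattern.search(line)]

sample = "\n".join([
    "INFO START copyImage(...) flow_id=templates_create_625c5dae-312f-4bb4",
    "INFO unrelated line",
    "INFO FINISH copyImage return=None flow_id=templates_create_625c5dae-312f-4bb4",
])
print(len(entries_for_flow(sample, "templates_create_625c5dae-312f-4bb4")))  # → 2
```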
Created attachment 1621801 [details] engine and vdsm logs of the issue on ovirt-engine 4.3.6.6-0.1.el7
...and another occurrence:
ovirt-engine-4.3.6.6-0.1.el7.noarch (build name rhv-4.3.6-9)
vdsm-4.30.33-1.el7ev.x86_64
Kernel 3.10.0-1062.1.2.el7.x86_64
Created attachment 1623044 [details] engine, vdsm and server logs
(In reply to Beni Pelled from comment #51)
> Created attachment 1623044 [details]
> engine, vdsm and server logs

Adding the same logs again and again will not assist in analyzing this case; the regular logs don't contain the needed data. Please add logs that contain DEBUG-level messages.
Forgot the needinfo :) Can you please contact the relevant QE directly so they can show you a reproducer and you can investigate with QE?
(In reply to Lukas Svaty from comment #53) > Forgot the needinfo :) Can you please contact relevant QE directly so they > can show you a reproducer and you can investigate with QE? QE doesn't have a reproducer; it is a random issue in the automation.
(In reply to Eyal Shenitzky from comment #52)
> Adding the same logs, again and again, will not assist in analyzing that
> case. The regular logs don't contain the needed data.
> Please add logs that contains DEBUG messages only.

Managed to reproduce it; changed the logs to DEBUG level.

VDSM log:
2020-07-21 09:57:58,070+0300 INFO (jsonrpc/5) [vdsm.api] START copyImage(sdUUID='f4a1a4da-bb8d-483e-9a91-55dcc426d318', spUUID='f20d09ce-ce12-4af7-a7b0-362ead08724d', vmUUID='', srcImgUUID='0f75e2fb-a518-48c5-8606-3513f0d089bb', srcVolUUID='95614b7e-9e45-4f50-9c36-087523350652', dstImgUUID='6f21f0f7-995c-4d15-907f-eb53575fd71b', dstVolUUID='9e047b90-6976-4014-b6ca-76bdff0b95f0', description='{"DiskAlias":"vm_TestCaseCreateTemplateF_2109505102_Disk_0","DiskDescription":"latest-rhel-guest-image-8.2-infra (9f287fd)"}', dstSdUUID='f4a1a4da-bb8d-483e-9a91-55dcc426d318', volType=6, volFormat=4, preallocate=2, postZero='false', force='false', discard=False) from=::ffff:10.35.161.118,57182, flow_id=templates_create_05dee8db-0c1a-45fd, task_id=af9628b9-2d2f-450b-ae7c-8147dcb590e4 (api:48)
2020-07-21 09:58:41,549+0300 INFO (jsonrpc/3) [vdsm.api] FINISH getAllTasksStatuses return={'allTasksStatus': {'af9628b9-2d2f-450b-ae7c-8147dcb590e4': {'taskID': 'af9628b9-2d2f-450b-ae7c-8147dcb590e4', 'taskState': 'finished', 'taskResult': 'success', 'code': 0, 'message': '1 jobs completed successfully'}}} from=::ffff:10.35.161.118,57154, task_id=8d174350-47e2-4226-be16-b0bbb0151b9c (api:54)

Engine log:
2020-07-21 09:58:44,793+03 ERROR [org.ovirt.engine.core.bll.AddVmTemplateFromSnapshotCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25) [templates_create_05dee8db-0c1a-45fd] Ending command 'org.ovirt.engine.core.bll.AddVmTemplateFromSnapshotCommand' with failure.
Versions:
ovirt-engine-4.4.1.8-0.7.el8ev.noarch
vdsm-4.40.22-1.el8ev.x86_64

Attaching relevant logs.
Created attachment 1701851 [details] create_template_from_snapshot_failure_logs
Unfortunately, I still can't find any reasonable explanation for the failure of the template creation operation. According to the logs, the creation of the template images was successful, so the command started the endSuccessfully() method. There, we update the image info by calling: get image info -> set QCOW format -> prepare image -> get qemu image info -> teardown image. It seems to me that something went wrong when the engine tried to end the task for tearing down the image. Benny, can you please take a look? Maybe I missed something.
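The sequence described in the comment above can be sketched as follows. This is a simplified Python illustration of the flow, not the actual engine Java code; the step names follow the comment, and it shows how a late failure in any follow-up step fails the whole command even though the copy tasks already reported 'success' on vdsm:

```python
def end_add_vm_template(steps):
    """Illustrative model of AddVmTemplate's endSuccessfully() path:
    run the post-copy steps in order; any raised error fails the
    whole command."""
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            # A late failure here would explain "Ending command ...
            # with failure" despite taskResult == 'success' on vdsm.
            return f"FAILED at {name}: {exc}"
    return "SUCCEEDED"

ok = lambda: None
def boom():
    raise RuntimeError("task end failed")

flow = [("get image info", ok), ("set qcow format", ok),
        ("prepare image", ok), ("get qemu image info", ok),
        ("teardown image", boom)]
print(end_add_vm_template(flow))  # → FAILED at teardown image: task end failed
```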
There are no steps to reproduce; this can be verified by running the regular tiers.
Verified by running our regression automation tiers (including the test cases listed in "QA Whiteboard" with an iSCSI SD).
RHV build name: rhv-4.4.7-6 (RC)
ovirt-engine-4.4.7.6-0.11.el8ev.noarch
vdsm-4.40.70.6-1.el8ev.x86_64
This bugzilla is included in the oVirt 4.4.7 release, published on July 6th 2021. Since the problem described in this bug report should be resolved in the oVirt 4.4.7 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.