Bug 1950209 - Leaf images used by the VM are deleted by the engine during snapshot merge
Summary: Leaf images used by the VM are deleted by the engine during snapshot merge
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.4.4
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.6-1
Target Release: 4.4.6
Assignee: Milan Zamazal
QA Contact: Qin Yuan
URL:
Whiteboard:
Duplicates: 1957992 (view as bug list)
Depends On:
Blocks:
 
Reported: 2021-04-16 04:36 UTC by nijin ashok
Modified: 2022-08-18 14:45 UTC
CC: 12 users

Fixed In Version: ovirt-engine-4.4.6.7 vdsm-4.40.60.7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-01 13:25:23 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:




Links
Red Hat Knowledge Base (Solution) 6023851, last updated 2021-05-07 03:44:07 UTC
Red Hat Product Errata RHBA-2021:2182, last updated 2021-06-01 13:25:29 UTC
oVirt gerrit 114591 (master, MERGED): core: Make sure async snapshots are used on newer hosts, last updated 2021-05-03 19:19:05 UTC
oVirt gerrit 114596 (ovirt-engine-4.4.6.z, MERGED): core: Make sure async snapshots are used on newer hosts, last updated 2021-05-04 10:06:43 UTC
oVirt gerrit 114680 (ovirt-4.4.6, MERGED): virt: Refuse to run a snapshot with an invalid timeout, last updated 2021-05-17 12:00:05 UTC

Comment 3 Eyal Shenitzky 2021-04-19 14:21:49 UTC
Benny, can you please have a look at this issue?

Comment 4 Benny Zlotnik 2021-04-28 14:56:11 UTC
Looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1916122

Liran, what do you think?

I suppose the logging will be fixed by https://bugzilla.redhat.com/show_bug.cgi?id=1933669
But I still don't understand how the snapshot ended up in the chain after it failed (because of a libvirt error?)

2021-04-12 16:31:24,487+0300 WARN  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Unable to freeze guest filesystems: Guest agent is not responding: QEMU guest agent is not connected (vm:4143)
2021-04-12 16:31:24,488+0300 INFO  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Taking a live snapshot (drives=vda,memory=True) (snapshot:460)
2021-04-12 16:31:24,694+0300 INFO  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Snapshot timeout reached, operation aborted (snapshot:80)
2021-04-12 16:31:24,694+0300 INFO  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Thawing guest filesystems (vm:4161)
2021-04-12 16:31:24,696+0300 WARN  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Unable to thaw guest filesystems: Guest agent is not responding: QEMU guest agent is not connected (vm:4170)


The timeout is reached after only 200 ms:
2021-04-12 16:31:24,488+0300 INFO  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Taking a live snapshot (drives=vda,memory=True) (snapshot:460)
2021-04-12 16:31:24,694+0300 INFO  (virt/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Snapshot timeout reached, operation aborted (snapshot:80)

Comment 5 Liran Rotenberg 2021-04-29 07:58:14 UTC
It looks exactly the same.
BZ 1933669 will improve the abort logging but I also saw this line:
2021-04-12 16:30:34,444+0300 INFO  (snap_abort/9dc3a8da) [virt.vm] (vmId='301c01ec-ff6c-4705-a7d7-e33612800292') Starting snapshot abort job, with check interval 0 (snapshot:616)

This means the timeout is set to 0 minutes; we receive it in minutes, multiply it by 60 to convert to seconds, and do:
monitoring_interval = min(60, self._timeout // 10)
self._vm.log.info("Starting snapshot abort job, "
                  "with check interval %s",
                  monitoring_interval)

Can you check what the output of engine-config -g LiveSnapshotTimeoutInMinutes is?

Unfortunately it looks like I allowed 0:
LiveSnapshotTimeoutInMinutes.validValues=0..3000

I would like to see the libvirt logs as well, but the timeout being set to 0 might be the source of the problem.
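The interval computation quoted above can be illustrated with a minimal sketch (the helper function name is mine; only the min(60, timeout // 10) line is from vdsm):

```python
def monitoring_interval_for(timeout_minutes):
    # The engine passes the timeout in minutes; vdsm converts to seconds.
    timeout = timeout_minutes * 60
    # Computation quoted from vdsm's snapshot abort job: poll at most
    # every 60 seconds, or ten times over the timeout window.
    return min(60, timeout // 10)

# The default of 30 minutes is capped at a 60-second interval, but a
# configured value of 0 minutes yields interval 0, matching the
# "Starting snapshot abort job, with check interval 0" log line above.
print(monitoring_interval_for(30))  # 60
print(monitoring_interval_for(0))   # 0
```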

Comment 6 Liran Rotenberg 2021-04-29 10:03:14 UTC
Thinking more carefully, let's say the timeout is 0. Then the abort thread will end but the snapshot will continue.
The abort mechanism won't set the abort flag and won't execute domjobabort; something else must have done that, which set the abort flag as a result.

If a timeout of 0 is the cause of this bug, then although it seems exactly the same as BZ 1916122, it is not.
But given the deeper thought above, I doubt whether the timeout of 0 is the real issue here.
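The eventual vdsm-side fix (gerrit 114680, "virt: Refuse to run a snapshot with an invalid timeout") rejects such a value up front. A minimal sketch of that idea, with hypothetical names (the exception class and function are illustrative, not vdsm's actual API):

```python
class InvalidSnapshotTimeout(Exception):
    """Raised when the engine passes a non-positive snapshot timeout."""


def validated_timeout_seconds(timeout_minutes):
    # Refuse to start the snapshot job rather than arming an abort
    # thread with a zero (or negative) check interval.
    if timeout_minutes <= 0:
        raise InvalidSnapshotTimeout(
            "snapshot timeout must be positive, got %r" % timeout_minutes)
    return timeout_minutes * 60  # seconds, as used by the abort job

print(validated_timeout_seconds(30))  # 1800
```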

Comment 9 Arik 2021-05-03 19:20:52 UTC
oops, needs backport..

Comment 12 Germano Veit Michel 2021-05-07 03:44:07 UTC
*** Bug 1957992 has been marked as a duplicate of this bug. ***

Comment 14 Qin Yuan 2021-05-10 16:24:54 UTC
Verified with:
ovirt-engine-4.4.6.7-0.1.el8ev.noarch
4.6 host: vdsm-4.40.60.6-1.el8ev.x86_64
4.3 host: vdsm-4.30.51-1.el7ev.x86_64

Steps:
1. Prepare 3 VMs:
   - vm_46_43 runs on 4.3 cluster, 4.6 host
   - vm_43_43 runs on 4.3 cluster, 4.3 host
   - vm_46_46 runs on 4.6 cluster, 4.6 host

2. Take live snapshot for the 3 VMs respectively.

Results:
1. Taking live snapshot for vm_46_43 succeeds. The live snapshot command timeout in vdsm log is 30. Engine waits for job on host to finish.

vdsm.log:
2021-05-10 11:02:43,512-0400 INFO  (jsonrpc/1) [api.virt] START snapshot(snapDrives=[{'imageID': '678eecc9-eddb-4ad0-8488-a16f18fa1cfa', 'baseVolumeID': '6c00ad21-f35a-419a-ba19-f9feb8b61a3d', 'volumeID': 'd2a8b57e-5585-400e-b7c4-48672a23ec43', 'domainID': '528b4865-0381-484f-a2b5-590889a30002'}], snapMemory=None, frozen=False, jobUUID='1650435d-28df-4d45-bf14-d795db535fa1', timeout=30) from=::ffff:10.35.30.101,35424, flow_id=418c0ca7-6d72-410c-92ea-2376662daee3, vmId=f250cc47-5097-4833-9f87-f66ccefe5926 (api:48)
2021-05-10 11:02:43,513-0400 INFO  (jsonrpc/1) [api.virt] FINISH snapshot return={'status': {'code': 0, 'message': 'Done'}} from=::ffff:10.35.30.101,35424, flow_id=418c0ca7-6d72-410c-92ea-2376662daee3, vmId=f250cc47-5097-4833-9f87-f66ccefe5926 (api:54)
2021-05-10 11:02:43,515-0400 INFO  (virt/1650435d) [root] Running job '1650435d-28df-4d45-bf14-d795db535fa1'... (jobs:185)
2021-05-10 11:02:43,515-0400 INFO  (snap_abort/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Starting snapshot abort job, with check interval 60 (snapshot:622)
...
2021-05-10 11:02:44,748-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Disabling drive monitoring (drivemonitor:66)
2021-05-10 11:02:44,748-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Freezing guest filesystems (vm:4172)
2021-05-10 11:02:44,797-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') 1 guest filesystems frozen (vm:4191)
2021-05-10 11:02:44,798-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Taking a live snapshot (drives=vda,memory=True) (snapshot:460)
2021-05-10 11:02:45,003-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Completed live snapshot (snapshot:491)
2021-05-10 11:02:45,003-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Starting snapshot teardown (snapshot:216)
2021-05-10 11:02:45,003-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') Thawing guest filesystems (vm:4199)
2021-05-10 11:02:45,016-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') 1 guest filesystems thawed (vm:4218)
2021-05-10 11:02:45,027-0400 INFO  (virt/1650435d) [virt.vm] (vmId='f250cc47-5097-4833-9f87-f66ccefe5926') clearing block threshold for drive 'vda[1]' (drivemonitor:149)

engine.log
2021-05-10 18:02:37,357+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-82162) [418c0ca7-6d72-410c-92ea-2376662daee3] EVENT_ID: USER_CREATE_SNAPSHOT(45), Snapshot 'snap_46_43' creation for VM 'vm_46_43' was initiated by admin@internal-authz.
...
2021-05-10 18:02:44,550+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [418c0ca7-6d72-410c-92ea-2376662daee3] Command CreateLiveSnapshotForVm id: 'a2dd8eec-bb0b-4bce-bab2-2d0ace1e4a7b': waiting for job '1650435d-28df-4d45-bf14-d795db535fa1' on host 'host_46_43' (id: '7f9a8f65-16b2-4277-bc3b-e97fa40c5573') to complete
...
2021-05-10 18:02:46,579+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53) [418c0ca7-6d72-410c-92ea-2376662daee3] Command CreateLiveSnapshotForVm id: 'a2dd8eec-bb0b-4bce-bab2-2d0ace1e4a7b': job '1650435d-28df-4d45-bf14-d795db535fa1' execution was completed with VDSM job status 'done'
2021-05-10 18:02:46,583+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53) [418c0ca7-6d72-410c-92ea-2376662daee3] Command CreateLiveSnapshotForVm id: 'a2dd8eec-bb0b-4bce-bab2-2d0ace1e4a7b': execution was completed, the command status is 'SUCCEEDED'
...
2021-05-10 18:02:49,865+03 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] Lock freed to object 'EngineLock:{exclusiveLocks='[f250cc47-5097-4833-9f87-f66ccefe5926=VM]', sharedLocks=''}'
2021-05-10 18:02:49,875+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), Snapshot 'snap_46_43' creation for VM 'vm_46_43' has been completed.


2. Taking live snapshot for vm_43_43 succeeds. There is no live snapshot timeout parameter in vdsm log.

vdsm.log:
2021-05-10 11:09:05,006-0400 INFO  (jsonrpc/0) [api.virt] START snapshot(snapDrives=[{u'baseVolumeID': u'18689778-47ba-42bf-959d-128bc0c310ef', u'domainID': u'528b4865-0381-484f-a2b5-590889a30002', u'volumeID': u'1dc1cf7e-520a-4619-ae29-6ca32e062cef', u'imageID': u'df64a90b-1cdc-419d-b441-cbea087faafe'}], snapMemory=None, frozen=False) from=::ffff:10.35.30.101,43004, flow_id=732ff26c-d254-494b-84d1-b2ba5a2ff03d, vmId=b6d24fab-b324-4127-b434-e4a691ef7882 (api:48)
2021-05-10 11:09:05,007-0400 INFO  (jsonrpc/0) [vdsm.api] START prepareImage(sdUUID=u'528b4865-0381-484f-a2b5-590889a30002', spUUID='63607264-2540-4399-af23-b87d96a6fc50', imgUUID=u'df64a90b-1cdc-419d-b441-cbea087faafe', leafUUID=u'1dc1cf7e-520a-4619-ae29-6ca32e062cef', allowIllegal=False) from=::ffff:10.35.30.101,43004, flow_id=732ff26c-d254-494b-84d1-b2ba5a2ff03d, task_id=901fad80-b722-447f-843f-b26b27298049 (api:48)
...
2021-05-10 11:09:05,338-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Disabling drive monitoring (drivemonitor:60)
2021-05-10 11:09:05,339-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Freezing guest filesystems (vm:4272)
2021-05-10 11:09:05,400-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') 1 guest filesystems frozen (vm:4287)
2021-05-10 11:09:05,400-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Taking a live snapshot (drives=vda, memory=True) (vm:4517)
2021-05-10 11:09:05,612-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Completed live snapshot (vm:4519)
2021-05-10 11:09:05,612-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Thawing guest filesystems (vm:4295)
2021-05-10 11:09:05,622-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') 1 guest filesystems thawed (vm:4310)
2021-05-10 11:09:05,648-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') clearing block threshold for drive 'vda[1]' (drivemonitor:143)
2021-05-10 11:09:05,661-0400 INFO  (jsonrpc/0) [vdsm.api] START getVolumeSize(sdUUID=u'528b4865-0381-484f-a2b5-590889a30002', spUUID='63607264-2540-4399-af23-b87d96a6fc50', imgUUID=u'df64a90b-1cdc-419d-b441-cbea087faafe', volUUID=u'1dc1cf7e-520a-4619-ae29-6ca32e062cef', options=None) from=::ffff:10.35.30.101,43004, flow_id=732ff26c-d254-494b-84d1-b2ba5a2ff03d, task_id=1e83390e-9863-4b5e-be44-3a90ff6489a4 (api:48)
2021-05-10 11:09:05,662-0400 INFO  (jsonrpc/0) [vdsm.api] FINISH getVolumeSize return={'truesize': '204800', 'apparentsize': '196768'} from=::ffff:10.35.30.101,43004, flow_id=732ff26c-d254-494b-84d1-b2ba5a2ff03d, task_id=1e83390e-9863-4b5e-be44-3a90ff6489a4 (api:54)
2021-05-10 11:09:05,662-0400 INFO  (jsonrpc/0) [virt.vm] (vmId='b6d24fab-b324-4127-b434-e4a691ef7882') Enabling drive monitoring (drivemonitor:56)
2021-05-10 11:09:05,663-0400 INFO  (jsonrpc/0) [api.virt] FINISH snapshot return={'status': {'message': 'Done', 'code': 0}, 'quiesce': True} from=::ffff:10.35.30.101,43004, flow_id=732ff26c-d254-494b-84d1-b2ba5a2ff03d, vmId=b6d24fab-b324-4127-b434-e4a691ef7882 (api:54)
2021-05-10 11:09:05,663-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.snapshot succeeded in 0.66 seconds (__init__:312)

engine.log:
2021-05-10 18:09:00,803+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-82366) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] EVENT_ID: USER_CREATE_SNAPSHOT(45), Snapshot 'snap_43_43' creation for VM 'vm_43_43' was initiated by admin@internal-authz.
...
2021-05-10 18:09:05,008+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-30) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] START, SnapshotVDSCommand(HostName = host_43_43, SnapshotVDSCommandParameters:{hostId='131a2694-c33a-4c50-a759-ff68866db81e', vmId='b6d24fab-b324-4127-b434-e4a691ef7882'}), log id: 3a1b2cd9
2021-05-10 18:09:05,670+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-30) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] FINISH, SnapshotVDSCommand, return: 96f91410-c7e8-4dfd-a2a3-9a38a44ced3a, log id: 3a1b2cd9
2021-05-10 18:09:07,706+03 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] Command 'CreateSnapshotForVm' id: '17f733ec-8a45-4e38-8261-33e9590e33ad' child commands '[9f3d8129-8086-43d7-8bd2-b795806b2b43, e97b2d31-c21c-4196-bd84-4d81aa88ef49]' executions were completed, status 'SUCCEEDED'
2021-05-10 18:09:08,732+03 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand' successfully.
2021-05-10 18:09:08,734+03 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand' successfully.
2021-05-10 18:09:08,737+03 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [732ff26c-d254-494b-84d1-b2ba5a2ff03d] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand' successfully.
...
2021-05-10 18:09:09,039+03 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [] Lock freed to object 'EngineLock:{exclusiveLocks='[b6d24fab-b324-4127-b434-e4a691ef7882=VM]', sharedLocks=''}'
2021-05-10 18:09:09,048+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), Snapshot 'snap_43_43' creation for VM 'vm_43_43' has been completed.


3. Taking live snapshot for vm_46_46 succeeds. The live snapshot command timeout in vdsm log is 30. Engine waits for job on host to finish.

vdsm.log:
2021-05-10 18:15:50,308+0300 INFO  (jsonrpc/6) [api.virt] START snapshot(snapDrives=[{'imageID': '53d035c0-61b3-434f-bc24-a175c6260808', 'baseVolumeID': '158aee6c-863a-433c-b877-b338bd7d9dcd', 'volumeID': '7535c34c-1e4b-449b-8689-54929b50007d', 'domainID': '07865cf6-6cde-48c8-80d8-0a5017912478'}], snapMemory=None, frozen=False, jobUUID='411e2e5b-4d29-4696-8633-4ad878721088', timeout=30) from=::ffff:10.35.30.101,42860, flow_id=3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f, vmId=128d7803-0f0c-402c-be3f-48ff2e9d7ac6 (api:48)
2021-05-10 18:15:50,308+0300 INFO  (jsonrpc/6) [api.virt] FINISH snapshot return={'status': {'code': 0, 'message': 'Done'}} from=::ffff:10.35.30.101,42860, flow_id=3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f, vmId=128d7803-0f0c-402c-be3f-48ff2e9d7ac6 (api:54)
2021-05-10 18:15:50,309+0300 INFO  (virt/411e2e5b) [root] Running job '411e2e5b-4d29-4696-8633-4ad878721088'... (jobs:185)
2021-05-10 18:15:50,309+0300 INFO  (snap_abort/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Starting snapshot abort job, with check interval 60 (snapshot:622)
...
2021-05-10 18:15:51,407+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Disabling drive monitoring (drivemonitor:66)
2021-05-10 18:15:51,407+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Freezing guest filesystems (vm:4172)
2021-05-10 18:15:51,429+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') 1 guest filesystems frozen (vm:4191)
2021-05-10 18:15:51,429+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Taking a live snapshot (drives=vda,memory=True) (snapshot:460)
2021-05-10 18:15:51,476+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Completed live snapshot (snapshot:491)
2021-05-10 18:15:51,476+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Starting snapshot teardown (snapshot:216)
2021-05-10 18:15:51,476+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') Thawing guest filesystems (vm:4199)
2021-05-10 18:15:51,481+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') 1 guest filesystems thawed (vm:4218)
2021-05-10 18:15:51,489+0300 INFO  (virt/411e2e5b) [virt.vm] (vmId='128d7803-0f0c-402c-be3f-48ff2e9d7ac6') clearing block threshold for drive 'vda[1]' (drivemonitor:149)

engine.log:
2021-05-10 18:15:36,274+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-82580) [3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f] EVENT_ID: USER_CREATE_SNAPSHOT(45), Snapshot 'snap_46_46' creation for VM 'vm_46_46' was initiated by admin@internal-authz.
...
2021-05-10 18:15:51,340+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58) [3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f] Command CreateLiveSnapshotForVm id: 'bfa393e0-ee87-41d3-b490-8defede49d2f': waiting for job '411e2e5b-4d29-4696-8633-4ad878721088' on host 'host_mixed_1' (id: '568ab30f-b6f2-4af1-b844-5c1013686467') to complete
...
2021-05-10 18:15:53,353+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25) [3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f] Command CreateLiveSnapshotForVm id: 'bfa393e0-ee87-41d3-b490-8defede49d2f': job '411e2e5b-4d29-4696-8633-4ad878721088' execution was completed with VDSM job status 'done'
2021-05-10 18:15:53,357+03 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25) [3fdc6533-4a3f-47f8-a9c5-60f5b08dbc6f] Command CreateLiveSnapshotForVm id: 'bfa393e0-ee87-41d3-b490-8defede49d2f': execution was completed, the command status is 'SUCCEEDED'
...
2021-05-10 18:15:55,512+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-41) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), Snapshot 'snap_46_46' creation for VM 'vm_46_46' has been completed.

Comment 18 errata-xmlrpc 2021-06-01 13:25:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (0-day RHV Manager (ovirt-engine) [ovirt-4.4.6]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2182

