Bug 1695026 - Failure in creating snapshots during "Live Storage Migration" can result in a nonexistent snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.1
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.4.0
Assignee: Benny Zlotnik
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On:
Blocks: gss_rhv_4_3_4 1709303
 
Reported: 2019-04-02 10:19 UTC by nijin ashok
Modified: 2023-10-06 18:12 UTC
CC List: 4 users

Fixed In Version: rhv-4.4.0-28
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1709303 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:16:58 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3545971 0 Troubleshoot None Snapshot deletion (live merge) fails with error. 2019-05-07 21:21:58 UTC
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:19:10 UTC
oVirt gerrit 99928 0 'None' MERGED core: validate source domain in LSM 2021-02-07 11:01:35 UTC
oVirt gerrit 99929 0 'None' MERGED core: make MoveOrCopyDiskCommandTest vars consistent 2021-02-07 11:01:36 UTC
oVirt gerrit 99936 0 'None' MERGED core: validate source domain in LSM 2021-02-07 11:01:36 UTC

Description nijin ashok 2019-04-02 10:19:33 UTC
Description of problem:

The VM had the structure below before the LSM; the snapshot IDs were f9feefbe and 8619b674:

===

           image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            | imagestatus 
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------------
 b31e31e6-3f23-4b48-8f08-8f8c87524b14 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | f9feefbe-16b7-4343-bcd7-7d43213ee765 |           1
 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           1
 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           1
 8f0b95c1-d63c-4bd5-8f6d-28c1995796d7 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | f9feefbe-16b7-4343-bcd7-7d43213ee765 |           1
===

The source storage domain had 0 GB of free space. A live storage migration of disk 73ac2a39 was then executed, which created snapshot id 3067f179.

(structure during the LSM operation)

===
              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            | imagestatus 
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------------
 b31e31e6-3f23-4b48-8f08-8f8c87524b14 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 3067f179-8620-483c-bacd-6aea46b499be |           1
 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           1
 ed7c6d73-0d17-470b-8152-42fa45d2caea | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 8f0b95c1-d63c-4bd5-8f6d-28c1995796d7 | 3067f179-8620-483c-bacd-6aea46b499be |           2
 8f0b95c1-d63c-4bd5-8f6d-28c1995796d7 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | f9feefbe-16b7-4343-bcd7-7d43213ee765 |           2
 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           2
===

The snapshot operation during the LSM failed due to low storage space in the source storage domain. The engine successfully deleted snapshot 3067f179 from the snapshots table as part of the rollback.

===
engine=# select * from snapshots where snapshot_id = '3067f179-8620-483c-bacd-6aea46b499be';
 snapshot_id | vm_id | snapshot_type | status | description | creation_date | app_list | vm_configuration | _create_date | _update_date | memory_metadata_disk_id | memory_du
mp_disk_id | vm_configuration_broken 
-------------+-------+---------------+--------+-------------+---------------+----------+------------------+--------------+--------------+-------------------------+----------
-----------+-------------------------
(0 rows)
===

However, the other disk's image (group 83ca424e) still points to 3067f179, which no longer exists in the snapshots table.


(structure after failed snapshot operation)

===
              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            | imagestatus 
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------------
 b31e31e6-3f23-4b48-8f08-8f8c87524b14 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 3067f179-8620-483c-bacd-6aea46b499be |           1
 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           1
 8f0b95c1-d63c-4bd5-8f6d-28c1995796d7 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | f9feefbe-16b7-4343-bcd7-7d43213ee765 |           1
 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           1
===
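This inconsistent state can be reproduced outside the engine. The sketch below simulates the two tables with sqlite3 (a deliberately minimal, hypothetical schema; the real ovirt-engine tables have many more columns) and finds images whose vm_snapshot_id no longer resolves:

```python
import sqlite3

# Minimal stand-in for the engine's "images" and "snapshots" tables
# (hypothetical simplification for illustration only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE snapshots (snapshot_id TEXT PRIMARY KEY);
CREATE TABLE images (image_guid TEXT PRIMARY KEY, vm_snapshot_id TEXT);
""")

# State from the report: snapshot 3067f179 was rolled back from the
# snapshots table, but an image row still references it.
con.execute("INSERT INTO snapshots VALUES ('8619b674-c6d8-44d0-98d9-fee3dbd9d4ea')")
con.execute("INSERT INTO images VALUES "
            "('b31e31e6-3f23-4b48-8f08-8f8c87524b14', '3067f179-8620-483c-bacd-6aea46b499be')")
con.execute("INSERT INTO images VALUES "
            "('0d4b6990-9d8c-417d-afbf-5cafb8207fc3', '8619b674-c6d8-44d0-98d9-fee3dbd9d4ea')")

# Dangling references: images whose vm_snapshot_id has no snapshots row.
dangling = con.execute("""
    SELECT i.image_guid, i.vm_snapshot_id
    FROM images i
    LEFT JOIN snapshots s ON s.snapshot_id = i.vm_snapshot_id
    WHERE s.snapshot_id IS NULL
""").fetchall()
print(dangling)  # only b31e31e6, still pointing at the deleted 3067f179
```

The same LEFT JOIN, run with psql against the actual engine database, is a quick way to spot this inconsistency.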

Now, attempting to delete the old snapshot of disk 83ca424e fails with an NPE, and the image is marked as illegal:

===
              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            | imagestatus 
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------------
 b31e31e6-3f23-4b48-8f08-8f8c87524b14 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 3067f179-8620-483c-bacd-6aea46b499be |           1
 0d4b6990-9d8c-417d-afbf-5cafb8207fc3 | 83ca424e-408e-43a6-8371-5f3cc90677a5 | 00000000-0000-0000-0000-000000000000 | 8619b674-c6d8-44d0-98d9-fee3dbd9d4ea |           4
 0d0a93c9-8a7a-4401-9b7c-cc251b23e7f4 | 73ac2a39-d52c-4aa3-bd90-2e42a9af81f6 | 00000000-0000-0000-0000-000000000000 | f9feefbe-16b7-4343-bcd7-7d43213ee765 |           1



2019-04-01 00:00:55,761-04 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [356357f5-391b-458a-9f03-fcbd96759637] null: java.lang.NullPointerException
	at org.ovirt.engine.core.common.action.MergeParameters.<init>(MergeParameters.java:30) [common.jar:]
	at org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand.buildMergeParameters(RemoveSnapshotSingleDiskLiveCommand.java:191) [bll.jar:]
	at org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand.performNextOperation(RemoveSnapshotSingleDiskLiveCommand.java:114) [bll.jar:]
	at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
	at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77) [bll.jar:]
===
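The stack trace above can be modeled in a few lines: building the merge parameters dereferences the snapshot object that the lookup for the dangling id never finds. The Python sketch below is an illustrative analogy only, not the actual Java code path in MergeParameters / RemoveSnapshotSingleDiskLiveCommand:

```python
# Model of the failure mode: the snapshot lookup for the dangling id
# returns nothing, and the parameter builder dereferences the result.
snapshots = {"8619b674-c6d8-44d0-98d9-fee3dbd9d4ea": {"status": "OK"}}

def build_merge_params(snapshot_id):
    snap = snapshots.get(snapshot_id)   # None for the dangling id
    return {"status": snap["status"]}   # TypeError here, analogous to the NPE

try:
    build_merge_params("3067f179-8620-483c-bacd-6aea46b499be")
except TypeError as e:
    print("merge failed:", e)
```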

Manual database changes are required to correct the structure.
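One possible repair, shown here only as a sketch and not an official procedure (always back up the engine database first): re-point the dangling image at the snapshot it belonged to before the failed LSM, f9feefbe per the first table in this report. The UPDATE is simulated with sqlite3 below; against a real deployment it would be run with psql.

```python
import sqlite3

# Illustrative repair on a minimal, hypothetical table; UUIDs are taken
# from the tables in this report.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (image_guid TEXT PRIMARY KEY, vm_snapshot_id TEXT)")
con.execute("INSERT INTO images VALUES "
            "('b31e31e6-3f23-4b48-8f08-8f8c87524b14', '3067f179-8620-483c-bacd-6aea46b499be')")

# 3067f179 was rolled back from the snapshots table; f9feefbe is the
# snapshot this image pointed to before the LSM.
con.execute("""
    UPDATE images
    SET vm_snapshot_id = 'f9feefbe-16b7-4343-bcd7-7d43213ee765'
    WHERE vm_snapshot_id = '3067f179-8620-483c-bacd-6aea46b499be'
""")
row = con.execute("SELECT vm_snapshot_id FROM images").fetchone()
print(row[0])  # the image now references an existing snapshot
```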


Version-Release number of selected component (if applicable):

I tested it on rhvm-4.3.1.1-0.1.el7.noarch.

The customer is on 4.2, so the issue exists in older releases as well.


How reproducible:

100%

Steps to Reproduce:

1. Create a VM with 2 disks.
2. Create a snapshot of the VM including both disks.
3. Fill the storage domain so that there is 0 GB of free space.
4. Try to migrate one of the disks. It will fail during the snapshot operation since there is no free space in the source storage domain.
5. Check the structure of the images in the database. The other disk will be pointing to a nonexistent snapshot.
6. Try to delete the snapshot created in step 2. It will fail with an NPE.

Actual results:

Low disk space in the source storage domain during "Live Storage Migration" can leave an image pointing to a nonexistent snapshot.

Expected results:

The engine should not initiate the LSM if there is low disk space on the source storage domain. We already prevent manual snapshot operations when there is low disk space on the storage domain; however, the snapshot created during the LSM does not check free space in the source storage domain before the operation starts.

Also, we should not leave an inconsistent structure if the snapshot operation fails during LSM.
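The expected behavior amounts to a pre-flight check, which the linked gerrit patches ("core: validate source domain in LSM") add. The sketch below is hypothetical: the function name and threshold value are illustrative, not the engine's actual API; only the failure reason string is taken from the engine's validation messages.

```python
# Hedged sketch of a source-domain free-space validation before LSM.
def validate_lsm_source(free_gb, critical_threshold_gb=5.0):
    """Return (valid, reason) for starting a live storage migration."""
    if free_gb <= critical_threshold_gb:
        # Reuses the validation reason the engine reports for low space.
        return False, "ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_STORAGE_DOMAIN"
    return True, "OK"

# The scenario from this report: 0 GB free on the source domain.
print(validate_lsm_source(0.0))
```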

Additional info:

Comment 7 Daniel Gur 2019-08-28 13:11:41 UTC
sync2jira

Comment 8 Daniel Gur 2019-08-28 13:15:54 UTC
sync2jira

Comment 9 RHV bug bot 2019-12-13 13:13:54 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 10 RHV bug bot 2019-12-20 17:43:47 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 11 RHV bug bot 2020-01-08 14:48:20 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 12 RHV bug bot 2020-01-08 15:14:36 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 13 RHV bug bot 2020-01-24 19:50:08 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 15 Shir Fishbain 2020-04-12 12:41:57 UTC
Verified

ovirt-engine-4.4.0-0.31.master.el8ev.noarch
vdsm-4.40.11-1.el8ev.x86_64

Verified this bug with the following steps:

1. Create a VM with 2 disks on the iSCSI storage domain.
2. Create a snapshot (snap_1) of the VM including both disks.

Got 2 snapshots: Active_VM and snap_1:

From the database:

Snapshots:
engine=# select snapshot_id, vm_id, status, description from snapshots where vm_id='ccde52a1-3fff-45f2-93c1-e231eaa453f5';
             snapshot_id              |                vm_id                 | status | description 
--------------------------------------+--------------------------------------+--------+-------------
 541cb413-32a4-40b4-9069-c10865a6c82b | ccde52a1-3fff-45f2-93c1-e231eaa453f5 | OK     | Active VM
 ac6013aa-d51b-46b2-b8da-b841b65941e4 | ccde52a1-3fff-45f2-93c1-e231eaa453f5 | OK     | snap_1
(2 rows)


images:
engine=# select image_guid, image_group_id, parentid, vm_snapshot_id from images;
              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------
 adbe98ff-c1ec-4130-87c5-c4e8e2e4f4ff | 4dc6d278-685d-4ec6-9cde-f7f6e0020198 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | e4540486-6e06-478d-aa77-2919eb6c2be0
 00000000-0000-0000-0000-000000000000 |                                      |                                      | 
 ec184243-a7ac-41db-8571-237678977619 | 6ff19aca-4059-4d81-9134-043140644ed2 | 00000000-0000-0000-0000-000000000000 | 
 0d76bc93-eb3c-48f5-be22-52484aad752d | ca6d969f-4d59-45ca-913c-00d63e7e6fdc | 00000000-0000-0000-0000-000000000000 | 
 700072ea-b965-4c9b-baa3-24537bbb434f | e935dfd5-c7f5-4a3e-9366-16a60d5b4ab6 | 00000000-0000-0000-0000-000000000000 | 
 9a3e5661-9d65-4160-a274-d99b70e72a13 | 7a6d8fac-ec90-4637-b895-ce7235164374 | 00000000-0000-0000-0000-000000000000 | 
 d4b8c0be-3471-4ea5-8b14-f28b1b264d46 | b2a0afce-a186-4b45-8f31-7feec53aa320 | 00000000-0000-0000-0000-000000000000 | feb868bd-fc9b-4344-bdb6-70a99b85710f
 077694b4-112c-45a4-a27c-e56735f521f6 | df992779-4cd0-4910-b026-2b66f1102326 | 00000000-0000-0000-0000-000000000000 | 
 1e40d9d4-0e96-4859-a410-123f86545de6 | f63a93c1-c4a1-4c9a-9ca1-19252ba1c116 | 00000000-0000-0000-0000-000000000000 | 
 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 4e555357-e060-4b5c-89c5-c28f4fb288bb | 00000000-0000-0000-0000-000000000000 | c59cfd05-a7b5-45a8-a6c6-3f2496aa3655
 eebe6ffd-7913-4c2e-a09e-e9f38640a9b6 | acaceabd-f284-43d7-9c1d-10a33b10b295 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 7e33b1e7-ad7c-41dd-a434-11c1b02155fd
 38cd42a0-009e-45e6-a9a3-a85c570ba926 | e1444310-55bb-4565-95fb-76d754510bc7 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 2b63ffe0-7550-4e8e-a74f-fbe4d0657fe2
 b17c31a0-8f86-438e-98a7-981f9f39f975 | 731d74da-91f8-4b0f-8f6b-7c2a84a545c0 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 1f7669de-f501-43de-8012-47d8f515c99d
 89abe529-c9e2-4ccb-bcdf-c660b7bcabe0 | 4548a8f7-1a3e-4b3d-aa5b-c8a856e7c433 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 4b3031a6-b7cd-4bfc-8f9c-6adbca0c3fa1
 244db4f9-bad8-4e69-a194-37b2ddbf5a57 | 7287d495-4a9f-4ad2-97cd-4679a405b1f1 | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 5e376c53-c003-4f6e-b069-1d0ef970c677
 6b6bc13d-51dc-4df5-bca4-dbd7d6d274cc | f31f5cb1-920f-4b8c-8fb9-a5b35968029a | 00000000-0000-0000-0000-000000000000 | 
 f552052d-6a78-48fa-8055-d5d83e6e0f90 | 013431e6-af7d-46f1-9a18-a519fe005b7a | 00000000-0000-0000-0000-000000000000 | 
 ed078137-444d-48a4-a871-27e59f9f8096 | 8b455f77-4454-415c-8cc3-bc205fb4b3fc | 00000000-0000-0000-0000-000000000000 | 
 db7e2725-5e20-4788-b2d6-99e8223c5a80 | c2491039-a3e9-40e9-9151-05d0cea823d8 | 00000000-0000-0000-0000-000000000000 | 
 1fc2c229-3810-4ad0-8b57-bec38120aca6 | 32594c8b-f5b1-4001-b800-03e500d0228d | 00000000-0000-0000-0000-000000000000 | 
 14924fb9-8054-4510-a24f-db58b33b96ad | 49ccba24-fded-4add-81a7-3d577c965ea6 | 00000000-0000-0000-0000-000000000000 | 
 30305bac-4743-4dba-9aa7-7dfd41845a3a | 4783b632-a499-4957-8194-c9f9c1185b30 | 00000000-0000-0000-0000-000000000000 | ac6013aa-d51b-46b2-b8da-b841b65941e4
 1b55fe54-b1c6-4436-a583-b2912bf3ec33 | c930050a-f1c5-476f-8204-c9eaf7778d86 | 00000000-0000-0000-0000-000000000000 | ac6013aa-d51b-46b2-b8da-b841b65941e4
 596875ca-697e-416f-a6da-23f97dab28c1 | 4783b632-a499-4957-8194-c9f9c1185b30 | 30305bac-4743-4dba-9aa7-7dfd41845a3a | 541cb413-32a4-40b4-9069-c10865a6c82b
 ebdf5d9f-d65b-46e9-a8af-1fb8746c83fd | c3fa183c-5183-41d9-b98d-713a72d23a6a | 3ba8247b-7cf0-4139-bc75-2180e94abac1 | 78c5e360-48e1-411a-8150-d9a87bc1e995
 cf8612dd-f816-41f6-a099-91d918fe8d0e | c930050a-f1c5-476f-8204-c9eaf7778d86 | 1b55fe54-b1c6-4436-a583-b2912bf3ec33 | 541cb413-32a4-40b4-9069-c10865a6c82b
 2c75f30d-3baf-4587-810a-d68a13002cd5 | 70d69e67-6fc1-4938-a69d-c56dc82b0c37 | 00000000-0000-0000-0000-000000000000 | 
 0deaf598-93c9-445c-bf8a-a1d61127a1fc | 75f5347a-5b6f-4381-a509-6115eb5495d3 | 00000000-0000-0000-0000-000000000000 | 
 c1119154-d1c8-4264-8e18-3c302273902c | 897cfa43-01a4-4bd9-921b-c57569794d43 | 00000000-0000-0000-0000-000000000000 | 
 62b13f6d-3e4a-414b-b7d3-fcd4059f5dbe | c39facc3-acce-4d80-8d3f-16ee7f70aeab | 00000000-0000-0000-0000-000000000000 | 
 3aab040c-3783-4efe-b903-ac701c603e61 | 9a30a40c-2c77-4f5e-ace7-66307c78acee | 00000000-0000-0000-0000-000000000000 | 
 f353f5aa-0997-4ef5-988b-513731e28132 | af106a90-7ea9-4317-be1d-55afee364db4 | 00000000-0000-0000-0000-000000000000 | 
(32 rows)


3. Fill the storage domain so that there is 0 GB of free space: create a new preallocated disk using all available space in the iSCSI storage domain.

From engine log:

2020-04-12 14:50:35,157+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-35) [] EVENT_ID: IRS_DISK_SPACE_LOW_ERROR(201), Critical, Low disk space. iscsi_0 domain has 0 GB of free space.


4. Try to migrate one of the disks. It will fail during the snapshot operation since there is no free space in the source storage domain.

The operation failed with this error in the engine log as expected:

2020-04-12 15:04:30,212+03 WARN  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-66) [aa411a31-d90f-468e-905c-d2dd373cdc24] Validation of action 'RemoveSnapshot' failed for user admin@internal-authz. Reasons: VAR__TYPE__SNAPSHOT,VAR__ACTION__REMOVE,ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_STORAGE_DOMAIN,$storageName iscsi_0

5. Check the structure of the images in the database. The other disk will be pointing to a nonexistent snapshot.

At this point, the 2 snapshots from step 2 and the same images are still present:

engine=# select image_guid, image_group_id, parentid, vm_snapshot_id from images where image_group_id in ('c930050a-f1c5-476f-8204-c9eaf7778d86', '4783b632-a499-4957-8194-c9f9c1185b30');

              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------
 30305bac-4743-4dba-9aa7-7dfd41845a3a | 4783b632-a499-4957-8194-c9f9c1185b30 | 00000000-0000-0000-0000-000000000000 | ac6013aa-d51b-46b2-b8da-b841b65941e4
 1b55fe54-b1c6-4436-a583-b2912bf3ec33 | c930050a-f1c5-476f-8204-c9eaf7778d86 | 00000000-0000-0000-0000-000000000000 | ac6013aa-d51b-46b2-b8da-b841b65941e4
 596875ca-697e-416f-a6da-23f97dab28c1 | 4783b632-a499-4957-8194-c9f9c1185b30 | 30305bac-4743-4dba-9aa7-7dfd41845a3a | 541cb413-32a4-40b4-9069-c10865a6c82b
 cf8612dd-f816-41f6-a099-91d918fe8d0e | c930050a-f1c5-476f-8204-c9eaf7778d86 | 1b55fe54-b1c6-4436-a583-b2912bf3ec33 | 541cb413-32a4-40b4-9069-c10865a6c82b
(4 rows)


6. Try to delete the snapshot created in step 2.

Removing the snapshot from step 2 (snap_1) failed due to low space on the storage; no NPE was seen in the logs.

From the UI:
"Cannot remove Snapshot. Low disk space on Storage Domain iscsi_0."

From the engine log:

2020-04-12 15:04:30,054+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-66) [aa411a31-d90f-468e-905c-d2dd373cdc24] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccde52a
1-3fff-45f2-93c1-e231eaa453f5=VM]', sharedLocks=''}'
2020-04-12 15:04:30,212+03 WARN  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-66) [aa411a31-d90f-468e-905c-d2dd373cdc24] Validation of action 'RemoveSnapshot' failed for user admin@i
nternal-authz. Reasons: VAR__TYPE__SNAPSHOT,VAR__ACTION__REMOVE,ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_STORAGE_DOMAIN,$storageName iscsi_0
2020-04-12 15:04:30,213+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-66) [aa411a31-d90f-468e-905c-d2dd373cdc24] Lock freed to object 'EngineLock:{exclusiveLocks='[ccde52a1-3
fff-45f2-93c1-e231eaa453f5=VM]', sharedLocks=''}'

7. Power off the VM and free some space on the storage, then try to delete the snapshot again:


2020-04-12 15:08:22,181+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-62) [33b806e7-263d-498d-aeca-6973c66a0b6a] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccde52a
1-3fff-45f2-93c1-e231eaa453f5=VM]', sharedLocks=''}'

2020-04-12 15:08:22,275+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-62) [33b806e7-263d-498d-aeca-6973c66a0b6a] Running command: RemoveSnapshotCommand internal: false. Entit
ies affected :  ID: ccde52a1-3fff-45f2-93c1-e231eaa453f5 Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER

2020-04-12 15:08:22,289+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-62) [33b806e7-263d-498d-aeca-6973c66a0b6a] Lock freed to object 'EngineLock:{exclusiveLocks='[ccde52a1-3
fff-45f2-93c1-e231eaa453f5=VM]', sharedLocks=''}'

.....

2020-04-12 15:09:16,345+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-33) [33b806e7-263d-498d-aeca-6973c66a0b6a] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' successfully.
2020-04-12 15:09:16,414+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-33) [33b806e7-263d-498d-aeca-6973c66a0b6a] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_SUCCESS(356), Snapshot 'snap_1' deletion for VM 'new_vm_verfication_bug_shir' has been completed.

Snapshots after the deletion:
engine=# select snapshot_id, vm_id, status, description from snapshots where vm_id='ccde52a1-3fff-45f2-93c1-e231eaa453f5';
             snapshot_id              |                vm_id                 | status | description 
--------------------------------------+--------------------------------------+--------+-------------
 541cb413-32a4-40b4-9069-c10865a6c82b | ccde52a1-3fff-45f2-93c1-e231eaa453f5 | OK     | Active VM
(1 row)


images:
engine=# select image_guid, image_group_id, parentid, vm_snapshot_id from images where image_group_id in ('c930050a-f1c5-476f-8204-c9eaf7778d86', '4783b632-a499-4957-8194-c9f9c1185b30');
              image_guid              |            image_group_id            |               parentid               |            vm_snapshot_id            
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------
 30305bac-4743-4dba-9aa7-7dfd41845a3a | 4783b632-a499-4957-8194-c9f9c1185b30 | 00000000-0000-0000-0000-000000000000 | 541cb413-32a4-40b4-9069-c10865a6c82b
 1b55fe54-b1c6-4436-a583-b2912bf3ec33 | c930050a-f1c5-476f-8204-c9eaf7778d86 | 00000000-0000-0000-0000-000000000000 | 541cb413-32a4-40b4-9069-c10865a6c82b
(2 rows)

Comment 19 errata-xmlrpc 2020-08-04 13:16:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

