Bug 2081294 - Snapshot failure causes inconsistency
Summary: Snapshot failure causes inconsistency
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.4.10.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.5.1
Target Release: ---
Assignee: Benny Zlotnik
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks: 2001923
 
Reported: 2022-05-03 10:20 UTC by Jean-Louis Dupond
Modified: 2022-06-27 06:45 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-27 06:45:56 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.5?


Attachments


Links
- GitHub oVirt ovirt-engine pull 403 (Draft): core: do not remove snapshot from DB if related images were not removed (last updated 2022-05-31 13:07:43 UTC)
- Red Hat Issue Tracker RHV-45916 (last updated 2022-05-03 10:22:55 UTC)

Description Jean-Louis Dupond 2022-05-03 10:20:29 UTC
Description of problem:
During the night, a snapshot of a VM was created in order to back it up.

For this, the following volume was created:
2022-05-03 02:33:52,613+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-296) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='d497efe5-2344-4d58-8985-7b053d3c35a3', ignoreFailoverLimit='false', storageDomainId='17f5688c-11d0-4708-a52c-55ee43936f74', imageGroupId='345ddb52-54d9-4827-a76b-9bdae75103c3', imageSizeInBytes='107785224192', volumeFormat='COW', newImageId='f6819ec6-c6eb-4584-a130-fa21f695402b', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='ef934191-9eb9-4a06-b01b-b084fccdc730', sourceImageGroupId='345ddb52-54d9-4827-a76b-9bdae75103c3', shouldAddBitmaps='false'}), log id: 5715a8cf

But for some reason the snapshot didn't complete (it seems the VM was half hung and the qemu guest agent couldn't freeze the filesystems?).

The snapshot entries at 03:30 were:
265d1066-ec34-4d79-96df-1ac2baecf54b    808827db-1b8b-4563-b085-7b34fec4dde1    ACTIVE  OK
631d19a6-23ea-44d6-8a78-efebfcdea0c2    808827db-1b8b-4563-b085-7b34fec4dde1    NEXT_RUN        OK
4eef41e8-09cf-4db4-b6b9-8a3c9ac9ed58    808827db-1b8b-4563-b085-7b34fec4dde1    REGULAR LOCKED  Backup Snapshot 2022-05-03 02:33:52.727+02
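
For reference, a listing like the one above can be produced with a query along these lines against the engine database (the snapshots table column names are assumed here and may differ between engine versions):

    select snapshot_id, vm_id, snapshot_type, status, description
      from snapshots
     where vm_id = '808827db-1b8b-4563-b085-7b34fec4dde1'
     order by creation_date;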

So at 10:15 I shut down the VM, which just hung. At 10:19 I did a poweroff.

But it seems that powering off the VM, which caused the snapshot to fail, also removed the snapshot entry.
But the volume was never merged/removed.

2022-05-03 10:19:08,078+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-57) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] START, GetHostJobsVDSCommand(HostName = ovn001, GetHostJobsVDSCommandParameters:{hostId='7f6bd5e0-59c1-42e3-84e4-40cef0dc684d', type='virt', jobIds='[5612d87b-2638-4b6f-8bc8-5aa7de8b27a7]'}), log id: 5fa636ec
2022-05-03 10:19:08,081+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-57) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] FINISH, GetHostJobsVDSCommand, return: {5612d87b-2638-4b6f-8bc8-5aa7de8b27a7=HostJobInfo:{id='5612d87b-2638-4b6f-8bc8-5aa7de8b27a7', type='virt', description='snapshot_vm', status='failed', progress='null', error='VDSError:{code='SNAPSHOT_FAILED', message='Snapshot failed'}'}}, log id: 5fa636ec
2022-05-03 10:19:08,081+02 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-57) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Command CreateLiveSnapshotForVm id: 'dc262ce6-25f7-4ace-8ee6-c9e8d8285f88': job '5612d87b-2638-4b6f-8bc8-5aa7de8b27a7' execution was completed with VDSM job status 'failed'
2022-05-03 10:19:08,084+02 INFO  [org.ovirt.engine.core.bll.VirtJobCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-57) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Command CreateLiveSnapshotForVm id: 'dc262ce6-25f7-4ace-8ee6-c9e8d8285f88': execution was completed, the command status is 'FAILED'
2022-05-03 10:19:09,092+02 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-67) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Command 'CreateSnapshotForVm' (id: 'efe4fbe4-4f96-4dbe-befc-2ea7f709b9de') waiting on child command id: 'dc262ce6-25f7-4ace-8ee6-c9e8d8285f88' type:'CreateLiveSnapshotForVm' to complete
2022-05-03 10:19:09,093+02 ERROR [org.ovirt.engine.core.bll.snapshots.CreateLiveSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-67) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateLiveSnapshotForVmCommand' with failure.
2022-05-03 10:19:10,126+02 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-45) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Command 'CreateSnapshotForVm' id: 'efe4fbe4-4f96-4dbe-befc-2ea7f709b9de' child commands '[51a8ff55-adae-4fba-a591-55eb19306626, dc262ce6-25f7-4ace-8ee6-c9e8d8285f88]' executions were completed, status 'FAILED'
2022-05-03 10:19:11,140+02 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand' with failure.
2022-05-03 10:19:11,142+02 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand' with failure.
2022-05-03 10:19:11,146+02 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand' with failure.
2022-05-03 10:19:11,146+02 WARN  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Polling tasks. Task ID 'a321a7c1-1e7c-4619-983f-e1015a79b3dc' doesn't exist in the manager -> assuming 'finished'.
2022-05-03 10:19:11,148+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMRevertTaskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] START, SPMRevertTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='d497efe5-2344-4d58-8985-7b053d3c35a3', ignoreFailoverLimit='false', taskId='a321a7c1-1e7c-4619-983f-e1015a79b3dc'}), log id: 356bd2d0
2022-05-03 10:19:11,150+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMRevertTaskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] START, HSMRevertTaskVDSCommand(HostName = ovn001, HSMTaskGuidBaseVDSCommandParameters:{hostId='7f6bd5e0-59c1-42e3-84e4-40cef0dc684d', taskId='a321a7c1-1e7c-4619-983f-e1015a79b3dc'}), log id: 70c13524
2022-05-03 10:19:11,154+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMRevertTaskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] Trying to revert unknown task 'a321a7c1-1e7c-4619-983f-e1015a79b3dc'
2022-05-03 10:19:11,154+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMRevertTaskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] FINISH, HSMRevertTaskVDSCommand, return: , log id: 70c13524
2022-05-03 10:19:11,154+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMRevertTaskVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [ab4fc986-ee3d-4e12-aaa0-4ce727dcab78] FINISH, SPMRevertTaskVDSCommand, return: , log id: 356bd2d0
2022-05-03 10:19:11,207+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [] Lock freed to object 'EngineLock:{exclusiveLocks='[808827db-1b8b-4563-b085-7b34fec4dde1=VM]', sharedLocks=''}'
2022-05-03 10:19:11,213+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Failed to complete snapshot 'Backup Snapshot' creation for VM 'srv001'.
2022-05-03 10:19:11,213+02 WARN  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [] Command 'CreateSnapshotForVm' id: 'efe4fbe4-4f96-4dbe-befc-2ea7f709b9de' end method execution failed, as the command isn't marked for endAction() retries silently ignoring

So the VM started using the volume created by the snapshot, but there was no longer a snapshot entry for it (only the ACTIVE and the NEXT_RUN configuration 'snapshot' remained).
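
The mismatch can be made visible by cross-checking the image chain against the snapshot table on the engine database. A hypothetical check (column names taken from the schema dumps shown later in this report; verify against your engine version):

    select i.image_guid, i.parentid, i.active, i.vm_snapshot_id,
           s.snapshot_type, s.status
      from images i
      left join snapshots s on s.snapshot_id = i.vm_snapshot_id
     where i.image_group_id = '345ddb52-54d9-4827-a76b-9bdae75103c3'
     order by i.creation_date;

Any image row whose vm_snapshot_id no longer resolves to a snapshot row (snapshot_type and status come back NULL) is pointing at a dangling volume.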

Booting the VM again directly afterwards gave the following disk chain:
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/rhev/data-center/mnt/blockSD/17f5688c-11d0-4708-a52c-55ee43936f74/images/345ddb52-54d9-4827-a76b-9bdae75103c3/f6819ec6-c6eb-4584-a130-fa21f695402b' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='block' index='3'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/17f5688c-11d0-4708-a52c-55ee43936f74/images/345ddb52-54d9-4827-a76b-9bdae75103c3/ef934191-9eb9-4a06-b01b-b084fccdc730'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>345ddb52-54d9-4827-a76b-9bdae75103c3</serial>
      <boot order='1'/>
      <alias name='ua-345ddb52-54d9-4827-a76b-9bdae75103c3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>


At 10:22:30 I created a 'test' snapshot, and that worked fine.
For that snapshot, the following volume was created:
2022-05-03 10:22:29,981+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2879918) [19736cd4-90ce-4bfa-8150-924f20f66360] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='d497efe5-2344-4d58-8985-7b053d3c35a3', ignoreFailoverLimit='false', storageDomainId='17f5688c-11d0-4708-a52c-55ee43936f74', imageGroupId='345ddb52-54d9-4827-a76b-9bdae75103c3', imageSizeInBytes='107785224192', volumeFormat='COW', newImageId='a6a8ed5e-3233-421a-b124-b635822ca927', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='f6819ec6-c6eb-4584-a130-fa21f695402b', sourceImageGroupId='345ddb52-54d9-4827-a76b-9bdae75103c3', shouldAddBitmaps='false'}), log id: 561dd5f8

Chain:
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/rhev/data-center/mnt/blockSD/17f5688c-11d0-4708-a52c-55ee43936f74/images/345ddb52-54d9-4827-a76b-9bdae75103c3/a6a8ed5e-3233-421a-b124-b635822ca927' index='4'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='block' index='1'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/17f5688c-11d0-4708-a52c-55ee43936f74/images/345ddb52-54d9-4827-a76b-9bdae75103c3/f6819ec6-c6eb-4584-a130-fa21f695402b'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore type='block' index='3'>
          <format type='qcow2'/>
          <source dev='/rhev/data-center/mnt/blockSD/17f5688c-11d0-4708-a52c-55ee43936f74/images/345ddb52-54d9-4827-a76b-9bdae75103c3/ef934191-9eb9-4a06-b01b-b084fccdc730'>
            <seclabel model='dac' relabel='no'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>345ddb52-54d9-4827-a76b-9bdae75103c3</serial>
      <boot order='1'/>
      <alias name='ua-345ddb52-54d9-4827-a76b-9bdae75103c3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>


At 10:25:06 I removed the snapshot.
2022-05-03 10:25:09,161+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (EE-ManagedExecutorService-commandCoordinator-Thread-7) [6005a860-d6b1-45a8-b554-f083e18350ba] START, MergeVDSCommand(HostName = ovn002, MergeVDSCommandParameters:{hostId='37150444-cf80-4831-bf49-457227d9a22e', vmId='808827db-1b8b-4563-b085-7b34fec4dde1', storagePoolId='d497efe5-2344-4d58-8985-7b053d3c35a3', storageDomainId='17f5688c-11d0-4708-a52c-55ee43936f74', imageGroupId='345ddb52-54d9-4827-a76b-9bdae75103c3', imageId='a6a8ed5e-3233-421a-b124-b635822ca927', baseImageId='ef934191-9eb9-4a06-b01b-b084fccdc730', topImageId='f6819ec6-c6eb-4584-a130-fa21f695402b', bandwidth='0'}), log id: 37ad1664


But that removal also did the following:
2022-05-03 10:26:03,282+02 INFO  [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6005a860-d6b1-45a8-b554-f083e18350ba] Merge command (jobId = 41e73779-e7b0-4711-99b8-861e94a382e8) has completed for images 'ef934191-9eb9-4a06-b01b-b084fccdc730'..'f6819ec6-c6eb-4584-a130-fa21f695402b'
2022-05-03 10:26:04,317+02 INFO  [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedExecutorService-commandCoordinator-Thread-4) [6005a860-d6b1-45a8-b554-f083e18350ba] Successfully removed volume f6819ec6-c6eb-4584-a130-fa21f695402b from the chain


So it removed the volume that was created by the failed snapshot during the night, which is in fact good, as it cleans up the unused/corrupt volume.

But it failed in the end because of the following error:
2022-05-03 10:27:05,037+02 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [6005a860-d6b1-45a8-b554-f083e18350ba] Error invoking callback method 'onSucceeded' for 'SUCCEEDED' command '8dc08f54-4638-4e40-b63f-73e756274a2d'
2022-05-03 10:27:05,037+02 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [6005a860-d6b1-45a8-b554-f083e18350ba] Exception: java.lang.NullPointerException
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskCommandBase.isRemoveTopImageMemoryNeeded(RemoveSnapshotSingleDiskCommandBase.java:267)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskCommandBase.removeTopImageMemoryIfNeeded(RemoveSnapshotSingleDiskCommandBase.java:290)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskCommandBase.handleBackwardMerge(RemoveSnapshotSingleDiskCommandBase.java:257)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskCommandBase.lambda$syncDbRecords$1(RemoveSnapshotSingleDiskCommandBase.java:182)
        at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:181)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskCommandBase.syncDbRecords(RemoveSnapshotSingleDiskCommandBase.java:172)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand.onSucceeded(RemoveSnapshotSingleDiskLiveCommand.java:232)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback.onSucceeded(RemoveSnapshotSingleDiskLiveCommandCallback.java:27)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.endCallback(CommandCallbacksPoller.java:69)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:166)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
        at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
        at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)


It fails at https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/snapshots/RemoveSnapshotSingleDiskCommandBase.java#L267 because there is no snapshot record anymore, so the 'snapshot' object is null.


The VM disk ended up like this:
engine=# select * from images where image_group_id = '345ddb52-54d9-4827-a76b-9bdae75103c3';
 image_guid | creation_date | size | it_guid | parentid | imagestatus | lastmodified | vm_snapshot_id | volume_type | volume_format | image_group_id | _create_date | _update_date | active | volume_classification | qcow_compat
 ef934191-9eb9-4a06-b01b-b084fccdc730 | 2020-12-14 16:19:54+01 | 107785224192 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 1 | 2022-05-03 02:33:52.702+02 | 4eef41e8-09cf-4db4-b6b9-8a3c9ac9ed58 | 2 | 4 | 345ddb52-54d9-4827-a76b-9bdae75103c3 | 2020-12-14 16:19:52.615006+01 | 2022-05-03 02:33:52.702965+02 | f | 1 | 2
 a6a8ed5e-3233-421a-b124-b635822ca927 | 2022-05-03 10:22:31+02 | 107785224192 | 00000000-0000-0000-0000-000000000000 | f6819ec6-c6eb-4584-a130-fa21f695402b | 1 | 2022-05-03 10:22:29.975+02 | 609f6753-d33d-464e-92a9-94be0dbeaab6 | 2 | 4 | 345ddb52-54d9-4827-a76b-9bdae75103c3 | 2022-05-03 10:22:30.0999+02 | 2022-05-03 10:22:47.940862+02 | t | 0 | 2
 f6819ec6-c6eb-4584-a130-fa21f695402b | 2022-05-03 02:33:52.604+02 | 107785224192 | 00000000-0000-0000-0000-000000000000 | ef934191-9eb9-4a06-b01b-b084fccdc730 | 1 | 2022-05-03 10:22:30.099+02 | 265d1066-ec34-4d79-96df-1ac2baecf54b | 2 | 4 | 345ddb52-54d9-4827-a76b-9bdae75103c3 | 2022-05-03 02:33:52.702965+02 | 2022-05-03 10:22:30.0999+02 | f | 1 | 0
(3 rows)


The volume chain as reported on the storage side now only contained two volumes (f6819ec6-c6eb-4584-a130-fa21f695402b had already been removed from storage by the merge):

   image:    345ddb52-54d9-4827-a76b-9bdae75103c3

             - ef934191-9eb9-4a06-b01b-b084fccdc730
               status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE, capacity: 107785224192, truesize: 106837311488

             - a6a8ed5e-3233-421a-b124-b635822ca927
               status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, capacity: 107785224192, truesize: 2147483648
               
               
So I shut down the VM and removed the 'f6819ec6-c6eb-4584-a130-fa21f695402b' image from the engine database:
engine=# delete from images where image_guid = 'f6819ec6-c6eb-4584-a130-fa21f695402b';
DELETE 1

And pointed the parent of the remaining leaf to the correct volume:
engine=# update images set parentid = 'ef934191-9eb9-4a06-b01b-b084fccdc730' where image_guid = 'a6a8ed5e-3233-421a-b124-b635822ca927';
UPDATE 1
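
For anyone applying a similar manual repair, the same two statements can be wrapped in a transaction with a verification step before committing. This is only a sketch reusing the UUIDs from this report, and the engine database should be backed up first:

    begin;
    delete from images where image_guid = 'f6819ec6-c6eb-4584-a130-fa21f695402b';
    update images set parentid = 'ef934191-9eb9-4a06-b01b-b084fccdc730'
     where image_guid = 'a6a8ed5e-3233-421a-b124-b635822ca927';
    -- the remaining chain should now match what is actually present on storage
    select image_guid, parentid, active from images
     where image_group_id = '345ddb52-54d9-4827-a76b-9bdae75103c3';
    commit;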


This made it possible to remove the snapshot via the UI and boot the VM again.

Comment 1 Benny Zlotnik 2022-05-03 15:40:33 UTC
Thanks for the detailed report!

Looks like bug 2001923; we can keep this as the upstream clone.

Comment 2 RHEL Program Management 2022-05-03 15:40:40 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 4 Arik 2022-06-27 06:45:56 UTC
bz 2001923 has been verified

