Bug 1840609 - Wake up from hibernation failed:internal error: unable to execute QEMU command 'cont': Failed to get "write" lock.
Summary: Wake up from hibernation failed:internal error: unable to execute QEMU comman...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: 4.40.17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.1
Target Release: 4.40.21
Assignee: Liran Rotenberg
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On:
Blocks: 1842894
 
Reported: 2020-05-27 10:20 UTC by Qin Yuan
Modified: 2020-07-08 08:27 UTC
CC List: 7 users

Fixed In Version: vdsm-4.40.21
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-08 08:27:36 UTC
oVirt Team: Virt
Embargoed:
pm-rhel: ovirt-4.4+


Attachments (Terms of Use)
Logs (175.55 KB, application/x-xz)
2020-05-27 10:20 UTC, Qin Yuan
libvirt debug logs (339.86 KB, application/x-xz)
2020-06-09 12:29 UTC, Liran Rotenberg


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 109831 0 master MERGED devices: remove backingstore in restore 2020-09-10 04:55:27 UTC

Description Qin Yuan 2020-05-27 10:20:37 UTC
Created attachment 1692634 [details]
Logs

Description of problem:
When running a VM after creating 2 memory snapshots, committing the second snapshot, and removing both snapshots, the VM is
first started on one host, fails with the following error, and is then started on another host.

Engine log:
2020-05-27 04:55:33,670+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [607343cf] EVENT_ID: VM_DOWN_ERROR(119), VM test_snapshot_6_10 is down with error. Exit message: Wake up from hibernation failed:internal error: unable to execute QEMU command 'cont': Failed to get "write" lock.
2020-05-27 04:55:33,671+03 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [607343cf] add VM '11ff5492-7d78-4cb4-afd2-628ef1457793'(test_snapshot_6_10) to rerun treatment
2020-05-27 04:55:33,677+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-11) [607343cf] Rerun VM '11ff5492-7d78-4cb4-afd2-628ef1457793'. Called from VDS 'host_mixed_2'
2020-05-27 04:55:33,691+03 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-76225) [607343cf] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM test_snapshot_6_10 on Host host_mixed_2.

VDSM log:
2020-05-27 04:55:31,481+0300 ERROR (vm/11ff5492) [virt.vm] (vmId='11ff5492-7d78-4cb4-afd2-628ef1457793') The vm start process failed (vm:871)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 816, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3661, in _completeIncomingMigration
    self.cont(guestTimeSync=True)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 1465, in cont
    self._underlyingCont()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3757, in _underlyingCont
    self._dom.resume()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2174, in resume
    if ret == -1: raise libvirtError ('virDomainResume() failed', dom=self)
libvirt.libvirtError: internal error: unable to execute QEMU command 'cont': Failed to get "write" lock
2020-05-27 04:55:31,481+0300 INFO  (vm/11ff5492) [virt.vm] (vmId='11ff5492-7d78-4cb4-afd2-628ef1457793') Changed state to Down: internal error: unable to execute QEMU command 'cont': Failed to get "write" lock (code=1) (vm:1629)

/var/log/libvirt/qemu/vm.log:
2020-05-27T01:55:30.914521Z qemu-kvm: Failed to get "write" lock
Is another process using the image [/rhev/data-center/f61083e2-6a2b-4a3a-9e72-03b8ec399dd2/06c92104-0ef7-4b79-9dc4-0a6c276ad36a/images/1a654d71-7e3f-4fd1-aa05-de97c7939b4c/07183b6f-2e3e-4af2-8352-ca11ff686d70]?
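
(A quick way to triage such a failure is to dump the full backing chain of the locked volume with qemu-img; a minimal Python sketch, using the image path from the log above:)

import subprocess

# Illustrative sketch only: print the backing chain of the locked volume so a chain
# that references the same volume twice is easy to spot.
# --force-share lets qemu-img inspect an image that may still be locked by a running QEMU.
IMAGE = ("/rhev/data-center/f61083e2-6a2b-4a3a-9e72-03b8ec399dd2/"
         "06c92104-0ef7-4b79-9dc4-0a6c276ad36a/images/"
         "1a654d71-7e3f-4fd1-aa05-de97c7939b4c/07183b6f-2e3e-4af2-8352-ca11ff686d70")
result = subprocess.run(
    ["qemu-img", "info", "--backing-chain", "--force-share", IMAGE],
    stdout=subprocess.PIPE, universal_newlines=True, check=True)
print(result.stdout)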


Version-Release number of selected component (if applicable):
vdsm-4.40.17-1.el8ev.x86_64
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
ovirt-engine-4.4.1-0.1.el8ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create VM from template (can use rhel6/7/8 template, the logs were taken when using latest-rhel-guest-image-6.10)
2. Run VM
3. Create memory snapshot1
4. Create memory snapshot2
5. Shutdown VM
6. Preview, commit memory snapshot2
7. Delete snapshot1
8. Delete snapshot2
9. Run VM
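
The flow above can be scripted roughly as follows with the oVirt Python SDK (ovirtsdk4). This is only an illustrative sketch: the engine URL, credentials and VM name are placeholders, step 1 (creating the VM from a template) is assumed to be done already, and waiting for each asynchronous operation to finish is omitted.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='password', insecure=True)
vms_service = conn.system_service().vms_service()
vm = vms_service.list(search='name=test_snapshot_6_10')[0]
vm_service = vms_service.vm_service(vm.id)
snaps_service = vm_service.snapshots_service()

vm_service.start()                                                     # step 2
snap1 = snaps_service.add(types.Snapshot(description='snap1',
                                         persist_memorystate=True))    # step 3
snap2 = snaps_service.add(types.Snapshot(description='snap2',
                                         persist_memorystate=True))    # step 4
vm_service.shutdown()                                                  # step 5
vm_service.preview_snapshot(snapshot=types.Snapshot(id=snap2.id),
                            restore_memory=True)                       # step 6, preview
vm_service.commit_snapshot()                                           # step 6, commit
snaps_service.snapshot_service(snap1.id).remove()                      # step 7
snaps_service.snapshot_service(snap2.id).remove()                      # step 8
vm_service.start()                                                     # step 9
conn.close()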

Actual results:
1. The VM fails to start on the first host because of the above error.

Expected results:
1. There should be no 'Failed to get "write" lock' error.

Additional info:
1. If snapshot1 and snapshot2 are not removed, there is no 'Failed to get "write" lock' error.
2. If only one memory snapshot is created, then committed and removed, there is no error.

Comment 1 Qin Yuan 2020-06-04 07:31:56 UTC
Also noticed there were two errors in the engine log when deleting snapshot2:

2020-05-27 04:54:50,163+03 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-31) [252b3490-01a3-4f04-a70d-b54a587ddfa4] Error parsing OVF due to Error loading ovf, message null
2020-05-27 04:54:50,163+03 ERROR [org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-31) [252b3490-01a3-4f04-a70d-b54a587ddfa4] Failed to read snapshot 'defd577c-c204-432f-9883-ac9c448a35be' configuration


Another thing: the RHEL6 VM occasionally got stuck (4 out of 15 times) when running step 9, which is to start the VM after deleting snapshot2.

Comment 2 Liran Rotenberg 2020-06-09 12:29:00 UTC
Created attachment 1696309 [details]
libvirt debug logs

I couldn't see any problems with the engine or VDSM.

The only suspicious thing I saw is in the libvirt debug log.
On the first host the VM starts on (host1 in the logs), I could see these lines:
2020-06-09 12:12:46.288+0000: 177879: debug : qemuSetupImageCgroupInternal:139 : Not updating cgroups for disk path '<null>', type: file
2020-06-09 12:12:46.288+0000: 177879: debug : qemuSetupImagePathCgroup:75 : Allow path /rhev/data-center/3b67fb92-906b-11ea-bb36-482ae35a5f83/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/0191384a-3e0a-472f-a889-d95622cb6916/7f553f44-db08-480e-8c86-cbdeccedfafe, perms: rw
2020-06-09 12:12:46.288+0000: 177879: debug : qemuSetupImagePathCgroup:75 : Allow path /rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/0191384a-3e0a-472f-a889-d95622cb6916/7f553f44-db08-480e-8c86-cbdeccedfafe, perms: r

This is strange, since the path on the NFS shared storage under /rhev/data-center/mnt is set with 'r' only, which means read-only.
This also correlates with QEMU failing to write.

When the re-run mechanism starts, the VM runs on host2. There we see, as usual:
2020-06-09 12:13:01.839+0000: 15781: debug : qemuSetupImagePathCgroup:75 : Allow path /rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/0191384a-3e0a-472f-a889-d95622cb6916/7f553f44-db08-480e-8c86-cbdeccedfafe, perms: rw

Libvirt sets read and write permissions on the destination.
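
For reference, these permission lines can be pulled out of the libvirt debug log with a short script; a minimal sketch (the log path is an assumption, adjust to wherever the debug log is written):

import re

# List image paths that qemuSetupImagePathCgroup allowed with read-only ('r')
# permissions only - the suspicious case seen on host1 above.
PATTERN = re.compile(r"qemuSetupImagePathCgroup:\d+ : Allow path (?P<path>\S+), perms: (?P<perms>\w+)")

with open("/var/log/libvirt/libvirtd.log") as log:  # assumed debug log location
    for line in log:
        match = PATTERN.search(line)
        if match and match.group("perms") == "r":
            print("read-only:", match.group("path"))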

Comment 3 Liran Rotenberg 2020-06-09 12:48:58 UTC
Forgot to mention the versions where I see this:
libvirt-daemon-6.0.0-22.module+el8.2.1+6815+1c792dc8.x86_64
qemu-kvm-4.2.0-22.module+el8.2.1+6758+cb8d64c2.x86_64
kernel-4.18.0-193.7.1.el8_2.x86_64

Comment 4 Liran Rotenberg 2020-06-10 14:56:35 UTC
From the libvirt perspective, the above comment doesn't seem to be related.
In the re-run we don't load the memory disk. I found a small difference in the domxml:

*** Single disk:
active vm - 1337c745-16f8-4398-b398-729c8fb8e5ac

xml:
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</serial>
      <boot order='1'/>
      <alias name='ua-a632d4d8-8a7b-40ef-9fa8-c323a1f99900'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
..
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName>
        <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:path>
                <ovirt-vm:volumeID>1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>


Creating a snapshot with memory:
active vm - d9ed0b23-009f-4ce9-93a4-cf9fbf354670
snap1 - 1337c745-16f8-4398-b398-729c8fb8e5ac

Under the xml:
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/d9ed0b23-009f-4ce9-93a4-cf9fbf354670' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</serial>
      <boot order='1'/>
      <alias name='ua-a632d4d8-8a7b-40ef-9fa8-c323a1f99900'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

...
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName>
        <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>d9ed0b23-009f-4ce9-93a4-cf9fbf354670</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:path>
                <ovirt-vm:volumeID>1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/d9ed0b23-009f-4ce9-93a4-cf9fbf354670.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/d9ed0b23-009f-4ce9-93a4-cf9fbf354670</ovirt-vm:path>
                <ovirt-vm:volumeID>d9ed0b23-009f-4ce9-93a4-cf9fbf354670</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>

Shutting down the VM.
Preview of snap1 - no changes in the snapshot volumes.
active vm before preview - d9ed0b23-009f-4ce9-93a4-cf9fbf354670
snap1 - 1337c745-16f8-4398-b398-729c8fb8e5ac

Commit of snap1:
active vm - 2f568c1d-5260-406e-88a1-ecd81f8f9178
snap 1 - 1337c745-16f8-4398-b398-729c8fb8e5ac

Delete snap1:
active vm - 1337c745-16f8-4398-b398-729c8fb8e5ac

Which is basically fine - we have the exact same configuration as in our initial run, before any snapshots were taken on the VM.

    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:path>
                <ovirt-vm:volumeID>1337c745-16f8-4398-b398-729c8fb8e5ac</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/3b67fb92-906b-11ea-bb36-482ae35a5f83/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/a632d4d8-8a7b-40ef-9fa8-c323a1f99900/1337c745-16f8-4398-b398-729c8fb8e5ac' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>a632d4d8-8a7b-40ef-9fa8-c323a1f99900</serial>
      <boot order='1'/>
      <alias name='ua-a632d4d8-8a7b-40ef-9fa8-c323a1f99900'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

No backingStore pointing to the same volume! Everything is fine.

*** Multiple snapshots when we restore to the middle one in the chain (the bug scenario):
active vm - 81663fb3-95b7-4aa4-a62b-9d9a847983f7
snap2 - abeab988-484b-4ee1-82a1-e23be04422cb
snap1 - 77203775-a005-4af6-924c-e79c5cf3a18f

xml:
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName>
        <ovirt-vm:imageID>bbe17afa-b26e-42ce-9177-1e52abf0e26c</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>81663fb3-95b7-4aa4-a62b-9d9a847983f7</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>bbe17afa-b26e-42ce-9177-1e52abf0e26c</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/77203775-a005-4af6-924c-e79c5cf3a18f.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/77203775-a005-4af6-924c-e79c5cf3a18f</ovirt-vm:path>
                <ovirt-vm:volumeID>77203775-a005-4af6-924c-e79c5cf3a18f</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>bbe17afa-b26e-42ce-9177-1e52abf0e26c</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/81663fb3-95b7-4aa4-a62b-9d9a847983f7.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/81663fb3-95b7-4aa4-a62b-9d9a847983f7</ovirt-vm:path>
                <ovirt-vm:volumeID>81663fb3-95b7-4aa4-a62b-9d9a847983f7</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>bbe17afa-b26e-42ce-9177-1e52abf0e26c</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/abeab988-484b-4ee1-82a1-e23be04422cb.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/abeab988-484b-4ee1-82a1-e23be04422cb</ovirt-vm:path>
                <ovirt-vm:volumeID>abeab988-484b-4ee1-82a1-e23be04422cb</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/81663fb3-95b7-4aa4-a62b-9d9a847983f7' index='4'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='file' index='3'>
        <format type='qcow2'/>
        <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/abeab988-484b-4ee1-82a1-e23be04422cb'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore type='file' index='1'>
          <format type='qcow2'/>
          <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/77203775-a005-4af6-924c-e79c5cf3a18f'>
            <seclabel model='dac' relabel='no'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>bbe17afa-b26e-42ce-9177-1e52abf0e26c</serial>
      <boot order='1'/>
      <alias name='ua-bbe17afa-b26e-42ce-9177-1e52abf0e26c'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

Shutting down the VM.
Preview snap2.
active vm - 81663fb3-95b7-4aa4-a62b-9d9a847983f7
snap2 - abeab988-484b-4ee1-82a1-e23be04422cb
snap1 - 77203775-a005-4af6-924c-e79c5cf3a18f

Commit snap2.
active vm - d7616476-fb89-4e84-8b2d-df89dfd9d85b
snap2 - abeab988-484b-4ee1-82a1-e23be04422cb
snap1 - 77203775-a005-4af6-924c-e79c5cf3a18f

Note that from this point, when we run the VM we are looking at a state where we have one snapshot.
The active VM is snap2 (abeab988-484b-4ee1-82a1-e23be04422cb) and we have one snapshot, snap1, which is 77203775-a005-4af6-924c-e79c5cf3a18f.

Deleting snap1.
active vm - d7616476-fb89-4e84-8b2d-df89dfd9d85b
snap2 - 77203775-a005-4af6-924c-e79c5cf3a18f

Deleting snap2.
active vm - 77203775-a005-4af6-924c-e79c5cf3a18f

We start the VM and our active VM volume is 77203775-a005-4af6-924c-e79c5cf3a18f, but we also supposedly have snap1, which is 77203775-a005-4af6-924c-e79c5cf3a18f.

        <disk device="disk" snapshot="no" type="file">
            <driver cache="none" error_policy="stop" io="threads" iothread="1" name="qemu" type="qcow2" />
            <source file="/rhev/data-center/3b67fb92-906b-11ea-bb36-482ae35a5f83/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/77203775-a005-4af6-924c-e79c5cf3a18f">
                <seclabel model="dac" relabel="no" />
            </source>
            <backingStore type="file">
                <format type="qcow2" />
                <source file="/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/bbe17afa-b26e-42ce-9177-1e52abf0e26c/77203775-a005-4af6-924c-e79c5cf3a18f">
                    <seclabel model="dac" relabel="no" />
                </source>
                <backingStore />
            </backingStore>
            <target bus="virtio" dev="vda" />
            <serial>bbe17afa-b26e-42ce-9177-1e52abf0e26c</serial>
            <boot order="1" />
            <alias name="ua-bbe17afa-b26e-42ce-9177-1e52abf0e26c" />
            <address bus="0x04" domain="0x0000" function="0x0" slot="0x00" type="pci" />
        </disk>
...
            <ovirt-vm:device devtype="disk" name="vda">
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>bbe17afa-b26e-42ce-9177-1e52abf0e26c</ovirt-vm:imageID>
                <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
                <ovirt-vm:volumeID>77203775-a005-4af6-924c-e79c5cf3a18f</ovirt-vm:volumeID>
            </ovirt-vm:device>

We are now pointing to the same volume twice, resulting in the lock problem and this error:
Wake up from hibernation failed:internal error: unable to execute QEMU command 'cont': Failed to get "write" lock.


*** When we shut down the VM between the live memory snapshot creations:
After creating the first snapshot, snap1:
active vm - 709bb73b-09ff-4e9b-b598-aa23bf907971
snap1 - 7eef4567-3587-4c71-912c-853e261b1ee6

xml:
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName>
        <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:path>
                <ovirt-vm:volumeID>709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:path>
                <ovirt-vm:volumeID>7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>3403f68a-7fc9-4ad8-b4d3-77838799ae82</serial>
      <boot order='1'/>
      <alias name='ua-3403f68a-7fc9-4ad8-b4d3-77838799ae82'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

Shutting down the VM and starting it again.

    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:path>
                <ovirt-vm:volumeID>709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:path>
                <ovirt-vm:volumeID>7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='file' index='3'>
        <format type='qcow2'/>
        <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>3403f68a-7fc9-4ad8-b4d3-77838799ae82</serial>
      <boot order='1'/>
      <alias name='ua-3403f68a-7fc9-4ad8-b4d3-77838799ae82'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

Only the indexes of the source file and the backingStore seem to change.
Creating the second snapshot with memory, snap2.
active vm - a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27
snap2 - 709bb73b-09ff-4e9b-b598-aa23bf907971
snap1 - 7eef4567-3587-4c71-912c-853e261b1ee6

xml:
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName>
        <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:path>
                <ovirt-vm:volumeID>709bb73b-09ff-4e9b-b598-aa23bf907971</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:path>
                <ovirt-vm:volumeID>7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27</ovirt-vm:path>
                <ovirt-vm:volumeID>a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27' index='4'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/709bb73b-09ff-4e9b-b598-aa23bf907971'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore type='file' index='3'>
          <format type='qcow2'/>
          <source file='/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6'>
            <seclabel model='dac' relabel='no'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>3403f68a-7fc9-4ad8-b4d3-77838799ae82</serial>
      <boot order='1'/>
      <alias name='ua-3403f68a-7fc9-4ad8-b4d3-77838799ae82'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

Shutting down the VM.
Preview snap2.
active vm - a8a2e336-acac-4bcd-b67e-2d9c7fbb4a27
snap2 - 709bb73b-09ff-4e9b-b598-aa23bf907971
snap1 - 7eef4567-3587-4c71-912c-853e261b1ee6

Commit snap2.
active vm - 73c8a7e5-f29a-459a-89b6-63368fd99fbd
snap2 - 709bb73b-09ff-4e9b-b598-aa23bf907971
snap1 - 7eef4567-3587-4c71-912c-853e261b1ee6

Deleting snap1.
active vm - 73c8a7e5-f29a-459a-89b6-63368fd99fbd
snap2 - 7eef4567-3587-4c71-912c-853e261b1ee6

Deleting snap2.
active vm - 7eef4567-3587-4c71-912c-853e261b1ee6

Starting the VM.
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
        <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
        <ovirt-vm:poolID>3b67fb92-906b-11ea-bb36-482ae35a5f83</ovirt-vm:poolID>
        <ovirt-vm:volumeID>7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>4fd23357-6047-46c9-aa81-ba6a12a9e8bd</ovirt-vm:domainID>
                <ovirt-vm:imageID>3403f68a-7fc9-4ad8-b4d3-77838799ae82</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_GE_compute-ge-4_nfs__0/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:path>
                <ovirt-vm:volumeID>7eef4567-3587-4c71-912c-853e261b1ee6</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
...
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/3b67fb92-906b-11ea-bb36-482ae35a5f83/4fd23357-6047-46c9-aa81-ba6a12a9e8bd/images/3403f68a-7fc9-4ad8-b4d3-77838799ae82/7eef4567-3587-4c71-912c-853e261b1ee6' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>3403f68a-7fc9-4ad8-b4d3-77838799ae82</serial>
      <boot order='1'/>
      <alias name='ua-3403f68a-7fc9-4ad8-b4d3-77838799ae82'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

No backingStore, the VM goes up.

Bottom line - the problem occurs when the source file and the backingStore path of the disk point to the same volume.
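
In other words, the restored domain XML ends up with a disk whose <source> file and <backingStore> source are the same volume. A minimal sketch (standard library only, hypothetical helper) for spotting that in a dumped domain XML:

import os
import xml.etree.ElementTree as ET

# Sketch only: report disks whose backingStore points back at the same volume as
# the disk's <source> (the broken state above, where both paths end in
# 77203775-a005-4af6-924c-e79c5cf3a18f even though the path prefixes differ).
def self_referencing_disks(domxml):
    for disk in ET.fromstring(domxml).iter('disk'):
        src = disk.find('source')
        if src is None or src.get('file') is None:
            continue
        for backing in disk.iter('backingStore'):
            bsrc = backing.find('source')
            if bsrc is not None and bsrc.get('file') is not None and \
                    os.path.basename(bsrc.get('file')) == os.path.basename(src.get('file')):
                yield src.get('file'), bsrc.get('file')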

A question that arises is: why is everything fine when we shut down the VM between taking the snapshots? Is it the index number?
But changing the volume ID of the active VM to the deleted snapshot and restoring the VM metadata (restoring with memory) can cause a problem, as seen above.

From the storage team's point of view - what do you think? Note this might also be related to other flows involving memory snapshots + merge.

Comment 5 Benny Zlotnik 2020-06-10 19:37:23 UTC
I suspect this issue is related to the errors mentioned in comment #1. I'll have a look next week.

Comment 6 Benny Zlotnik 2020-06-16 09:59:52 UTC
So the merge error is an issue, but it is not related to this problem (and it's probably simple to fix), so I think the only issue here is that the memory configuration XML is "corrected" incorrectly in VDSM. I'm not sure what the best course of action is here, though: whether to remove the backing file during correction in [1], invalidate the memory entirely in this case, or something else.

[1] https://github.com/oVirt/vdsm/blob/master/lib/vdsm/virt/vm.py#L2623
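
For illustration, the first option (dropping the backingStore while correcting the restored domain XML, so libvirt re-detects the chain from the actual volumes) could look roughly like this. This is a sketch of the idea only, not the actual VDSM change - see the linked gerrit patch ("devices: remove backingstore in restore") for the real fix:

import xml.etree.ElementTree as ET

# Sketch: remove any <backingStore> element under each <disk> in the domain XML
# restored from the memory dump, and return the corrected XML string.
def drop_backing_store(domxml):
    root = ET.fromstring(domxml)
    for disk in root.iter('disk'):
        for backing in disk.findall('backingStore'):
            disk.remove(backing)
    return ET.tostring(root, encoding='unicode')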

Comment 7 Qin Yuan 2020-07-07 07:40:05 UTC
Verified with:
vdsm-4.40.22-1.el8ev.x86_64
ovirt-engine-4.4.1.7-0.3.el8ev.noarch

Steps:
The same as steps in comment #0

Results:
1. When restoring the VM, it can be started on the first host; there is no "Wake up from hibernation failed:internal error: unable to execute QEMU command 'cont': Failed to get "write" lock." error.
2. Tested 10 times; the RHEL6 VM didn't get stuck when running step 9.

Comment 8 Sandro Bonazzola 2020-07-08 08:27:36 UTC
This bugzilla is included in the oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

