Bug 1821164 - Failed snapshot creation can cause data corruption of other VMs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.4.0
Target Release: 4.4.1
Assignee: Liran Rotenberg
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On:
Blocks: 1842375
 
Reported: 2020-04-06 08:32 UTC by Roman Hodain
Modified: 2020-11-26 08:45 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
While the RHV Manager is creating a virtual machine (VM) snapshot, it can time out and fail while trying to freeze the file system. If this happens, more than one VM can write data to the same logical volume and corrupt the data on it. In the current release, you can prevent this condition by configuring the Manager to freeze the VM's guest filesystems before creating a snapshot. To enable this behavior, run the engine-config tool and set the `LiveSnapshotPerformFreezeInEngine` key-value pair to `true`.
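The workaround in the Doc Text can be applied with the engine-config tool; a minimal sketch, run on the Manager machine and assuming the standard `ovirt-engine` service name:

```shell
# Freeze guest filesystems from the engine before a live snapshot
engine-config -s LiveSnapshotPerformFreezeInEngine=true

# Restart the engine so the new value takes effect
systemctl restart ovirt-engine

# Verify the current value
engine-config -g LiveSnapshotPerformFreezeInEngine
```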
Clone Of:
: 1842375 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:22:22 UTC
oVirt Team: Virt
Target Upstream Version:
lsvaty: testing_plan_complete-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 5219611 0 None None None 2020-07-20 22:41:30 UTC
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:22:43 UTC
oVirt gerrit 108539 0 master MERGED core: snapshot: allow force freeze in engine 2021-02-16 08:12:22 UTC
oVirt gerrit 108572 0 master MERGED core: snapshot: allow inconsistent snapshot 2021-02-16 08:12:23 UTC
oVirt gerrit 108666 0 ovirt-engine-4.3 MERGED core: snapshot: allow inconsistent snapshot 2021-02-16 08:12:23 UTC
oVirt gerrit 108673 0 ovirt-engine-4.3 MERGED core: snapshot: allow force freeze in engine 2021-02-16 08:12:24 UTC

Description Roman Hodain 2020-04-06 08:32:34 UTC
Description of problem:
When a snapshot creation fails on a timeout, the engine triggers a rollback of the operation and removes the snapshot volumes, even though the snapshot has finished on the hypervisor.

Version-Release number of selected component (if applicable):
4.3.7

How reproducible:
100%

Steps to Reproduce:
1. Trigger a live snapshot of a VM that is not running on the SPM host
2. Make the filesystem freeze get stuck for 10 minutes (to breach all the timeouts)


Actual results:
The related volumes are removed on the SPM, but the VM has finished the snapshot and is using them.

Expected results:
Either the snapshot operation is stopped completely, or the volumes are not removed without confirmation that they are not used by any VM.

Additional info:

This is a very dangerous situation, as other VMs can allocate the extents of the removed LVs. This will cause data corruption, as two VMs may write to the same area.

Comment 1 Roman Hodain 2020-04-06 08:35:21 UTC
This issue may get fixed by 

    Bug 1749284 - Change the Snapshot operation to be asynchronous

But there may still be potential for this behaviour if we do not handle all the corner cases properly.

Comment 9 Tal Nisan 2020-04-14 10:22:07 UTC
Benny, you think it will be possible to check the Domain XML dump to figure out if the VM is currently using an image that we are going to rollback and in that case roll forward?

Comment 10 Benny Zlotnik 2020-04-14 14:06:25 UTC
(In reply to Tal Nisan from comment #9)
> Benny, you think it will be possible to check the Domain XML dump to figure
> out if the VM is currently using an image that we are going to rollback and
> in that case roll forward?

It's strange, because we have this check [1]. I checked the logs, and it seems the XML dump didn't contain the new volumes, so from the engine's POV they weren't used (they are part of the dump later on, when live merge runs). I didn't find logs from the host for when the VM was running, so I'm not entirely sure what happened.



[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/snapshots/CreateSnapshotCommand.java#L216

Comment 11 Michal Skrivanek 2020-04-14 14:51:22 UTC
Well, there's timing to consider between getVolumeChain() and the actual removal. It would be best to have such a check on the vdsm side, perhaps? As a safeguard in case the engine decides to delete an active volume... for whatever reason.

Comment 12 Benny Zlotnik 2020-04-14 15:24:08 UTC
(In reply to Michal Skrivanek from comment #11)
> well, there's timing to consider between getVolumeChain() and actual
> removal. Would be best to have such a check on vdsm side, perhaps? As a
> safeguard in case engine decides to delete an active volume....for whatever
> reason.

Yes. I read the bug over, and if the snapshot creation didn't reach the call to libvirt, we'll still see the original chain (in the previous bug the freeze passed fine; the memory dump took too long)... so we can't really do this reliably in the engine.

Comment 13 Benny Zlotnik 2020-04-14 15:41:45 UTC
Do we have a way to tell whether a volume is used by a VM in vdsm, though? Image removal is an SPM operation.
Maybe we can acquire a volume lease and inquire it when trying to delete.

Comment 21 Roman Hodain 2020-04-23 08:46:41 UTC
Here is one observation.

The snapshot creation continued after we received:

    2020-03-30 20:45:13,918+0200 WARN  (jsonrpc/0) [virt.vm] (vmId='cdb7c691-41be-4f96-808c-4d4421462a36') Unable to freeze guest filesystems: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': timeout when try to receive Frozen event from VSS provider: Unspecified error (vm:4262)

This is generated by the QEMU guest agent. The agent waits for the fsFreeze event for 10 seconds, but this message was reported minutes after the fsFreeze was initiated, so the guest agent may get stuck even before triggering the freeze. Would it be better not to rely on the agent and simply fail the fsFreeze according to a timeout suitable for the vdsm workflow? We can see that this operation can be blocking.
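The suggestion above, bounding the freeze by a vdsm-side timeout instead of trusting the guest agent, can be sketched in Python; `freeze_with_timeout` is a hypothetical helper for illustration, not vdsm's actual API:

```python
import concurrent.futures

def freeze_with_timeout(freeze_fn, timeout):
    """Run a (possibly blocking) freeze call, but give up after
    `timeout` seconds instead of waiting on a stuck guest agent."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(freeze_fn)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            # A stuck agent is treated as a freeze failure, so the
            # snapshot flow can thaw and abort cleanly instead of
            # blocking for minutes.
            return False
    finally:
        # Don't block on a worker thread that may still be stuck.
        pool.shutdown(wait=False)
```

This only caps how long the caller waits; the underlying agent call may still be hanging, which is why the flow must thaw and fail the snapshot rather than proceed.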

Comment 24 Shir Fishbain 2020-06-02 08:50:56 UTC
The snapshot creation completed successfully, and the snapshot is ready to be used.
Verified with the following steps:
1. Add a sleep on the host in /usr/lib/python2.7/site-packages/vdsm/virt/vm.py
2. Restart vdsmd
3. On the engine: engine-config -s LiveSnapshotPerformFreezeInEngine=true
engine-config -s LiveSnapshotTimeoutInMinutes=1
4. Restart the ovirt-engine service
5. Run the new VM on the host [1]
6. Create a snapshot without memory
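Steps 2-4 above can be summarized as a shell fragment; the commands are split between the host and the Manager machine, and assume the standard vdsmd and ovirt-engine service names:

```shell
# On the host: restart VDSM after injecting the sleep (step 2)
systemctl restart vdsmd

# On the Manager: enable engine-side freeze and shorten the timeout (step 3)
engine-config -s LiveSnapshotPerformFreezeInEngine=true
engine-config -s LiveSnapshotTimeoutInMinutes=1

# Restart the engine so both settings take effect (step 4)
systemctl restart ovirt-engine
```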

** From this moment on, LiveSnapshotPerformFreezeInEngine is configured to true by default.
Versions:
ovirt-engine-4.4.1.1-0.5.el8ev.noarch
vdsm-4.40.18-1.el8ev.x86_64

from engine.log:
2020-06-02 11:42:58,879+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-172) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] START, FreezeVDSCommand(HostName = host_mixed_2, VdsAndVmIDVDSParametersBase:{hostId='562abf2c-fd8d-4280-80bd-454bfbf61328', vmId='d56f0bd6-656f-456a-b181-d85de806621e'}), log id: 4809807d
2020-06-02 11:45:58,982+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-172) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand' return value 'StatusOnlyReturn [status=Status [code=5022, message=Message timeout which can be caused by communication issues]]'
2020-06-02 11:45:58,983+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-172) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] HostName = host_mixed_2
2020-06-02 11:45:58,984+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-172) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] FINISH, FreezeVDSCommand, return: , log id: 4809807d

2020-06-02 11:46:04,068+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ThawVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] START, ThawVDSCommand(HostName = host_mixed_2, VdsAndVmIDVDSParametersBase:{hostId='562abf2c-fd8d-4280-80bd-454bfbf61328', vmId='d56f0bd6-656f-456a-b181-d85de806621e'}), log id: 7c683e21
2020-06-02 11:46:08,478+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ThawVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39) [4bd62674-8346-4a68-b88e-6e65ae59bdd9] FINISH, ThawVDSCommand, return: , log id: 7c683e21

Comment 34 errata-xmlrpc 2020-08-04 13:22:22 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

