Bug 1842375
Summary: | Failed snapshot creation can cause data corruption of other VMs [RHV clone - 4.3.10] | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot> |
Component: | ovirt-engine | Assignee: | Liran Rotenberg <lrotenbe> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Shir Fishbain <sfishbai> |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | unspecified | CC: | aefrat, aoconnor, bzlotnik, jortialc, lsvaty, michal.skrivanek, mkalinin, mlehrer, pkovar, sfishbai, tnisan |
Target Milestone: | ovirt-4.3.10 | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause:
An unsuccessful freeze command on the VDSM side ran past the engine's 3-minute timeout for the snapshot command.
Consequence:
The snapshot command does not start on the host. When checking for volume usage, the engine assumes the volume chain has been updated, but in this case it has no reliable way to tell whether the volume is in use or not, which makes data corruption possible.
Fix:
A new engine-config value, 'LiveSnapshotPerformFreezeInEngine', was added. When set to true, the engine performs the freeze command itself, which prevents the situation above.
Result:
With 'LiveSnapshotPerformFreezeInEngine' set to true, the freeze happens in the engine before the snapshot command is called, so no data corruption is possible (see the illustrative sketch after the table below).
|
Story Points: | --- |
Clone Of: | 1821164 | Environment: | |
Last Closed: | 2020-06-09 10:20:28 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | Virt | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1821164 | ||
Bug Blocks: |
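
To illustrate the ordering described in the Doc Text above, here is a minimal Python sketch of the two control flows. The helper names are hypothetical and this is not ovirt-engine or vdsm code; it only sketches the order of operations under the assumption that the freeze is a separate call the engine can issue before the snapshot command.

```python
# Hypothetical sketch of the control flow around LiveSnapshotPerformFreezeInEngine;
# the helper names below are illustrative, not real engine/vdsm APIs.

def freeze_guest_filesystems(vm_id):
    print("engine -> host: freeze guest filesystems of %s" % vm_id)

def thaw_guest_filesystems(vm_id):
    print("engine -> host: thaw guest filesystems of %s" % vm_id)

def create_volume_snapshot(vm_id, freeze_on_host=False):
    print("engine -> host: create snapshot for %s (freeze_on_host=%s)"
          % (vm_id, freeze_on_host))

def create_live_snapshot(vm_id, freeze_in_engine):
    if freeze_in_engine:
        # LiveSnapshotPerformFreezeInEngine=true: the engine freezes first.
        # If the freeze hangs or fails, the snapshot command is never sent,
        # so the engine's view of the volume chain cannot silently diverge
        # from what the host is actually using.
        freeze_guest_filesystems(vm_id)
        try:
            create_volume_snapshot(vm_id)
        finally:
            thaw_guest_filesystems(vm_id)
    else:
        # Previous behaviour: the host freezes inside the snapshot command;
        # if that blocks past the engine's 3-minute timeout, the engine can
        # no longer tell whether the new volumes are in use.
        create_volume_snapshot(vm_id, freeze_on_host=True)

create_live_snapshot("cdb7c691-41be-4f96-808c-4d4421462a36", freeze_in_engine=True)
```

The design point is simply that a hung freeze now fails before any volume-chain change is requested, so the engine never has to guess whether the new volumes are in use.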
Description
RHV bug bot
2020-06-01 06:55:57 UTC
This issue may get fixed by Bug 1749284 - Change the Snapshot operation to be asynchronous. But there may still be potential for this behaviour if we do not handle all the corner cases properly.

(Originally by Roman Hodain)

Benny, do you think it will be possible to check the Domain XML dump to figure out if the VM is currently using an image that we are going to roll back, and in that case roll forward?

(Originally by Tal Nisan)

(In reply to Tal Nisan from comment #9)
> Benny, do you think it will be possible to check the Domain XML dump to figure out if the VM is currently using an image that we are going to roll back, and in that case roll forward?

It's strange because we have this check [1]. I checked the logs and it seems the XML dump didn't contain the new volumes, so from the engine's POV they weren't used (they are part of the dump later on, when live merge runs). I didn't find logs from the host while the VM was running, so I'm not entirely sure what happened.

[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/snapshots/CreateSnapshotCommand.java#L216

(Originally by Benny Zlotnik)

Well, there's timing to consider between getVolumeChain() and the actual removal. It would be best to have such a check on the vdsm side, perhaps? As a safeguard in case the engine decides to delete an active volume... for whatever reason.

(Originally by michal.skrivanek)

(In reply to Michal Skrivanek from comment #11)
> Well, there's timing to consider between getVolumeChain() and the actual removal. It would be best to have such a check on the vdsm side, perhaps? As a safeguard in case the engine decides to delete an active volume... for whatever reason.

Yes. I read the bug over again, and if the snapshot creation didn't reach the call to libvirt we'll still see the original chain (in the previous bug the freeze passed fine; the memory dump took too long)... so we can't really do this reliably in the engine.

(Originally by Benny Zlotnik)

Do we have a way to tell if a volume is used by a VM in vdsm, though? Image removal is an SPM operation. Maybe we can acquire a volume lease and inquire when trying to delete.

(Originally by Benny Zlotnik)

Here is one observation. The snapshot creation continued after we received:

2020-03-30 20:45:13,918+0200 WARN (jsonrpc/0) [virt.vm] (vmId='cdb7c691-41be-4f96-808c-4d4421462a36') Unable to freeze guest filesystems: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': timeout when try to receive Frozen event from VSS provider: Unspecified error (vm:4262)

This is generated by the qemu agent. The agent waits for the fsFreeze event for 10s, but this message was reported minutes after the fsFreeze was initiated. So the guest agent may get stuck even before triggering the freeze. Would it be better not to rely on the agent and simply fail the fsFreeze according to a timeout suitable for the vdsm workflow? We can see that this operation can be blocking.

(Originally by Roman Hodain)

The snapshot creation completed successfully and the snapshot is ready to be used.

Verified with the following steps:
1. Add a sleep in /usr/lib/python2.7/site-packages/vdsm/virt/vm.py on the host (see the sketch after this comment)
2. Restart vdsmd
3. On the engine: engine-config -s LiveSnapshotPerformFreezeInEngine=true
4. Restart the ovirt-engine service
5. Run the new VM on the host [1]
6. Create a snapshot without memory

** At the moment, LiveSnapshotPerformFreezeInEngine is configured to true by default.

ovirt-engine-4.3.10.4-0.1.el7.noarch
vdsm-4.30.46-1.el7ev.x86_64
libvirt-4.5.0-33.el7_8.1.x86_64
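
As an illustration of step 1 of the verification above, the injected change in vdsm might look like the following. This is a sketch only: the method name (Vm.freeze) and its placement are assumptions inferred from the "Unable to freeze guest filesystems" warning quoted earlier (vm:4262), and the exact code differs between vdsm builds, so this is not the actual patch that was used.

```python
# Hypothetical test-only change inside /usr/lib/python2.7/site-packages/vdsm/virt/vm.py;
# the surrounding Vm class and the real freeze body are not reproduced here.

import time  # assumed to be added near the top of vm.py for the test


def freeze(self):
    # Injected delay for the reproduction: block longer than the engine's
    # 3-minute snapshot timeout before the guest-agent freeze is attempted.
    time.sleep(240)
    # ... the original freeze logic (the fsFreeze call and its error
    # handling) continues unchanged below ...
```

With LiveSnapshotPerformFreezeInEngine=true, the engine performs the freeze before sending the snapshot command, so a delay like this makes the operation fail up front instead of leaving the engine unsure whether the new volumes are in use.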