Bug 1654891 - Engine sent duplicate SnapshotVDSCommand, causing data corruption
Summary: Engine sent duplicate SnapshotVDSCommand, causing data corruption
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.7
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.3.0
Assignee: Eyal Shenitzky
QA Contact: Elad
URL:
Whiteboard:
Depends On: 1640225
Blocks: 1658589
 
Reported: 2018-11-29 23:33 UTC by Germano Veit Michel
Modified: 2022-03-13 16:39 UTC
CC List: 17 users

Fixed In Version: ovirt-engine-4.3.0_rc
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1658589
Environment:
Last Closed: 2019-05-08 12:39:01 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1640225 0 unspecified CLOSED Failed to migrate VM disk: unsupported configuration: nothing selected for snapshot 2021-02-22 00:41:40 UTC
Red Hat Issue Tracker RHV-45231 0 None None None 2022-03-13 16:39:57 UTC
Red Hat Knowledge Base (Solution) 3728131 0 None None None 2018-12-03 06:05:38 UTC
Red Hat Product Errata RHEA-2019:1085 0 None None None 2019-05-08 12:39:14 UTC
oVirt gerrit 96139 0 master MERGED core: persist CreateSnapshotForVmCommand#performNextOperation 2018-12-12 10:25:56 UTC
oVirt gerrit 96165 0 ovirt-engine-4.2 MERGED core: persist CreateSnapshotForVmCommand#performNextOperation 2018-12-13 08:29:33 UTC

Internal Links: 1640225

Description Germano Veit Michel 2018-11-29 23:33:21 UTC
Description of problem:

The VM was apparently healthy. Then the user initiated a snapshot operation.

1. The engine creates the new volume (SPM). All good.
2. The engine sends SnapshotVDSCommand to the host running the VM. The VM starts using the new leaf volume. All good.
3. The engine sends the exact same SnapshotVDSCommand again(!!!). It fails.
4. The engine tells the SPM to delete the LV that the VM just switched to in [2].

Now those extents are free, but the VM is still using them. A new volume created on the SD can overlap with the extents of the image the VM is using, causing data corruption for both.
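
For illustration only, the dangerous part of the sequence can be reduced to a toy model like the one below (hypothetical Java, not engine or vdsm code; the extent numbers are made up). Once step 4 returns the still-in-use extents to the free pool, the next allocation on the SD hands them to another volume:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ExtentReuseDemo {
    public static void main(String[] args) {
        // Free extent pool of the storage domain (toy model).
        Deque<Integer> freeExtents = new ArrayDeque<>(List.of(100, 101, 102, 103));

        // Steps 1-2: new leaf volume allocated, the VM pivots to it and keeps writing.
        List<Integer> vmLeaf = List.of(freeExtents.poll(), freeExtents.poll());
        System.out.println("VM is writing to extents " + vmLeaf);

        // Step 3 fails, step 4: the engine "rolls back" and frees those extents
        // even though the VM is still using them.
        vmLeaf.forEach(freeExtents::addFirst);

        // The next volume created on the SD is allocated over the same extents.
        List<Integer> newVolume = List.of(freeExtents.poll(), freeExtents.poll());
        System.out.println("New volume got extents " + newVolume + " -> overlap with the VM's leaf, corruption");
    }
}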

Engine side:

1. Creates volume (IRS)

2018-11-26 03:35:15,453+13 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-24) [4b9931a3-c789-4af4-be9c-9bec8b6b8ba2] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='58117215-0261-0167-0311-0000000002b6', ignoreFailoverLimit='false', storageDomainId='0b0107b2-9d2a-4871-b4d6-76bdffba84cd', imageGroupId='5cbd57df-bf57-4fd3-a209-039e9e8d9629', imageSizeInBytes='34359738368', volumeFormat='COW', newImageId='1da1d6d6-e9a1-43ac-9882-4e787961f4ea', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='452916ee-5837-455a-bfd0-f9a236ab37f6', sourceImageGroupId='5cbd57df-bf57-4fd3-a209-039e9e8d9629'}), log id: 2b40013a

2. 1st snapshot (succeeds)

2018-11-26 03:35:32,208+13 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [4b9931a3-c789-4af4-be9c-9bec8b6b8ba2] START, SnapshotVDSCommand(HostName = blade04, SnapshotVDSCommandParameters:{hostId='c4127bdd-d2b9-4eab-a551-9ad4936fbdd5', vmId='67711673-32a1-4803-873c-bfd029cfd1ca'}), log id: 32a6dad7

3. 2nd Snapshot (fails)
2018-11-26 03:35:38,564+13 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-60) [4b9931a3-c789-4af4-be9c-9bec8b6b8ba2] START, SnapshotVDSCommand(HostName = blade04, SnapshotVDSCommandParameters:{hostId='c4127bdd-d2b9-4eab-a551-9ad4936fbdd5', vmId='67711673-32a1-4803-873c-bfd029cfd1ca'}), log id: 3b23ffdd

2018-11-26 03:35:38,607+13 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-60) [4b9931a3-c789-4af4-be9c-9bec8b6b8ba2] Failed in 'SnapshotVDS' method

4. Destroy Volume (IRS)

2018-11-26 03:35:39,980+13 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DestroyImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4b9931a3-c789-4af4-be9c-9bec8b6b8ba2] START, DestroyImageVDSCommand( DestroyImageVDSCommandParameters:{storagePoolId='58117215-0261-0167-0311-0000000002b6', ignoreFailoverLimit='false', storageDomainId='0b0107b2-9d2a-4871-b4d6-76bdffba84cd', imageGroupId='5cbd57df-bf57-4fd3-a209-039e9e8d9629', imageId='00000000-0000-0000-0000-000000000000', imageList='[1da1d6d6-e9a1-43ac-9882-4e787961f4ea]', postZero='false', force='false'}), log id: 194548ed

VDSM side:

2018-11-26 03:35:32,221+1300 INFO  (jsonrpc/6) [api.virt] START snapshot(snapDrives=[{u'baseVolumeID': u'452916ee-5837-455a-bfd0-f9a236ab37f6', u'domainID': u'0b0107b2-9d2a-4871-b4d6-76bdffba84cd', u'volumeID': u'1da1d6d6-e9a1-43ac-9882-4e787961f4ea', u'imageID': u'5cbd57df-bf57-4fd3-a209-039e9e8d9629'}], snapMemory=None, frozen=False) from=::ffff:10.1.224.10,52876, flow_id=4b9931a3-c789-4af4-be9c-9bec8b6b8ba2, vmId=67711673-32a1-4803-873c-bfd029cfd1ca (api:46)

2018-11-26 03:35:36,399+1300 INFO  (jsonrpc/6) [api.virt] FINISH snapshot return={'status': {'message': 'Done', 'code': 0}, 'quiesce': False} from=::ffff:10.1.224.10,52876, flow_id=4b9931a3-c789-4af4-be9c-9bec8b6b8ba2, vmId=67711673-32a1-4803-873c-bfd029cfd1ca (api:52)

Here comes the second snapshot:

2018-11-26 03:35:38,562+1300 INFO  (jsonrpc/4) [api.virt] START snapshot(snapDrives=[{u'baseVolumeID': u'452916ee-5837-455a-bfd0-f9a236ab37f6', u'domainID': u'0b0107b2-9d2a-4871-b4d6-76bdffba84cd', u'volumeID': u'1da1d6d6-e9a1-43ac-9882-4e787961f4ea', u'imageID': u'5cbd57df-bf57-4fd3-a209-039e9e8d9629'}], snapMemory=None, frozen=False) from=::ffff:10.1.224.10,52876, flow_id=4b9931a3-c789-4af4-be9c-9bec8b6b8ba2, vmId=67711673-32a1-4803-873c-bfd029cfd1ca (api:46)

2018-11-26 03:35:38,563+1300 INFO  (jsonrpc/4) [virt.vm] (vmId='67711673-32a1-4803-873c-bfd029cfd1ca') <?xml version='1.0' encoding='utf-8'?>
<domainsnapshot><disks /></domainsnapshot> (vm:4600)

And it fails:

2018-11-26 03:35:38,602+1300 INFO  (jsonrpc/4) [api.virt] FINISH snapshot return={'status': {'message': 'Snapshot failed', 'code': 48}} from=::ffff:10.1.224.10,52876, flow_id=4b9931a3-c789-4af4-be9c-9bec8b6b8ba2, vmId=67711673-32a1-4803-873c-bfd029cfd1ca (api:52)

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.7.5-0.1.el7ev.noarch
vdsm-4.20.43-1.el7ev.x86_64

How reproducible:
Unknown

Steps to Reproduce:
Unknown

Actual results:
VM using stale LV, data corruption

Comment 4 Germano Veit Michel 2018-11-30 03:31:06 UTC
There is a single POST to the snapshots of this VM via the API, just in case...

[26/Nov/2018:03:35:15 +1300] "POST /ovirt-engine/api/vms/67711673-32a1-4803-873c-bfd029cfd1ca/snapshots HTTP/1.1" 201 13478

Comment 5 Germano Veit Michel 2018-11-30 05:24:58 UTC
No progress, but just to document... 

1) I tried to send a POST to /snapshots with the same disk duplicated, as below; the engine fails with an Internal Server Error. It does not trigger this bug.

<snapshot>
   <description>test2</description>
   <persist_memorystate>false</persist_memorystate>
   <disk_attachments>
      <disk_attachment>
         <disk id="0764dcb3-8b21-4cda-9899-5d18ecefc67f"/>
      </disk_attachment>
      <disk_attachment>
         <disk id="0764dcb3-8b21-4cda-9899-5d18ecefc67f"/>
      </disk_attachment>
  </disk_attachments>
</snapshot>

So I'm not sure this could have been triggered by an incorrect API request.

2) Also, the SnapshotVDS command seems to depend on what is returned here, and it doesn't look duplicated:

engine=> select count(*) from all_disks_for_vms  where vm_id = '67711673-32a1-4803-873c-bfd029cfd1ca';
 count 
-------
     1

Comment 7 Elad 2018-12-02 15:21:21 UTC
Following an offline conversation, this issue was reported here - bug 1640225 (upstream).

Comment 8 Nir Soffer 2018-12-02 18:08:15 UTC
Germano, did you see any disconnection from vdsm or any other error since the first snapshot request was sent?

This may also be an infra issue, if infra tries to resend the same command twice.

Martin, do we have protection from sending the same command twice?

On the storage side we may also need protection from receiving a response more than once. If we sent a command and received a response, the command should change its internal state so that another response for the same command is dropped.
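
A minimal sketch of that kind of guard, assuming a hypothetical response-dispatch layer keyed by command id (plain Java, not the actual engine infra):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ResponseDeduplicator {
    // Command ids for which a response has already been consumed.
    private final ConcurrentMap<String, Boolean> consumed = new ConcurrentHashMap<>();

    // Returns true only for the first response seen for this command id;
    // any later response for the same id should simply be dropped.
    public boolean acceptResponse(String commandId) {
        return consumed.putIfAbsent(commandId, Boolean.TRUE) == null;
    }
}

// Hypothetical usage in the dispatcher:
//   if (dedup.acceptResponse(cmdId)) { handle(response); } else { /* duplicate, drop */ }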

Comment 9 Germano Veit Michel 2018-12-02 22:44:46 UTC
(In reply to Nir Soffer from comment #8)
> Germano, did you see any disconnection from vdsm or any other error since
> the first
> snapshot request was sent?

Hi Nir,

No, not in these logs and also not in BZ1640225, which is a very similar issue. The difference between this and BZ1640225 is that the latter happens during an LSM, so the parent of the CreateSnapshotForVm command is LiveMigrateDisk.

Comment 10 Benny Zlotnik 2018-12-03 11:50:51 UTC
I haven't dug too deep yet, but given that two different threads executed the SnapshotVDS commands, the only explanation that comes to mind is that both of them ran CreateSnapshotForVmCommand#performNextOperation. This might be due to a race condition in the polling stage, causing two scheduling threads to see the command as "SUCCEEDED" at the same time and execute its "next operation" concurrently.

It seems somewhat strange given there was a 6-second delay between the two snapshot attempts, but that could be due to the system being under load.
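
If that is the cause, the race is roughly the one sketched below, and the direction of the fix (persisting/serializing the phase so performNextOperation runs at most once, as in the linked gerrit patches) amounts to an at-most-once guard. Illustrative Java only, not the actual CreateSnapshotForVmCommand code:

import java.util.concurrent.atomic.AtomicInteger;

public class NextOperationGuard {
    // Index of the last phase that has been started (toy stand-in for the
    // persisted completion/phase counter of the real command).
    private final AtomicInteger startedPhase = new AtomicInteger(0);

    // Only the scheduler thread that wins the compare-and-set runs the phase;
    // a second thread that also saw the child command as SUCCEEDED loses the
    // CAS and does nothing, so the same SnapshotVDSCommand cannot be sent twice.
    public boolean tryRunPhase(int phase, Runnable operation) {
        if (startedPhase.compareAndSet(phase - 1, phase)) {
            operation.run();
            return true;
        }
        return false;
    }
}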

Comment 18 RHV bug bot 2019-01-15 23:35:10 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 20 Elad 2019-02-24 08:42:56 UTC
Moving to VERIFIED based on the latest 4.3 regression cycle results.

Used:
ovirt-engine-4.3.0.4-0.1.el7.noarch
vdsm-4.30.8-2.el7ev.x86_64
libvirt-4.5.0-10.el7_6.4.x86_64
qemu-img-rhev-2.12.0-21.el7.x86_64

Comment 22 errata-xmlrpc 2019-05-08 12:39:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

