Bug 1420337 - Data loss when previewing a snapshot from a 3.6 upgrade to 4.1 (?)
Summary: Data loss when previewing a snapshot from a 3.6 upgrade to 4.1 (?)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.0.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.1.1
Target Release: ---
Assignee: Daniel Erez
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-08 13:30 UTC by Carlos Mestre González
Modified: 2017-03-07 11:37 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-27 14:26:29 UTC
oVirt Team: Storage
gklein: ovirt-4.1+
gklein: blocker+


Attachments
engine.log and vdsm.log (1.38 MB, application/x-gzip)
2017-02-08 13:40 UTC, Carlos Mestre González

Description Carlos Mestre González 2017-02-08 13:30:31 UTC
Description of problem:
We have the following test flow for live snapshot merge:

1. Create a VM from a template, add multiple disks (4), and create a filesystem on each disk.
2. Start the VM, add one file on each disk, and create a snapshot (0).
3. Add one file on each disk and create a snapshot (1).
4. Add one file on each disk and create a snapshot (2).
5. Delete the middle snapshot (1).
6. Preview snapshot (2) and check that the files are there.

This works in the base scenario, but it fails when the environment was upgraded from 3.6 to 4.1: the files created in step 4 exist but are empty.
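The verification in step 6 can be sketched as a small guest-side check. This is a minimal, runnable illustration, not the actual test code: it simulates the per-disk mount points with a temp directory, and the empty file stands in for the reported symptom (file exists but has no content).

```shell
#!/bin/sh
# Hypothetical sketch of the step-6 check after previewing snapshot (2).
# A temp dir stands in for the per-disk mount points used by the real test.
d=$(mktemp -d)
printf 'data from snapshot 2\n' > "$d/file_snap2_disk1"  # expected healthy file
: > "$d/file_snap2_disk2"                                # simulates the bug: exists but empty
for f in "$d"/file_snap2_disk*; do
  if [ -s "$f" ]; then          # -s: file exists and has size > 0
    echo "OK: ${f##*/}"
  else
    echo "EMPTY: ${f##*/}"      # the data-loss symptom reported here
  fi
done
rm -rf "$d"
```

The `-s` test distinguishes the symptom in this bug (zero-length files) from outright missing files, which would point to a different failure mode.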

Version-Release number of selected component (if applicable):
rhevm-4.1.0.4-0.1.el7.noarch

How reproducible:
100%

Steps to Reproduce:
Follow the test flow described above.

Additional info:
If I try to preview snapshot (2) with its memory, it says:
Custom compatibility version will be set when restoring memory from different cluster version. Previewing memory may cause data loss when excluding disks!

This scenario works in the base case, where the environment was 4.1 from the beginning, but fails here.

Comment 1 Carlos Mestre González 2017-02-08 13:37:01 UTC
I'll add some logs. The environment where this happens is still available, with the VMs, in case you need to take a look.

Comment 2 Carlos Mestre González 2017-02-08 13:40:31 UTC
Created attachment 1248613 [details]
engine.log and vdsm.log


vm name: vm_TestCase6038_REST_ISCSI_081431  vm id: c192bb84-f02a-4fed-a72c-feca7a66801a

previewed snapshot id: "bb0a69d1-03d7-46c3-8483-f0eb285d9863" description: snapshot_6038_iscsi_2

Comment 3 Daniel Erez 2017-02-08 21:17:49 UTC
Hi Carlos,

* In which step did you perform the upgrade?
* IIUC, the issue was in memory snapshot?

Comment 4 Carlos Mestre González 2017-02-09 09:14:01 UTC
(In reply to Daniel Erez from comment #3) 
> * In which step did you perform the upgrade?
Before this case ran. The environment started at 3.6 and ran our tier1 when we shipped 3.6.10; it was then upgraded to 4.0 and ran some automation, and finally upgraded to 4.1, where it has been running our tier1. That is where I see this failure, which I don't see on a clean 4.1 installation.

> * IIUC, the issue was in memory snapshot?
No, sorry, my mistake in the description. The snapshots taken are memory snapshots, but the preview of the snapshot is without the memory. I posted the message because I noticed it and thought the custom compatibility warning might be relevant, even though no memory snapshot is being previewed.

Comment 5 Daniel Erez 2017-02-12 07:51:40 UTC
We couldn't find any unexpected issues in either the engine or vdsm. The fact that the files exist on the guest but with empty content hints that this might be a guest issue.

As discussed, can you please reproduce the scenario upgrading from 3.6 to 4.0, and from 4.0 to 4.1, on a clean environment, and describe the steps to reproduce?
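One hedged way to rule out the guest-side explanation suggested above (data still dirty in the guest page cache when the snapshot is taken) is to flush each file to disk before creating the snapshot. This is an illustrative sketch, not part of the reported test; the temp directory stands in for a disk mount point.

```shell
#!/bin/sh
# Illustrative guest-side precaution before each snapshot (steps 2-4):
# flush newly written data so a snapshot cannot capture a file whose
# contents are still only in the page cache. Paths are hypothetical.
d=$(mktemp -d)                          # stands in for a disk mount point
printf 'payload for snapshot N\n' > "$d/file_snapN"
sync                                    # flush dirty pages to stable storage
wc -c < "$d/file_snapN"                 # non-zero byte count: data is present
rm -rf "$d"
```

If the files are still empty after previewing a snapshot taken this way, guest caching can be excluded and the problem is more likely in the storage/merge path.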

Comment 6 Daniel Erez 2017-02-27 14:26:29 UTC
No update for a while. Closing for now. Please reopen if indeed reproduced.

Comment 7 Carlos Mestre González 2017-03-07 11:37:46 UTC
Could not reproduce.

