Bug 1330978

Summary: Cold Merge: VM Configuration in snapshots is wrong
Product: [oVirt] ovirt-engine
Component: BLL.Storage
Version: 3.6.3
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Target Milestone: ovirt-4.0.1
Target Release: 4.0.0
Reporter: Ala Hino <ahino>
Assignee: Ala Hino <ahino>
QA Contact: Kevin Alon Goldblatt <kgoldbla>
CC: ahino, amureini, bugs, tnisan
Flags: rule-engine: ovirt-4.0.z+, rule-engine: planning_ack+, tnisan: devel_ack+, acanan: testing_ack+
oVirt Team: Storage
Type: Bug
Doc Type: Bug Fix
Last Closed: 2016-07-21 15:03:30 UTC
Attachments:
snap1 vm config before delete
snap2 vm config before delete
snap3 vm config before delete
snap2 vm config after delete of snap1
snap3 vm config after delete of snap1
vdsm server and engine logs

Description Ala Hino 2016-04-27 11:45:05 UTC
Created attachment 1151318 [details]
snap1 vm config before delete

Description of problem:
The image parent ID points to the wrong parent after the cold merge completes.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a vm
2. Create 3 snapshots: s1, s2, s3
3. Delete base snapshot

Actual results:
s3's vm config contains wrong data: the image parent ID still points to s2's old volume rather than to the ID s2 inherited from s1 (remember, after the cold merge, s2's data is copied into s1's volume and s2 takes over s1's IDs).

Expected results:
The image parent ID in s3's vm config points to the correct parent (66fa58c4, the ID s2 inherited from s1).


Additional info:
The attached XML files demonstrate the issue.
Before deleting snap1:
snap1: image id is: 66fa58c4, parent is: 000000
snap2: image id is: dc517a4b, parent is: 66fa58c4
snap3: image id is: 1cf0dd9a, parent is: dc517a4b

After deleting snap1:
snap2: image id is: 66fa58c4, parent is: 000000
snap3: image id is: 1cf0dd9a, parent is: dc517a4b

Expected: in snap3, the parent ID should be 66fa58c4.
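
For illustration only, here is a minimal Python sketch (not engine code; IDs shortened as above) of the remapping the snapshot configs are expected to undergo when the base snapshot is removed:

BASE_PARENT = "000000"  # shorthand for the all-zero parent ID

# image id -> parent id, as recorded in each snapshot's vm config before the merge
chain_before = {
    "66fa58c4": BASE_PARENT,   # snap1
    "dc517a4b": "66fa58c4",    # snap2
    "1cf0dd9a": "dc517a4b",    # snap3
}

def merge_base_snapshot(chain, removed_image, surviving_image):
    # After the cold merge, the surviving child (s2) takes over the removed
    # base's (s1) image and parent IDs, so any config that still references
    # the child's old image id must be repointed to the inherited id.
    # The missing repointing step (for s3) is what this bug is about.
    new_chain = {}
    for image, parent in chain.items():
        if image == surviving_image:
            new_chain[removed_image] = chain[removed_image]  # s2 inherits s1's IDs
        elif parent == surviving_image:
            new_chain[image] = removed_image                 # s3 must follow the rename
        elif image == removed_image:
            continue                                         # s1's own entry goes away
        else:
            new_chain[image] = parent
    return new_chain

chain_after = merge_base_snapshot(chain_before, "66fa58c4", "dc517a4b")
assert chain_after == {"66fa58c4": BASE_PARENT, "1cf0dd9a": "66fa58c4"}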

Comment 1 Ala Hino 2016-04-27 11:46:02 UTC
Created attachment 1151319 [details]
snap2 vm config before delete

Comment 2 Ala Hino 2016-04-27 11:46:34 UTC
Created attachment 1151320 [details]
snap3 vm config before delete

Comment 3 Ala Hino 2016-04-27 11:47:09 UTC
Created attachment 1151321 [details]
snap2 vm config after delete of snap1

Comment 4 Ala Hino 2016-04-27 11:47:28 UTC
Created attachment 1151322 [details]
snap3 vm config after delete of snap1

Comment 5 Allon Mureinik 2016-04-27 11:52:04 UTC
The real question is why we even need the parent there?

Comment 6 Sandro Bonazzola 2016-05-02 09:50:54 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

Comment 7 Yaniv Lavi 2016-05-23 13:14:37 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 8 Ala Hino 2016-06-02 13:39:33 UTC
(In reply to Allon Mureinik from comment #5)
> The real question is why we even need the parent there?

We have the disk config here, and part of it is the parent.
In any case, the snapshot OVF is not updated after cold merge, so none of its info is refreshed. More specifically, the image size is not updated, and after cold merge the presented size is wrong.

BZ 1333342 covers image size behavior after live merge.
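
As a quick way to eyeball what a snapshot's OVF actually records, here is a rough Python sketch (stdlib only). The Disk attribute names diskId/parentRef/actual_size are assumptions about the OVF that ovirt-engine writes, so verify them against a real OVF before relying on this:

import sys
import xml.etree.ElementTree as ET

def dump_disks(ovf_path):
    # Print the disk id, parent reference and actual size recorded for every
    # Disk element, ignoring namespace prefixes so the exact ovf namespace
    # URI does not matter.
    tree = ET.parse(ovf_path)
    for elem in tree.iter():
        if elem.tag.split("}")[-1] != "Disk":
            continue
        attrs = {k.split("}")[-1]: v for k, v in elem.attrib.items()}
        print("%s: diskId=%s parentRef=%s actual_size=%s" % (
            ovf_path, attrs.get("diskId"), attrs.get("parentRef"),
            attrs.get("actual_size")))

if __name__ == "__main__":
    # e.g. run it over the attached snap2/snap3 configs before and after the delete
    for path in sys.argv[1:]:
        dump_disks(path)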

Comment 11 Allon Mureinik 2016-07-07 12:08:55 UTC
Ala, can we have some doctext about this?

Comment 12 Kevin Alon Goldblatt 2016-07-18 14:20:54 UTC
Tested with the following code:
---------------------------------------
vdsm-4.18.4-2.el7ev.x86_64
rhevm-4.0.2-0.2.rc1.el7ev.noarch

Tested using the following scenario:
---------------------------------------
Steps to Reproduce:
1. Create a vm
2. Create 3 snapshots: s1, s2, s3
3. Delete base snapshot


My results are below.

Ala, it seems that I got the same results as you. Please check and give your input.
Before Snapshots
----------------------------------
PU_00000000-0000-0000-0000-000000000000  xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7   5.00g (The parent volume)

After snapshot s1
---------------------------------
PU_00000000-0000-0000-0000-000000000000  xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7   5.00g (Parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7  xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1   1.00g (s1 points to parent volume)


After snapshot s2
--------------------------------
PU_00000000-0000-0000-0000-000000000000  xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7   5.00g (Parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7  xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1   1.00g (s1 points to parent volume)
PU_e4c7904f-4360-46ea-9e65-79f51cb4c2b1  xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0   1.00g (s2 points to s1 image)



After snapshot s3
-------------------------------
PU_00000000-0000-0000-0000-000000000000  xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7   5.00g (Parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7  xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1   1.00g (s1 points to parent volume)
PU_e4c7904f-4360-46ea-9e65-79f51cb4c2b1  xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0   1.00g (s2 points to s1 image)
PU_75008a9d-4aac-4df5-9922-54d1cdd0e8f0  xxxx ff87e4b9-327e-4c30-879c-ec35a74007fb   1.00g (s3 points to s2 image)


After deleting snapshot s1
------------------------------
PU_00000000-0000-0000-0000-000000000000  xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7   5.00g (Parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7  xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0   1.00g (s2 points to parent volume)
PU_75008a9d-4aac-4df5-9922-54d1cdd0e8f0  xxxx ff87e4b9-327e-4c30-879c-ec35a74007fb   1.00g (s3 points to s2 image)
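
For completeness, a small Python sanity check over the (volume, PU_ tag) pairs listed above (illustrative only, not what was run here): after the merge, every PU_ parent reference should still resolve to an existing volume, mirroring the kind of dangling reference the original report describes for s3's config:

BLANK = "00000000-0000-0000-0000-000000000000"

# volume UUID -> parent UUID taken from the PU_ tags after deleting s1
chain = {
    "533e1517-d294-4e04-b769-ccf20cc1a6f7": BLANK,
    "75008a9d-4aac-4df5-9922-54d1cdd0e8f0": "533e1517-d294-4e04-b769-ccf20cc1a6f7",
    "ff87e4b9-327e-4c30-879c-ec35a74007fb": "75008a9d-4aac-4df5-9922-54d1cdd0e8f0",
}

for volume, parent in chain.items():
    if parent != BLANK and parent not in chain:
        raise AssertionError("dangling parent reference: %s -> %s" % (volume, parent))
print("volume chain is consistent")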

Comment 13 Kevin Alon Goldblatt 2016-07-18 14:31:02 UTC
Created attachment 1181126 [details]
vdsm server and engine logs

Added logs for the needinfo request.

Comment 14 Ala Hino 2016-07-18 14:58:56 UTC
Looks good to me.

Another way to verify this is based on the snapshot disk actual size:
VM -> Snapshots -> Disks (sub-tab in the lower right panel).

Create snapshots with data and then delete a snapshot while the VM is down. Before this fix, that size wouldn't change; now the displayed size reflects the actual size.