Created attachment 1151318 [details]
snap1 vm config before delete

Description of problem:
Image parent ID points to the wrong parent after live merge completes.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Create a VM
2. Create 3 snapshots: s1, s2, s3
3. Delete the base snapshot

Actual results:
The s3 VM config contains wrong data: the image parent ID still points to the s2 volume rather than to s1 (remember, after live merge, s2 data is copied to s1 and s2 takes over s1's IDs).

Expected results:
The image parent ID should be correct.

Additional info:
The attached XML files demonstrate the issue.

Before deleting snap1:
snap1: image id is 66fa58c4, parent is 000000
snap2: image id is dc517a4b, parent is 66fa58c4
snap3: image id is 1cf0dd9a, parent is dc517a4b

After deleting snap1:
snap2: image id is 66fa58c4, parent is 000000
snap3: image id is 1cf0dd9a, parent is dc517a4b

Expected: in snap3, the parent ID should be 66fa58c4.
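For reference, a minimal sketch of how the image/parent references can be pulled out of the attached VM config XMLs. It assumes the disks are described by <Disk> elements carrying fileRef and parentRef attributes, as in typical oVirt OVFs; the attribute names are an assumption, not taken from the attachments themselves:

    # Minimal sketch: print the image -> parent references recorded in a
    # snapshot VM config (OVF).
    # Assumption: disks are <Disk> elements with fileRef/parentRef attributes,
    # as in typical oVirt OVFs; adjust the names if the configs differ.
    import sys
    import xml.etree.ElementTree as ET

    def local_name(tag):
        # Strip the XML namespace, e.g. "{http://...}Disk" -> "Disk".
        return tag.rsplit("}", 1)[-1]

    def print_parent_refs(ovf_path):
        tree = ET.parse(ovf_path)
        for elem in tree.iter():
            if local_name(elem.tag) != "Disk":
                continue
            attrs = {local_name(k): v for k, v in elem.attrib.items()}
            print("image=%s parent=%s"
                  % (attrs.get("fileRef", "?"), attrs.get("parentRef", "?")))

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print("== %s ==" % path)
            print_parent_refs(path)

Run against the snap3 config from before and after the delete, the parent reference would be expected to move to 66fa58c4 but stays on dc517a4b.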
Created attachment 1151319 [details] snap2 vm config before delete
Created attachment 1151320 [details] snap3 vm config before delete
Created attachment 1151321 [details] snap2 vm config after delete of snap1
Created attachment 1151322 [details] snap3 vm config after delete of snap1
The real question is why we even need the parent there?
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.
oVirt 4.0 beta has been released, moving to RC milestone.
(In reply to Allon Mureinik from comment #5)
> The real question is why we even need the parent there?

We have the disk config here and the parent is part of it. In any case, the snapshot OVF is not updated after cold merge, so no info is updated. More specifically, the image size is not updated, so after cold merge the presented size is wrong. BZ 1333342 covers image size behavior after live merge.
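To make "the presented size is wrong" concrete: the size stored in the snapshot OVF can be compared by hand against what qemu reports for the volume on the host. A minimal sketch, assuming the volume path is readable from the host (a file on NFS, or an activated LV on block storage); the qemu-img JSON fields are standard, everything else here is an assumption:

    # Minimal sketch: report a volume's sizes as seen by qemu, to compare by
    # hand against the size/actual_size recorded in the snapshot OVF.
    # Assumption: the volume path is readable on the host (file on NFS, or an
    # active LV under /dev/<vg>/<lv> on block storage).
    import json
    import subprocess
    import sys

    def qemu_sizes(volume_path):
        out = subprocess.check_output(
            ["qemu-img", "info", "--output=json", volume_path])
        info = json.loads(out.decode())
        return info["virtual-size"], info.get("actual-size")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            virtual, actual = qemu_sizes(path)
            print("%s: virtual-size=%s actual-size=%s" % (path, virtual, actual))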
Ala, can we have some doctext about this?
Tested with the following code:
---------------------------------------
vdsm-4.18.4-2.el7ev.x86_64
rhevm-4.0.2-0.2.rc1.el7ev.noarch

Tested using the following scenario:
---------------------------------------
Steps to Reproduce:
1. Create a VM
2. Create 3 snapshots: s1, s2, s3
3. Delete the base snapshot

My results are below. Ala, it seems that I got the same results as you. Please check and give your input.

Before snapshots
----------------------------------
PU_00000000-0000-0000-0000-000000000000 xxx 533e1517-d294-4e04-b769-ccf20cc1a6f7 5.00g (the parent volume)

After snapshot s1
---------------------------------
PU_00000000-0000-0000-0000-000000000000 xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7 5.00g (parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7 xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1 1.00g (s1 points to parent volume)

After snapshot s2
--------------------------------
PU_00000000-0000-0000-0000-000000000000 xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7 5.00g (parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7 xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1 1.00g (s1 points to parent volume)
PU_e4c7904f-4360-46ea-9e65-79f51cb4c2b1 xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0 1.00g (s2 points to s1 image)

After snapshot s3
-------------------------------
PU_00000000-0000-0000-0000-000000000000 xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7 5.00g (parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7 xxxx e4c7904f-4360-46ea-9e65-79f51cb4c2b1 1.00g (s1 points to parent volume)
PU_e4c7904f-4360-46ea-9e65-79f51cb4c2b1 xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0 1.00g (s2 points to s1 image)
PU_75008a9d-4aac-4df5-9922-54d1cdd0e8f0 xxxx ff87e4b9-327e-4c30-879c-ec35a74007fb 1.00g (s3 points to s2 image)

After deleting snapshot s1
------------------------------
PU_00000000-0000-0000-0000-000000000000 xxxx 533e1517-d294-4e04-b769-ccf20cc1a6f7 5.00g (parent volume)
PU_533e1517-d294-4e04-b769-ccf20cc1a6f7 xxxx 75008a9d-4aac-4df5-9922-54d1cdd0e8f0 1.00g (s2 points to parent volume)
PU_75008a9d-4aac-4df5-9922-54d1cdd0e8f0 xxxx ff87e4b9-327e-4c30-879c-ec35a74007fb 1.00g (s3 points to s2 image)
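For context, the PU_<uuid> values in the listing above are the parent tags VDSM keeps on each volume LV on block storage. A minimal sketch of listing that chain directly, assuming the host sees the storage domain's VG and that the parent tag format is PU_<parent-volume-uuid> (a VDSM-internal detail that may vary between versions):

    # Minimal sketch: list each volume LV with its VDSM parent tag (PU_<uuid>)
    # on a block storage domain.
    # Assumptions: run on a host that sees the domain's VG; the VG name is the
    # storage domain UUID; VDSM stores the parent volume UUID in an LV tag
    # prefixed with "PU_" (internal detail, may vary between versions).
    import subprocess
    import sys

    def volume_parents(vg_name):
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "lv_name,lv_tags", vg_name])
        for line in out.decode().splitlines():
            fields = line.split()
            if not fields:
                continue
            lv_name = fields[0]
            tags = fields[1].split(",") if len(fields) > 1 else []
            parents = [t[len("PU_"):] for t in tags if t.startswith("PU_")]
            yield lv_name, parents[0] if parents else None

    if __name__ == "__main__":
        for lv, parent in volume_parents(sys.argv[1]):
            print("volume=%s parent=%s" % (lv, parent))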
Created attachment 1181126 [details] vdsm server and engine logs

Added logs for the Need_Info request.
Looks good to me.

Another way to verify this is based on the snapshot disk's actual size: VM -> Snapshots -> Disks (sub-tab in the lower right panel). Create snapshots with data and then delete a snapshot while the VM is down. Before this fix that size wouldn't change; now the size reflects the actual size.
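For completeness, a scripted variant of that check over the REST API. This is a sketch only: it assumes an oVirt 4 engine exposing the /vms/{id}/snapshots and /vms/{id}/snapshots/{id}/disks collections with JSON output, and the URL, credentials, and VM ID are placeholders; the exact JSON shape may differ between versions.

    # Minimal sketch: list the actual size reported for each snapshot disk
    # over the REST API, as a scripted alternative to the UI sub-tab check.
    # Assumptions: API v4, basic auth, JSON output; all values below are
    # placeholders.
    import requests

    ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder
    AUTH = ("admin@internal", "password")                    # placeholder
    VM_ID = "<vm-uuid>"                                      # placeholder

    def snapshot_disk_sizes(vm_id):
        headers = {"Accept": "application/json"}
        # verify=False assumes a self-signed engine CA; point verify at the
        # CA file in real use.
        snaps = requests.get("%s/vms/%s/snapshots" % (ENGINE, vm_id),
                             auth=AUTH, headers=headers, verify=False).json()
        for snap in snaps.get("snapshot", []):
            disks = requests.get(
                "%s/vms/%s/snapshots/%s/disks" % (ENGINE, vm_id, snap["id"]),
                auth=AUTH, headers=headers, verify=False).json()
            for disk in disks.get("disk", []):
                print("snapshot=%s disk=%s actual_size=%s"
                      % (snap.get("description"), disk.get("alias"),
                         disk.get("actual_size")))

    if __name__ == "__main__":
        snapshot_disk_sizes(VM_ID)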