Bug 1308375 - Live snapshot deletion causing actual disk size to grow
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Platform: Unspecified
OS: Unspecified
Priority: urgent  Severity: high
Target Milestone: ovirt-3.6.3
Version: 3.6.0
Assigned To: Ala Hino
Status: Reopened
Depends On:
Reported: 2016-02-14 17:11 EST by Marina
Modified: 2016-03-10 01:56 EST
14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-03-09 14:47:26 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
vdsm log from my testing (15.14 MB, text/plain)
2016-02-14 17:26 EST, Marina
engine.log from my testing (1.49 MB, text/plain)
2016-02-14 17:27 EST, Marina
messages file from the host from my reproducer (66.22 KB, text/plain)
2016-02-14 17:40 EST, Marina

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 527613 None None None 2016-02-14 17:51 EST
oVirt gerrit 53318 None None None 2016-02-17 04:31 EST
Red Hat Product Errata RHBA-2016:0362 normal SHIPPED_LIVE vdsm 3.6.0 bug fix and enhancement update 2016-03-09 18:49:32 EST

Description Marina 2016-02-14 17:11:02 EST
Description of problem:
Live snapshot deletion causes the actual disk size to grow (rather than returning to the original size).

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a VM with a 10G preallocated disk and start the VM.
2. Create a live snapshot on the preallocated disk while the VM is running. This adds 1G to the actual size of the disk (as expected, though it is unclear whether this is the preferred behaviour).
3. Perform live snapshot deletion.

Actual results:
The actual size of the disk grew to 12G, as did the corresponding LV.

Expected results:
The actual size should return to its value from before snapshot creation, i.e. 10G.

Additional info:
If you keep creating and then deleting live snapshots, the actual size of the disk continues to grow by 2G after each iteration, until you run out of space on your storage domain.
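The 2G-per-iteration growth described above is simple arithmetic (1G allocated for the snapshot volume, plus roughly 1G of extension to let the merge succeed); a minimal shell sketch of the pattern, with the command to read the real value on a host left as a comment (the VG/LV names in that comment are hypothetical placeholders):

```shell
# Sizes as reported: the base volume starts at 10G and grows by 2G per
# snapshot-create/live-merge cycle (1G for the snapshot LV, 1G of
# extension made so the merge can succeed).
size=10
for cycle in 1 2 3; do
  size=$((size + 2))
  echo "after cycle ${cycle}: ${size}G"
done
# On a real host the actual size would be read from LVM instead, e.g.:
#   lvs --noheadings -o lv_size <vg_name>/<lv_uuid>
```

After three such cycles a 10G disk would already occupy 16G, which is how the domain eventually runs out of space.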
Comment 3 Marina 2016-02-14 17:26 EST
Created attachment 1127125 [details]
vdsm log from my testing
Comment 4 Marina 2016-02-14 17:27 EST
Created attachment 1127126 [details]
engine.log from my testing
Comment 7 Marina 2016-02-14 17:40 EST
Created attachment 1127128 [details]
messages file from the host from my reproducer
Comment 8 Allon Mureinik 2016-02-15 07:00:11 EST
On block storage, we have to resize the snapshot in order to allow the merge to succeed.

Tracking the watermark should allow us to do a better job here. It is currently targeted for 4.0 (see bug 1168327); once we implement it, we will reconsider whether it is feasible to backport to a 3.6.z branch.

*** This bug has been marked as a duplicate of bug 1168327 ***
Comment 9 Allon Mureinik 2016-02-15 10:09:04 EST
I missed the fact that the original disk was PREALLOCATED.
This is different from bug 1168327; reopening.

This does, however, look like the use case described in https://gerrit.ovirt.org/#/c/53317/.

Adam/Ala, can you please confirm (and, if so, push forward with this patch)?
Comment 10 Marina 2016-02-15 11:18:09 EST
Would the patch fix all snapshot deletion issues?
Comment 15 Elad 2016-02-23 04:38:37 EST
Marina, Ala,

On block storage, I created a VM with a 10G preallocated disk attached, started it, created a live snapshot, and live merged it. After the live merge finished, I ended up with an image whose volume is 1G bigger than its size at creation (before I took the snapshot). This means the image's actual size still grows after a live merge (by 1G instead of 2G as before the fix).

Is this the desired behaviour?
Comment 16 Marina 2016-02-23 10:44:03 EST
Hm, Elad, I do not think this is right. Why would it remain bigger if the snapshot is gone? (In general, why does the size of a preallocated disk grow with each snapshot creation? But that is a separate discussion.)

To my understanding, once the snapshot is deleted, the extra space allocated at its creation should go as well. I.e., if the disk's original size is 10G, then after creating and deleting a snapshot it should go back to 10G.
Comment 17 Elad 2016-02-25 04:05:34 EST
What's your input on this, Ala?
Comment 18 Ala Hino 2016-02-25 10:12:13 EST
Elad, let's meet on Sunday and see what's going on.
Basically, I tried to do the same but could not see the base image size grow.
Comment 22 Elad 2016-02-28 05:14:58 EST
Tested using latest code:

1) Created a VM with 10G disk attached
2) Started the VM and created a snapshot. Image actual size increased to 11G
3) Deleted the snapshot while the VM was running (live merge)

The image size decreased to 10G, as expected.

Before snapshot creation:
687c921c-b6e7-4062-bfaa-85a94ecc5577  10.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_7,PU_00000000-0000-0000-0000-000000000000

After snapshot creation:
687c921c-b6e7-4062-bfaa-85a94ecc5577  10.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_7,PU_00000000-0000-0000-0000-000000000000
8aec505e-fe09-483a-b2dc-d56bf3026b46   1.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_8,PU_687c921c-b6e7-4062-bfaa-85a94ecc5577

After snapshot deletion:
687c921c-b6e7-4062-bfaa-85a94ecc5577  10.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_7,PU_00000000-0000-0000-0000-000000000000
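The listings above can also be totalled mechanically to confirm the before/after sizes; a small sketch that sums the size column of such `lvs`-style output (the sample lines are copied from the post-snapshot state above; treating the size as the second whitespace-separated field is an assumption about the `lvs` column order used here):

```shell
# 'lvs --noheadings -o lv_name,lv_size,lv_tags'-style output, copied
# from the state after snapshot creation in the comment above.
lvs_output='687c921c-b6e7-4062-bfaa-85a94ecc5577 10.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_7,PU_00000000-0000-0000-0000-000000000000
8aec505e-fe09-483a-b2dc-d56bf3026b46 1.00g IU_c2392de3-cb92-467d-9fc1-e7972bd398cc,MD_8,PU_687c921c-b6e7-4062-bfaa-85a94ecc5577'

# Sum the second field (lv_size, in GiB) across the volume chain.
echo "$lvs_output" | awk '{gsub(/g/, "", $2); total += $2} END {printf "%.2f GiB\n", total}'
```

With the snapshot present the chain totals 11.00 GiB; after the merge only the 10.00g base LV remains, matching the expected behaviour.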
Comment 24 errata-xmlrpc 2016-03-09 14:47:26 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

Comment 25 Allon Mureinik 2016-03-10 01:56:46 EST
RHEV 3.6.0 has been released, setting status to CLOSED
