Bug 1333342

Summary: snapshot disk actual size is not refreshing after merge
Product: [oVirt] ovirt-engine
Reporter: enax
Component: General
Assignee: Ala Hino <ahino>
Status: CLOSED CURRENTRELEASE
QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.6.5
CC: acanan, amureini, bugs, enax, michal.skrivanek, stirabos, tnisan
Target Milestone: ovirt-4.0.0-rc
Flags: rule-engine: ovirt-4.0.0+
rule-engine: planning_ack+
amureini: devel_ack+
acanan: testing_ack+
Target Release: 4.0.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The base volume's actual size was set by the engine after a live merge.
Consequence: A wrong actual size was presented after the live merge.
Fix: Set the actual size to the value retrieved from VDSM.
Result: The correct actual size is presented after a live merge.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-01 12:27:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
snap_disk_size (flags: none)
vdsm.log (flags: none)
engine.log (flags: none)
vdsm2.log (flags: none)
engine2.log (flags: none)

Description enax 2016-05-05 10:32:55 UTC
Created attachment 1154158 [details]
snap_disk_size

Description of problem:

If I delete a snapshot from the snapshot chain, it is merged up, but the snapshot disk's actual size is not refreshed.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce (see the sketch below):
1. create a VM with a thin provisioned disk
2. write some data to the disk, e.g. dd 10G to a file
3. create a snapshot
4. write more data to the disk, e.g. dd 10G to file2
5. create a snapshot
6. delete the first snapshot
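
For illustration, a minimal sketch of the snapshot part of this flow using the oVirt Python SDK (ovirtsdk4). The engine URL, credentials and VM name are hypothetical placeholders, and the dd writes of steps 2 and 4 happen inside the guest between the snapshot calls; this is an assumption-laden sketch, not the reporter's actual procedure.
---------------------------------------
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL, credentials and VM name are hypothetical.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=snap-test-vm')[0]  # step 1: the thin-disk VM
snaps_service = vms_service.vm_service(vm.id).snapshots_service()


def wait_until_ok(snap):
    # Snapshot creation is asynchronous; poll until the snapshot settles.
    snap_service = snaps_service.snapshot_service(snap.id)
    while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
        time.sleep(5)


# Steps 3 and 5: take the two snapshots (the 10G dd writes of steps 2 and 4
# run inside the guest before each snapshot).
snap1 = snaps_service.add(types.Snapshot(description='snap1'))
wait_until_ok(snap1)
snap2 = snaps_service.add(types.Snapshot(description='snap2'))
wait_until_ok(snap2)

# Step 6: delete the first snapshot, which triggers the merge.
snaps_service.snapshot_service(snap1.id).remove()

connection.close()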

Actual results:

the second snap's actual size is still 10G

Expected results:

20G

Additional info:

Comment 1 Tal Nisan 2016-05-06 11:45:09 UTC
Are you deleting the snapshot while the VM is running or while it's down?
Please attach VDSM & Engine logs

Comment 2 enax 2016-05-09 10:15:27 UTC
(In reply to Tal Nisan from comment #1)
> Are you deleting the snapshot while the VM is running or while it's down?
> Please attach VDSM & Engine logs

I ran new tests and the results are the same; it doesn't matter whether the VM is up or down.
The vdsm.log and engine.log contain two online snapshot creations and one merge.
I found a "volume does not exist" error message in the vdsm log.

When I ran the offline test I got an interesting warning; you can find it in the vdsm2 and engine2 logs.

eb83a36b-ad09-452d-8a62-0f559542e7b4::WARNING::2016-05-09 09:43:34,734::image::1320::Storage.Image::(merge) Auto shrink after merge failed
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 1318, in merge
    newVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 333, in shrinkToOptimalSize
    qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 156, in check
    raise QImgError(rc, out, err)
QImgError: ecode=3, stdout=['', '1209 leaked clusters were found on the image.', 'This means waste of disk space, but no harm to data.', '184448/1638400 = 11.26% allocated, 0.92% fragmented...
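
The QImgError above is vdsm wrapping a non-zero exit code from `qemu-img check`: exit code 3 specifically means the check completed and found leaked clusters but no corruption (per the qemu-img man page). A rough sketch of what the failing call amounts to, as an assumption rather than vdsm's exact code:
---------------------------------------
import subprocess


def check_qcow2(path):
    # Roughly what vdsm's qemuimg.check() does: run `qemu-img check` and
    # raise on any non-zero exit code.
    p = subprocess.Popen(
        ['qemu-img', 'check', '-f', 'qcow2', path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )
    out, err = p.communicate()
    # qemu-img check exit codes: 0 = consistent, 2 = corrupted,
    # 3 = leaked clusters but not corrupted (the case in the log above).
    if p.returncode != 0:
        raise RuntimeError('ecode=%d, stdout=%r, stderr=%r'
                           % (p.returncode, out, err))
    return out
---------------------------------------
So the auto-shrink aborted only because of leaked clusters, which qemu-img itself reports as waste of disk space but no harm to data.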

Comment 3 enax 2016-05-09 10:16:07 UTC
Created attachment 1155227 [details]
vdsm.log

Comment 4 enax 2016-05-09 10:16:34 UTC
Created attachment 1155228 [details]
engine.log

Comment 5 enax 2016-05-09 10:17:03 UTC
Created attachment 1155229 [details]
vdsm2.log

Comment 6 enax 2016-05-09 10:17:24 UTC
Created attachment 1155230 [details]
engine2.log

Comment 7 Yaniv Lavi 2016-05-09 10:53:35 UTC
Moving to the first RC, since things should not be targeted to the second one at this point.

Comment 8 Ala Hino 2016-06-02 13:41:19 UTC
I will fix this BZ for live merge.
For cold merge, refer to BZ 1330978.

Comment 9 Allon Mureinik 2016-06-06 12:55:22 UTC
Ala, this is a user-visible issue. Please add some doctext for it.

Comment 10 Kevin Alon Goldblatt 2016-07-19 13:31:47 UTC
Tested with the following code:
---------------------------------------
vdsm-4.18.4-2.el7ev.x86_64
rhevm-4.0.2-0.2.rc1.el7ev.noarch

Tested using the following scenario:
---------------------------------------
Steps to Reproduce:
1. create a VM with a thin provisioned disk
2. write some data to the disk, e.g. dd 10G to a file
3. create a snapshot
4. write more data to the disk, e.g. dd 10G to file2
5. create a snapshot
6. delete the first snapshot

Actual results:

the second snap's actual size is now 20G

Expected results:

20G
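
For reference, a hedged sketch (not part of the verification above) of reading the engine-reported actual size back through the oVirt Python SDK (ovirtsdk4); the connection details and disk name are hypothetical placeholders.
---------------------------------------
import ovirtsdk4 as sdk

# Placeholders: engine URL, credentials and disk name are hypothetical.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search='name=snap-test-vm_Disk1')[0]

# After the merge of the first snapshot, actual_size should account for
# both 10G writes (~20G), not just the data written after the snapshot.
print('actual size: %.1f GiB' % (disk.actual_size / 1024.0 ** 3))

connection.close()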

Moving to VERIFIED!