Bug 1333342 - snapshot disk actual size is not refreshing after merge
Summary: snapshot disk actual size is not refreshing after merge
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.0.0-rc
Target Release: 4.0.0
Assignee: Ala Hino
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-05 10:32 UTC by enax
Modified: 2016-08-01 12:27 UTC
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The base volume's actual size was set by the engine after a live merge.
Consequence: A wrong actual size was presented after a live merge.
Fix: Set the actual size as retrieved from VDSM.
Result: The correct actual size is presented after a live merge.
Clone Of:
Environment:
Last Closed: 2016-08-01 12:27:33 UTC
oVirt Team: Storage
rule-engine: ovirt-4.0.0+
rule-engine: planning_ack+
amureini: devel_ack+
acanan: testing_ack+


Attachments
snap_disk_size (64.65 KB, image/png)
2016-05-05 10:32 UTC, enax
vdsm.log (134.25 KB, application/x-gzip)
2016-05-09 10:16 UTC, enax
engine.log (18.61 KB, application/x-gzip)
2016-05-09 10:16 UTC, enax
vdsm2.log (406.72 KB, application/x-gzip)
2016-05-09 10:17 UTC, enax
engine2.log (95.35 KB, application/x-gzip)
2016-05-09 10:17 UTC, enax


Links
oVirt gerrit 58560 (master, MERGED): core: Fix image size after live merge (last updated 2016-06-06 12:44:07 UTC)
oVirt gerrit 58671 (ovirt-engine-4.0, MERGED): core: Fix image size after live merge (last updated 2016-06-06 12:51:43 UTC)

Description enax 2016-05-05 10:32:55 UTC
Created attachment 1154158 [details]
snap_disk_size

Description of problem:

When I delete a snapshot from the snapshot chain, it is merged up, but the snapshot disk's actual size is not refreshed.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a VM with a thin provisioned disk
2. Write some data to the disk, e.g. dd 10G to a file
3. Create a snapshot
4. Write more data to the disk, e.g. dd 10G to a second file
5. Create a second snapshot
6. Delete the first snapshot
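
These steps can also be scripted against the engine REST API. Below is a minimal sketch using the oVirt Python SDK (ovirtsdk4, as available for 4.0); the URL, credentials, VM name, and sleep-based waits are placeholder assumptions, not part of the original report:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connection details are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=testvm')[0]  # 'testvm' is a placeholder
vm_service = vms_service.vm_service(vm.id)
snapshots_service = vm_service.snapshots_service()

def disk_actual_size():
    # Actual size reported for the VM's first disk attachment.
    attachment = vm_service.disk_attachments_service().list()[0]
    disk_service = connection.system_service().disks_service() \
        .disk_service(attachment.disk.id)
    return disk_service.get().actual_size

# ... write ~10G in the guest (e.g. with dd), then:
snap1 = snapshots_service.add(types.Snapshot(description='snap1'))
time.sleep(120)  # naive wait; real code should poll the snapshot status
# ... write another ~10G in the guest, then:
snapshots_service.add(types.Snapshot(description='snap2'))
time.sleep(120)

snapshots_service.snapshot_service(snap1.id).remove()  # triggers the merge
time.sleep(300)  # naive wait for the merge to finish
print('actual size after merge: %d bytes' % disk_actual_size())
# Expected ~20G; with this bug the pre-merge ~10G was still reported.

connection.close()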

Actual results:

The second snapshot's disk size is still shown as 10G.

Expected results:

20G

Additional info:

Comment 1 Tal Nisan 2016-05-06 11:45:09 UTC
Are you deleting the snapshot while the VM is running or while it's down?
Please attach VDSM & Engine logs

Comment 2 enax 2016-05-09 10:15:27 UTC
(In reply to Tal Nisan from comment #1)
> Are you deleting the snapshot while the VM is running or while it's down?
> Please attach VDSM & Engine logs

I ran new tests; the results are the same whether the VM is up or down.
The vdsm.log and engine.log contain two online snapshot creations and one merge.
I found a "volume does not exist" error message in the vdsm log.

When I ran the offline test I got an interesting warning, which you can find in the vdsm2 and engine2 logs.

eb83a36b-ad09-452d-8a62-0f559542e7b4::WARNING::2016-05-09 09:43:34,734::image::1320::Storage.Image::(merge) Auto shrink after merge failed
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 1318, in merge
    newVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 333, in shrinkToOptimalSize
    qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 156, in check
    raise QImgError(rc, out, err)
QImgError: ecode=3, stdout=['', '1209 leaked clusters were found on the image.', 'This means waste of disk space, but no harm to data.', '184448/1638400 = 11.26% allocated, 0.92% fragmented...
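
For context: exit code 3 from "qemu-img check" means the image has leaked clusters but is not corrupted, and vdsm's qemuimg wrapper raised QImgError for any non-zero exit code, which is why the auto-shrink after the merge aborted with the warning above. A standalone sketch of that check logic (not vdsm's exact code; the image path is a placeholder):

import subprocess

def check_image(path, fmt='qcow2'):
    # Run "qemu-img check" roughly the way vdsm's qemuimg.check did.
    p = subprocess.Popen(
        ['/usr/bin/qemu-img', 'check', '-f', fmt, path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    # qemu-img check exit codes: 0 = clean, 1 = check not completed,
    # 2 = corruption found, 3 = leaked clusters found, no corruption.
    if p.returncode != 0:
        # vdsm raised QImgError(rc, out, err) here even for rc=3, so a
        # leaky but intact image aborted shrinkToOptimalSize() above.
        raise RuntimeError('qemu-img check: rc=%d out=%r err=%r'
                           % (p.returncode, out, err))

check_image('/path/to/volume')  # placeholder path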

Comment 3 enax 2016-05-09 10:16:07 UTC
Created attachment 1155227 [details]
vdsm.log

Comment 4 enax 2016-05-09 10:16:34 UTC
Created attachment 1155228 [details]
engine.log

Comment 5 enax 2016-05-09 10:17:03 UTC
Created attachment 1155229 [details]
vdsm2.log

Comment 6 enax 2016-05-09 10:17:24 UTC
Created attachment 1155230 [details]
engine2.log

Comment 7 Yaniv Lavi 2016-05-09 10:53:35 UTC
Moving to the first RC, since things should not be targeted to the second one at this point.

Comment 8 Ala Hino 2016-06-02 13:41:19 UTC
I will fix this BZ for live merge.
For cold merge, refer to BZ 1330978.
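
Schematically, the fix described in the Doc Text amounts to the following (a hedged Python sketch of the idea only; the real change is Java code in ovirt-engine, gerrit 58560, and the names and the 'truesize' key are assumptions):

def update_base_volume_after_live_merge(base_volume, vdsm_volume_info):
    # Before the fix: the engine set the base volume's actual size
    # itself after a live merge, so the merged-in data was not
    # reflected in the reported size.
    # After the fix: use the actual size exactly as VDSM reports it
    # for the merged base volume.
    base_volume.actual_size = vdsm_volume_info['truesize']  # assumed key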

Comment 9 Allon Mureinik 2016-06-06 12:55:22 UTC
Ala, this is a user-visible issue. Please add some doctext for it.

Comment 10 Kevin Alon Goldblatt 2016-07-19 13:31:47 UTC
Tested with the following code:
---------------------------------------
vdsm-4.18.4-2.el7ev.x86_64
rhevm-4.0.2-0.2.rc1.el7ev.noarch

Tested using the following scenario:
---------------------------------------
Steps to Reproduce:
1. Create a VM with a thin provisioned disk
2. Write some data to the disk, e.g. dd 10G to a file
3. Create a snapshot
4. Write more data to the disk, e.g. dd 10G to a second file
5. Create a second snapshot
6. Delete the first snapshot

Actual results:

The second snapshot's disk size is now correctly shown as 20G.

Expected results:

20G

Moving to VERIFIED!

