Bug 1792905 - Sparsification is not reflected on image size of qcow volumes
Summary: Sparsification is not reflected on image size of qcow volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.3.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.4.4
Target Release: 4.4.4
Assignee: Arik
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-20 11:04 UTC by Marian Jankular
Modified: 2021-02-20 09:33 UTC (History)
CC List: 16 users

Fixed In Version: ovirt-engine-4.4.4.4
Doc Type: Bug Fix
Doc Text:
Previously, users could invoke the 'sparsify' operation on thin-provisioned (qcow) disks with a single volume. While the freed space was reclaimed by the storage device, the image size did not change, and users could perceive this as a failure to sparsify the image. In this release, sparsifying a thin-provisioned disk with a single volume is blocked.
Clone Of:
Environment:
Last Closed: 2021-02-02 13:58:29 UTC
oVirt Team: Virt
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:0312 0 None None None 2021-02-02 13:58:44 UTC
oVirt gerrit 112517 0 master MERGED core: block sparsifying qcow volumes 2021-02-18 09:15:05 UTC

Description Marian Jankular 2020-01-20 11:04:03 UTC
Description of problem:
virt-sparsify does not reduce the disk's actual size

Version-Release number of selected component (if applicable):
libguestfs-1.40.2-5.el7_7.1.x86_64
libguestfs-tools-c-1.40.2-5.el7_7.1.x86_64
python-libguestfs-1.40.2-5.el7_7.1.x86_64
vdsm-4.30.33-1.el7ev.x86_64

How reproducible:
every time

Steps to Reproduce:
1. Create a thin-provisioned disk for a VM on FC storage
2. Create a big file with dd
3. Remove the big file
4. Shut down the VM and sparsify the disk
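For reference, the "actual size" at issue here is the allocated blocks, not the apparent file size. The distinction can be illustrated locally with a sparse file (a minimal sketch with a hypothetical file name, outside any VM; this is not the reproduction itself):

```python
import os
import tempfile

# Create a 100 MiB file without writing any data: the apparent size
# grows, but no data blocks are allocated (a sparse file).
path = os.path.join(tempfile.mkdtemp(), "sparse.img")
with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)

st = os.stat(path)
apparent = st.st_size           # 104857600 bytes
allocated = st.st_blocks * 512  # far smaller for a fully sparse file
print(apparent, allocated)
```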

Actual results:
The disk's actual size remains the same

Expected results:
The disk is sparsified (its actual size shrinks)

Additional info:

Comment 1 Marian Jankular 2020-01-20 12:56:24 UTC
Same results for an NFS-based storage domain.

Comment 2 Ryan Barry 2020-01-21 00:51:31 UTC
The actual request here seems to be for trim support on storage domains; NFS clients don't need to be aware of the server implementing this.

For block SDs, RHV should support this. If sparsify doesn't work, though, we should fix it.

Comment 3 Marian Jankular 2020-01-21 15:19:19 UTC
Please disregard comment #1; I was using NFS v4.1, and sparsify is only supported with NFS v4.2.

Comment 5 Marian Jankular 2020-04-27 13:24:55 UTC
update:

I just found out the following:

1. Created vmA (initial actual size 2 GB)
2. Created 2 x 10 GB files in vmA (actual size is 22 GB)
3. Removed those files in vmA
4. Powered off vmA
5. Sparsified the disk of vmA; it finished successfully, and the actual size is still 22 GB
6. Cloned vmA to vmB; the actual size of the vmB disk is 2 GB

Comment 6 Ryan Barry 2020-04-27 14:01:34 UTC
(In reply to Marian Jankular from comment #4)
> Hello,
> 
> is there any progress on this? 
> 
> Customer replied that he needs sparsify function urgently as he needs to
> reclaim space.

Not targeted to 4.4.2; once 8.2 AV stabilizes, we can take another look.

Is your last workaround good enough?

Comment 11 Beni Pelled 2020-08-18 09:48:52 UTC
Same issue with an iSCSI disk.

Verified with:
- ovirt-engine-4.4.1.10-0.1.el8ev.noarch
- libvirt-6.0.0-25.module+el8.2.1+7154+47ffd890.x86_64
- vdsm-4.40.22-1.el8ev.x86_64


Reproduce steps:
1. Create and start a VM with an iSCSI disk.
2. Create a big file with 'dd if=/dev/urandom of=/var/tmp/test_file bs=100M count=100'.
3. Power off the VM and sparsify the disk.

Result:
- Once the file is created (step 2), actual_size & total_size [1] grow accordingly, but they don't shrink after sparsify

[1] https://<ENGINE_FQDN>/ovirt-engine/api/disks/<DISK_ID>
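The two fields can be read straight out of the API response. A minimal parsing sketch (element names as exposed by the oVirt REST API; the values below are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical response body from /ovirt-engine/api/disks/<DISK_ID>
sample = """
<disk id="deadbeef">
  <actual_size>23622320128</actual_size>
  <provisioned_size>10737418240</provisioned_size>
  <total_size>23622320128</total_size>
</disk>
"""

root = ET.fromstring(sample)
actual_size = int(root.findtext("actual_size"))
total_size = int(root.findtext("total_size"))
print(actual_size, total_size)  # both stay at ~22 GiB after sparsify
```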

Comment 12 Tomáš Golembiovský 2020-09-07 16:04:42 UTC
I tried to reproduce it with iSCSI and could not get it working either, even though virt-sparsify itself seems to run properly. I also tried running 'fstrim' from inside the VM, with no result. The disk is thin provisioned and the 'Enable Discard' flag for the disk of the VM is set. Is there anything else that needs to be configured for discard to work? If not, it seems like a storage bug. FYI, 'Discard After Delete' can be set for the domain, but it has no effect on the overall result here.

Comment 13 Richard W.M. Jones 2020-09-17 08:59:17 UTC
I don't know what's causing the problem but I will say that
discard is a very complex topic.  It must be enabled at every
level in the stack to work, and when it fails it does so silently.
So good luck determining what the problem is.  FWIW virt-sparsify
works fine for me and we've had no other reports of bugs.

Comment 14 Liran Rotenberg 2020-09-21 11:45:25 UTC
A while ago we had BZ 1516689, which resulted in a discussion about discard; the bottom line is that it ended up as an update to the sparsify feature page.

Comment 15 Tomáš Golembiovský 2020-09-22 11:39:25 UTC
I checked bug 1516689 and the feature page, and this bug looks like a duplicate of the older one. Apparently it is not easy to check whether the blocks were actually discarded on a block storage domain (you need to verify with your storage), or even to reuse the freed blocks in some cases. I wonder why it was not documented as suggested in one of the comments in bug 1516689. I think this can be closed as a duplicate, or we can turn it into a documentation bug and let storage document it properly so that users have correct expectations about the behavior.

Comment 16 Arik 2020-09-24 19:08:06 UTC
Right, indeed seems like a duplicate of bz 1516689. Would it make sense for VDSM to report an error if discard is not supported for the disk as suggested in https://bugzilla.redhat.com/show_bug.cgi?id=1516689#c31?

Comment 18 Tomáš Golembiovský 2020-10-13 11:58:01 UTC
After investigation and internal discussion we have decided to disable sparsify for qcow disks. Even though the sparsification works and the underlying storage actually discards the blocks, the metadata in the intermediate layers does not reflect this:

- The LVM layout that we use on block storage does not reflect the change in used blocks (LVM pools would help here, but they have other drawbacks, so that change will not happen)
- qcow itself does not reduce the apparent size (defragmenting it would have a performance impact and/or an impact on storage)

This means oVirt has no way to get proper information about the actual disk size. So even though the feature may have some benefits, we are not entirely sure we could resolve the situation just by improving the documentation, hence the decision to disable the feature.
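The resulting engine-side block can be sketched roughly as follows (hypothetical function and constant names; the real validation lives in ovirt-engine, see gerrit 112517):

```python
QCOW2 = "qcow2"
RAW = "raw"

def validate_sparsify(volume_format: str) -> None:
    """Reject sparsify for qcow volumes, mirroring the decision above."""
    if volume_format == QCOW2:
        raise ValueError(
            "Cannot sparsify Virtual Disk. "
            "Sparsifying is not supported for QCOW2 disk"
        )

validate_sparsify(RAW)  # raw volumes are still allowed
```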

Comment 19 Tomáš Golembiovský 2020-10-13 12:17:23 UTC
For the sake of completeness, let me add that we have one option for fixing the situation -- instead of sparsifying in place, we could sparsify from the old disk to a new disk. This would have an impact on a) performance, because you need to copy the data to a new disk; as well as b) storage: you need to make sure the storage domain has at least as much free space as the current size of the disk, to accommodate the worst case. It would also require non-trivial changes in engine and vdsm. All in all, it would be more of a new feature than just a bug fix.
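The storage precondition in (b) amounts to a simple check (hypothetical names; sizes in bytes):

```python
def copy_sparsify_feasible(domain_free: int, disk_actual_size: int) -> bool:
    # Worst case: nothing can be discarded, so the new copy ends up
    # as large as the current disk.
    return domain_free >= disk_actual_size

print(copy_sparsify_feasible(30 * 2**30, 22 * 2**30))  # True
print(copy_sparsify_feasible(10 * 2**30, 22 * 2**30))  # False
```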

Comment 28 Evelina Shames 2020-12-24 06:46:28 UTC
Verified on ovirt-engine-4.4.4.5-0.10.el8ev with the following steps:
1. Create a thin-provisioned disk for a VM on iSCSI storage
2. Create a big file with dd
3. Remove the big file
4. Shut down the VM and sparsify the disk

Expected results:
'sparsify' on thin-provisioned (qcow) volumes is blocked.

Actual results:
'Error while executing action: Cannot sparsify Virtual Disk. Sparsifying is not supported for QCOW2 disk'

Moving to 'Verified'.

Comment 32 errata-xmlrpc 2021-02-02 13:58:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV Engine and Host Common Packages 4.4.z [ovirt-4.4.4]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0312

