Description of problem:

We recently planned migrating from an existing storage backend (GlusterFS) to a new one (iSCSI). On GlusterFS, all our disks are thin provisioned; however, when moving disks to iSCSI, the following warning is shown:

  The following disks will become preallocated, and may consume considerably more space on the target: local-disk

Indeed, after migration the disks are preallocated. Thin provisioning is especially valuable to us because we use our infrastructure for teaching: many VM pools are created for students, most of them with enough storage to not need extending afterwards, but in practice only about 10% of the disk space is actually used (around 600 GB). If we migrated all machines we would be using around 6 TB, which is a lot of wasted space.

This has been reported on the mailing list, and Pavel Gashev suggested moving the disk to a file-based storage domain and then returning it to the block-based one:

> Please note that while disk moving keeps disk format, disk copying changes
> format. So when you copy a thin provisioned disk to iSCSI storage it's being
> converted to cow. The issue is that size of converted lv still looks like
> preallocated. You can decrease it manually via lvchange, or you can move it
> to a file based storage and back. Moving disks keeps disk format, but fixes
> its size.

Also, please consider adding an option to move the storage of all machines in a VM pool at once (maybe allowing to specify a maximum number of VMs to migrate at a time?).
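The suggested workaround (moving the disk to file-based storage and back) works because a file-level copy can re-create holes for unwritten regions, shrinking the allocated size without changing the apparent size. A minimal sketch of the same effect using plain GNU coreutils (hypothetical file names; this is an illustration, not the oVirt code path):

```shell
# Fully-allocated 8 MiB image of zeroes (stands in for the oversized LV payload)
dd if=/dev/zero of=full.img bs=1M count=8 status=none

# Copying with explicit hole punching re-sparsifies the image, much like the
# round trip through a file-based storage domain described above
cp --sparse=always full.img resparse.img

# Allocated size drops even though the apparent size is unchanged
full_kb=$(du -k full.img | cut -f1)
sparse_kb=$(du -k resparse.img | cut -f1)
echo "allocated: full=${full_kb} KiB, resparsed=${sparse_kb} KiB"
```

The resparsed copy typically reports near-zero allocated blocks, since every zero run becomes a hole.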
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.
oVirt 4.0 beta has been released, moving to RC milestone.
Maor, I think this is a duplicate of another bug assigned to you. Can you track down the other bug, please?
It could be one of these two:
https://bugzilla.redhat.com/show_bug.cgi?id=1358717
https://bugzilla.redhat.com/show_bug.cgi?id=1419240
Moving out all non-blocker/exception bugs.
(In reply to Maor from comment #5)
> It could be those two:
> https://bugzilla.redhat.com/show_bug.cgi?id=1358717
> https://bugzilla.redhat.com/show_bug.cgi?id=1419240

Now that those bugs are fixed, a few points need to be clarified:
1. The fix for those bugs depends on qemu-img map supporting SEEK_HOLE and SEEK_DATA, which allows map to detect sparseness.
2. We need to add a dropdown in the GUI to determine whether the copied disk should be sparse or preallocated.
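For context on point 1: SEEK_HOLE/SEEK_DATA let a reader distinguish allocated data from holes, which is how qemu-img map can tell a thin image from a preallocated one. Roughly the same gap between apparent and allocated size is visible with plain GNU coreutils (a sketch with a hypothetical thin.img, assuming GNU stat/truncate; not the actual check oVirt performs):

```shell
# 1 GiB apparent size, but only the first filesystem block carries data
truncate -s 1G thin.img
printf 'data' | dd of=thin.img conv=notrunc status=none

apparent=$(stat -c %s thin.img)                                   # size the guest sees
allocated=$(( $(stat -c %b thin.img) * $(stat -c %B thin.img) ))  # bytes on disk
echo "apparent=${apparent} allocated=${allocated}"
rm -f thin.img
```

A thin image shows allocated far below apparent; after a copy to block storage without the fix, the two would match.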
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Raising priority here: we hit the same issue on a production environment (NFS <-> iSCSI). It causes a large amount of allocated space to be consumed, or requires complicated workarounds that involve shutting down VMs.
Verified on engine 4.3.2.1.
This bugzilla is included in oVirt 4.3.2 release, published on March 19th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.