Bug 1314959 - Keep thin provisioning when migrating from thin provisioned glusterfs to iSCSI
Summary: Keep thin provisioning when migrating from thin provisioned glusterfs to iSCSI
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.3.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.3.2
Target Release: 4.3.2.1
Assignee: Eyal Shenitzky
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-05 06:04 UTC by nicolas
Modified: 2019-03-19 10:03 UTC
CC: 5 users

Fixed In Version: ovirt-engine-4.3.2.1
Clone Of:
Environment:
Last Closed: 2019-03-19 10:03:24 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.3?
rule-engine: planning_ack?
pm-rhel: devel_ack+
lsvaty: testing_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 97822 0 master MERGED core: set raw/sparse file base disks to cow if move to block base domain 2020-09-12 05:57:50 UTC

Description nicolas 2016-03-05 06:04:00 UTC
Description of problem:

We recently planned migrating from an existing storage backend (glusterfs) to a new one (iSCSI). On glusterfs, all our disks are thin provisioned; however, when moving disks to iSCSI, the following warning is shown:

   The following disks will become preallocated, and may consume considerably more space on the target: local-disk 

Indeed, after migration the disks are preallocated.

Thin provisioning is especially valuable to us because we use our infrastructure for teaching, and many VM pools are created for students. Most pools are sized with enough storage that they never need to be extended later, but in practice only about 10% of the disk space is actually used (around 600GB). If we migrated all machines as preallocated, we would be using around 6TB, which is a lot of wasted space.

This has been reported on the users list, and Pavel Gashev suggested moving the disk to file-based storage and then back to block storage:

> Please note that while disk moving keeps disk format, disk copying changes 
> format. So when you copy a thin provisioned disk to iSCSI storage it's being 
> converted to cow. The issue is that size of converted lv still looks like
> preallocated. You can decrease it manually via lvchange, or you can move it 
> to a file based storage and back. Moving disks keeps disk format, but fixes 
> its size.
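The mismatch Pavel describes, where a thin disk's size "still looks like preallocated" after copying, is the classic gap between a file's apparent size and its allocated size. A minimal Python sketch of that gap (using a hypothetical temporary file; assumes a filesystem with sparse-file support such as ext4 or XFS):

```python
import os
import tempfile

# Create a file with a large apparent size but almost no allocated data,
# mimicking a thin-provisioned disk image.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)       # 4 KiB of real data
os.truncate(fd, 1 << 30)        # extend to 1 GiB without allocating blocks
os.close(fd)

st = os.stat(path)
apparent = st.st_size           # what "ls -l" reports
allocated = st.st_blocks * 512  # what "du" reports: bytes actually allocated
print(f"apparent={apparent} allocated={allocated}")
os.remove(path)
```

On a sparse-capable filesystem the apparent size is 1 GiB while only a few KiB are allocated; the complaint in this bug is that after copying to block storage, the destination ends up with the two sizes equal.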

Also, please consider adding an option to move storage of all machines in a VM pool at once (maybe allowing to specify a maximum of VMs to migrate at a time?).

Comment 1 Sandro Bonazzola 2016-05-02 09:54:28 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

Comment 2 Yaniv Lavi 2016-05-23 13:16:24 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 3 Yaniv Lavi 2016-05-23 13:22:35 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 4 Tal Nisan 2017-02-07 15:32:20 UTC
Maor, I think this is a duplicate of another bug assigned to you, can you track the other bug please?

Comment 6 Yaniv Lavi 2017-02-23 11:24:58 UTC
Moving out all non-blockers/exceptions.

Comment 7 Maor 2017-04-20 17:09:48 UTC
(In reply to Maor from comment #5)
> It could be those two:
> https://bugzilla.redhat.com/show_bug.cgi?id=1358717
> https://bugzilla.redhat.com/show_bug.cgi?id=1419240

Now that those bugs are fixed, a few points need to be clarified:
1. The fix for those bugs depends on qemu-img map supporting SEEK_HOLE and SEEK_DATA, which allows map to detect sparseness.
2. We need to add a dropdown in the GUI to determine whether the destination disk should be sparse or preallocated.
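The SEEK_HOLE/SEEK_DATA interface mentioned in point 1 can be exercised directly from Python (Linux only; hypothetical temporary file). This is the same mechanism qemu-img map relies on to distinguish data extents from holes:

```python
import os
import tempfile

# Write a small data extent followed by a 1 MiB hole, then use
# SEEK_DATA / SEEK_HOLE to map where the real data lives (Linux only).
fd, path = tempfile.mkstemp()
os.write(fd, b"data")            # data extent at offset 0
os.truncate(fd, 1 << 20)         # the rest of the file is a hole
data_start = os.lseek(fd, 0, os.SEEK_DATA)  # offset of first data byte
hole_start = os.lseek(fd, 0, os.SEEK_HOLE)  # offset of first hole byte
os.close(fd)
os.remove(path)
print(f"data at {data_start}, hole starts at {hole_start}")
```

A filesystem without hole reporting simply reports the hole starting at end-of-file, which is why the fix depends on a qemu-img new enough to use this interface rather than treating the whole image as data.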

Comment 8 Sandro Bonazzola 2019-01-28 09:36:52 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 9 Lukas Svaty 2019-02-11 09:59:05 UTC
Raising priority here; we hit the same issue on a production environment (NFS <-> iSCSI). This causes a large amount of allocated space to be consumed, or requires complicated workarounds that involve shutting down VMs.

Comment 10 Evelina Shames 2019-03-17 14:52:38 UTC
Verified on engine 4.3.2.1.

Comment 11 Sandro Bonazzola 2019-03-19 10:03:24 UTC
This bugzilla is included in oVirt 4.3.2 release, published on March 19th 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.3.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

