Bug 1178021 - RHEV: Faulty storage allocation checks when adding a VM Pool with VMs.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assigned To: Vered Volansky
QA Contact: Kevin Alon Goldblatt
Whiteboard: storage
Keywords: Regression
Depends On: 1143888
Blocks: 960934
 
Reported: 2015-01-01 08:41 EST by Allon Mureinik
Modified: 2016-02-10 11:40 EST
CC List: 14 users

See Also:
Fixed In Version: org.ovirt.engine-root-3.5.0-30
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1143888
Environment:
Last Closed: 2015-02-15 04:15:07 EST
Type: Bug
Regression: ---
oVirt Team: Storage
Flags: amureini: needinfo-




External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 33525 None None None Never
oVirt gerrit 33694 None None None Never
oVirt gerrit 36553 master MERGED core: Adding a pool with vm storage allocation Never
oVirt gerrit 36585 ovirt-engine-3.5 MERGED core: Adding a pool with vm storage allocation Never
oVirt gerrit 36617 master MERGED core: Storage allocation validation fix on new VMs Never
oVirt gerrit 36686 ovirt-engine-3.5 MERGED core: Storage allocation validation fix on new VMs Never
Red Hat Product Errata RHBA-2015:0230 normal SHIPPED_LIVE Red Hat Enterprise Virtualization Manager 3.5.0-1 ASYNC 2015-02-16 14:50:27 EST

Description Allon Mureinik 2015-01-01 08:41:13 EST
+++ This bug was initially created as a clone of Bug #1143888 +++

AddVmPoolWithVmsCommand - VMs in the pool are added with empty disks (thinly provisioned from the template). There are no memory volumes or snapshots.
Storage allocation validation should be applied - StorageDomainValidator.hasSpaceForNewDisks(). The current validation uses old, deprecated code.
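
For illustration, here is a minimal, self-contained sketch of the intended validation pattern. The class and method names mirror the description above, but the types are simplified stand-ins, not the actual ovirt-engine classes:

import java.util.List;

// Simplified stand-ins for the engine's validator pattern (illustrative only).
class Disk {
    final long requiredBytes; // space the new volume actually needs on the domain
    Disk(long requiredBytes) { this.requiredBytes = requiredBytes; }
}

class StorageDomainValidator {
    private final long availableBytes;
    StorageDomainValidator(long availableBytes) { this.availableBytes = availableBytes; }

    // The check the description calls for: sum the space each new disk
    // actually needs (not the template's full size) and compare to free space.
    boolean hasSpaceForNewDisks(List<Disk> newDisks) {
        long required = newDisks.stream().mapToLong(d -> d.requiredBytes).sum();
        return required <= availableBytes;
    }
}

public class AddVmPoolValidationSketch {
    public static void main(String[] args) {
        long MiB = 1L << 20, GiB = 1L << 30;
        // 5 thin qcow volumes on a file domain: ~1M header each, not the 5G virtual size.
        List<Disk> poolDisks = List.of(new Disk(MiB), new Disk(MiB), new Disk(MiB),
                                       new Disk(MiB), new Disk(MiB));
        StorageDomainValidator validator = new StorageDomainValidator(20 * GiB);
        System.out.println("has space: " + validator.hasSpaceForNewDisks(poolDisks));
    }
}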

Verification of this bug should follow this table:

      | File Domain                             | Block Domain
 -----|-----------------------------------------|---------------------------------
 qcow | 1M (header size)                        | 1G
 -----|-----------------------------------------|---------------------------------
 raw  | preallocated: disk capacity (getSize()) | disk capacity
      | thin (sparse): 1M                       | (no raw sparse on block domains)
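
As an illustration only, the table can be read as the following sizing rule (a self-contained sketch with assumed names; the real engine logic is more involved):

// Per-volume space rule from the table above (assumed names, illustrative only).
public class VolumeSpaceRule {
    enum Format { QCOW, RAW }
    enum Allocation { PREALLOCATED, SPARSE }
    enum DomainType { FILE, BLOCK }

    static final long MiB = 1L << 20, GiB = 1L << 30;

    // Space to reserve for one new volume, per the verification table.
    static long requiredBytes(Format fmt, Allocation alloc, DomainType domain, long capacityBytes) {
        if (fmt == Format.QCOW) {
            // qcow: ~1M header on file domains, 1G initial extent on block domains.
            return domain == DomainType.FILE ? MiB : GiB;
        }
        if (alloc == Allocation.PREALLOCATED) {
            // raw preallocated: the full disk capacity (getSize()).
            return capacityBytes;
        }
        if (domain == DomainType.BLOCK) {
            // There is no raw sparse on block domains.
            throw new IllegalArgumentException("raw sparse is not supported on block domains");
        }
        return MiB; // raw thin (sparse) on a file domain
    }

    public static void main(String[] args) {
        System.out.println(requiredBytes(Format.QCOW, Allocation.SPARSE, DomainType.BLOCK, 5 * GiB)); // 1G
        System.out.println(requiredBytes(Format.RAW, Allocation.PREALLOCATED, DomainType.FILE, 5 * GiB)); // 5G
    }
}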


Verification should include a storage domain with enough space and one without enough space for all the VMs (with disks) in the pool.
In case of insufficient space, a relevant CDA message should appear.

--- Additional comment from Vered Volansky on 2014-09-29 10:33:19 IST ---

Since this is thinly provisioned, only sparse qcow is in use here (from the table above). Note that in this flow the template already exists, so there are no allocation checks for it; the allocation checks from the table above should be applied to all VMs (empty volume space * numOfVms).
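
To make the arithmetic concrete (illustrative numbers, not from this bug): a pool of 10 such VMs should require roughly 10 * 1G = 10G on a block domain, or 10 * 1M = 10M on a file domain, regardless of the template disk's virtual size.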

-------------------------------------------------------------------------------
This bug is a RHEV tracker for the QE team to verify against RHEVM 3.5.0
Comment 1 Elad 2015-01-01 11:11:04 EST
For some reason, when creating a pool of VMs out of a template, the allocation check is done according to the virtual size of the disk.
For example, creating a pool of 5 VMs out of a template whose disk's virtual size is 5G (actual size 1G, since it's sparse), on a domain with 20G of free space, is blocked by a CDA message about insufficient free space on the domain.

I conversed with Allon. We agreed that this is not the desired behaviour.

Allon, 
Please advise how to proceed, thanks.
Comment 2 Elad 2015-01-01 11:14:31 EST
Also, the validation should take into account the value of FreeSpaceCriticalLowInGB, which is 5G by default (it can be changed). In my setup, the value is the default (5G).
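
In other words (a minimal sketch of the arithmetic; the names are assumptions, not the engine's actual configuration API), the free-space check presumably has to keep the critical-low reserve out of the usable space:

// Sketch: free-space check that honors a critical-low reserve (assumed names).
public class FreeSpaceCheckSketch {
    static final long GiB = 1L << 30;

    // True if the domain can host the new volumes and still keep the reserve.
    static boolean hasSpace(long freeBytes, long requiredBytes, long criticalLowGiB) {
        return freeBytes - requiredBytes >= criticalLowGiB * GiB;
    }

    public static void main(String[] args) {
        long free = 20 * GiB;
        long required = 5 * GiB; // e.g. 5 thin qcow volumes on a block domain
        long reserveGiB = 5;     // FreeSpaceCriticalLowInGB default
        System.out.println(hasSpace(free, required, reserveGiB)); // true: 20 - 5 >= 5
    }
}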
Comment 4 Allon Mureinik 2015-01-04 03:41:21 EST
Vered is investigating, moving the needinfo to her.
Comment 5 Allon Mureinik 2015-01-11 07:20:08 EST
(In reply to Elad from comment #1)
> For some reason, when creating a pool of VMs out of a template, the
> allocation check is being done according to the virtual size of the disk. 
> For example, pool creation for 5 VMs out of a template, which its disk
> virtual size is 5G (actual 1G since it's sparse), on a domain that has 20G
> free space will be blocked on CDA of not enough free space on domain. 
> 
> I Conversed with Allon. We agreed that this is not the desired behaviour. 
> 
> Allon, 
> Please advise how to proceed, thanks.

Additional insight: the situation described in comment 1 is a faulty allocation check that takes the size of the template's disk into account, instead of just a thin QCOW layer on top of it per VM in the pool.

This means that if the template uses preallocated disks, you must have enough space for another preallocated disk per VM in the pool, effectively killing the notion of over-committing storage.
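To illustrate with made-up numbers: for a template with a 50G preallocated disk and a pool of 10 VMs, the faulty check demands roughly 10 x 50G = 500G of free space, whereas the correct per-VM cost is just the thin QCOW layer (per the table in the description: ~1G per VM on a block domain, ~1M on a file domain).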
Thus, marking as a REGRESSION.
Comment 6 Kevin Alon Goldblatt 2015-01-20 11:57:40 EST
Tested with 3.5 v3.18.
Moving to VERIFIED.
Comment 8 Eyal Edri 2015-02-15 04:15:07 EST
Bugs were moved by ERRATA to RELEASE PENDING but this bug was not closed, probably due to an errata error.
Closing, as 3.5.0 is released.
