Bug 1181133 - Redundant Storage allocation check when running VM as stateless
Summary: Redundant Storage allocation check when running VM as stateless
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Vered Volansky
QA Contact: Elad
URL:
Whiteboard: storage
Depends On:
Blocks: 960934
 
Reported: 2015-01-12 13:22 UTC by Vered Volansky
Modified: 2016-03-10 13:47 UTC (History)
9 users

Fixed In Version: 3.6.0-5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-27 07:51:53 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-3.6.0+
ylavi: planning_ack+
rule-engine: devel_ack+
rule-engine: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 37446 0 master ABANDONED core: Removed redundant storage allocation for stateless VM Never
oVirt gerrit 43926 0 master MERGED core: Fix RunVmStateless storage space validation Never

Description Vered Volansky 2015-01-12 13:22:25 UTC
The RunVm-as-stateless validation validates only the plugged disks.
The command's execute() then calls CreateAllSnapshotsFromVmCommand, whose CanDoAction (CDA) in turn validates all disks, plugged and unplugged, for snapshot space, since all of them will later be allocated in that command's execute method.

The correct behavior for this command as it stands is to snapshot the entire VM (all disks, plugged and unplugged). If only the plugged disks had snapshots, and any of the unplugged disks were later plugged while the VM is running stateless, data would be written directly to the disk (and not to a snapshot that is deleted later on). This is not the desired behaviour.

The validation for running a VM as stateless should therefore cover all the disks.
This should happen either once in CreateAllSnapshotsFromVmCommand's CDA, or directly in RunVmCommand's CDA.
The implementation should ensure a coherent flow for the user.
It is not yet clear whether the first option, which looks like the more elegant one, can do that; it needs some more looking into.
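The intended fix described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (it is not the actual oVirt API): the space validation sums the required snapshot space over every disk attached to the VM, deliberately without filtering on the plugged flag.

```java
import java.util.List;

// Hypothetical sketch of the desired CDA check: validate snapshot space
// for ALL of the VM's disks, plugged and unplugged alike, before the
// stateless run is allowed to proceed.
public class StatelessRunValidation {

    // Simplified stand-in for an oVirt disk (names are illustrative only).
    record Disk(String id, long requiredSnapshotSpaceGb, boolean plugged) {}

    // Returns true when the storage domain has room for a snapshot
    // volume of every disk; note there is no filter on plugged().
    static boolean validateSnapshotSpace(List<Disk> vmDisks, long freeSpaceGb) {
        long required = vmDisks.stream()
                .mapToLong(Disk::requiredSnapshotSpaceGb)
                .sum();
        return required <= freeSpaceGb;
    }

    public static void main(String[] args) {
        List<Disk> disks = List.of(
                new Disk("disk1", 1, true),
                new Disk("disk2", 1, false));   // unplugged disk still counted
        // 2G required vs 1G free: the run must be blocked.
        System.out.println(validateSnapshotSpace(disks, 1)); // prints false
    }
}
```

Validating the unplugged disk here is the whole point: if it were skipped and the disk was later hot-plugged during the stateless run, writes would land on the disk itself rather than on a snapshot.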

Comment 1 Allon Mureinik 2015-02-02 22:43:56 UTC
Vered, please add this bug to the storage allocation checks tracker.
Thanks!

Comment 2 Vered Volansky 2015-05-20 05:07:19 UTC
The problem of running several VMs at once arose in review. Backend.runInternalMultipleActions is the method called in this case; it runs all the CDAs in bulk on a single thread and only then spawns a thread per command, so there is nowhere to propagate the error message to.
This course of action was confirmed with Moti, as was the fact that nothing has been missed: the only way to actually surface the error message on CDA is to add the functionality to RunVmCommand's CDA. There are no plans, or ability, to add the desired functionality at this time; it will become possible when the MultipleActionsRunner is replaced.
This calls for another (simpler) solution, so I am moving the bug back to ASSIGNED.

Comment 3 Elad 2015-11-11 15:44:36 UTC
Tested the following:
- Had an iSCSI storage domain with 15G of free space
- Created a VM and configured it as stateless
- Created and attached 8 preallocated disks to the VM, plugging only one of them. After disk creation, the storage domain had 7G of free space
- Tried to start the VM

Results:
The attempt to start the VM was blocked by CDA with "not enough free space on the domain", because the allocation check covered all 8 disks. These would create an additional 8 1G volumes during stateless VM creation, while the domain had only 7G of free space left.
So the allocation check is indeed done for both plugged and unplugged disks.
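The arithmetic behind this verification scenario can be spelled out in a short sketch (values taken from the comment above; the ~1G snapshot volume per disk is as reported by QA, and the method name is illustrative):

```java
// Worked example of the space check that blocks the run: 8 snapshot
// volumes are needed (one per attached disk, plugged or not), but only
// 7G remain free on the domain.
public class SpaceCheck {

    // Total snapshot space needed: one volume per attached disk.
    static long requiredSnapshotSpaceGb(int diskCount, long perSnapshotGb) {
        return diskCount * perSnapshotGb;
    }

    public static void main(String[] args) {
        long freeGb = 7;                                  // free space after creating the disks
        long needed = requiredSnapshotSpaceGb(8, 1);      // 8 disks, ~1G snapshot volume each
        // 8G needed > 7G free, even though only one disk is plugged.
        System.out.println(needed > freeGb ? "blocked" : "allowed"); // prints "blocked"
    }
}
```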


2015-11-11 15:32:33,853 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (ajp-/127.0.0.1:8702-4) [78821e52] CanDoAction of action 'RunVm' failed for user admin@internal. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,ACTIO
N_TYPE_FAILED_DISK_SPACE_LOW_ON_STORAGE_DOMAIN,$storageName iscsi22


Verified using rhevm-3.6.0.2-0.1.el6.noarch

Comment 4 Sandro Bonazzola 2015-11-27 07:51:53 UTC
Since oVirt 3.6.0 has been released, moving from VERIFIED to CLOSED CURRENTRELEASE.

