Bug 1282217 - No error is issued when the number of prestarted VMs for a pool is updated to an amount for which the cluster's hosts have insufficient memory
Status: CLOSED DEFERRED
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 3.6.0.2
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: bugs@ovirt.org
QA Contact: sefi litmanovich
Depends On:
Blocks:
Reported: 2015-11-15 10:47 EST by sefi litmanovich
Modified: 2017-09-04 07:56 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-04 07:56:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-4.2?
slitmano: planning_ack?
slitmano: devel_ack?
slitmano: testing_ack?


Attachments
engine log (240.91 KB, application/x-gzip)
2015-11-15 10:47 EST, sefi litmanovich

Description sefi litmanovich 2015-11-15 10:47:15 EST
Created attachment 1094491 [details]
engine log

Description of problem:

When the number of prestarted VMs of a pool is updated to an amount that requires more memory than the host has, not all of the VMs come up, yet no error is issued in the audit log or in engine.log.
engine.log contains only INFO messages:

2015-11-15 17:00:36,136 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler_Worker-45) [] VmPool '2d669f40-4fb4-4e7e-8460-6a4aade30464' is missing 1 prestarted Vms, attempting to prestart 1 Vms
2015-11-15 17:00:36,217 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (DefaultQuartzScheduler_Worker-45) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='9c5784de-8294-45e9-a109-67c5e8547b0b'}), log id: 6b4e0e76
2015-11-15 17:00:36,217 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (DefaultQuartzScheduler_Worker-45) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 6b4e0e76
2015-11-15 17:00:36,228 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (DefaultQuartzScheduler_Worker-45) [] Candidate host 'host_mixed_3' ('f44b4375-9794-4ec2-9c80-8fa7c2fe83f3') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)

Important notes:

1. When attempting to run a VM manually in this case (the host does not have enough memory), an informative 'Failed Operation' window pops up, as expected.

2. If the initial setting of prestarted VMs already exceeds the available host memory, an error is issued for the first VM that fails to start, e.g.:
"Failed to complete starting of VM {vm_name}."
This may also be insufficient behaviour: it would be better if the user were informed that there is not enough memory on the host (as they are when starting a VM manually) and that the x remaining prestarted VMs cannot be started --> if this should be a separate bug, let me know and I'll open it.
I would also automatically set the prestarted VMs amount to the maximum possible, but this is not a must, I guess.



Version-Release number of selected component (if applicable):

rhevm-3.6.0.3-0.1.el6.noarch

How reproducible:
always

Steps to Reproduce:
1. Create a VM pool with ({host_available_memory}/1024MB + 1) VMs (each with 1024 MB of memory).
2. Edit the pool and set the number of prestarted VMs to a value below the threshold, e.g. if the host has 8 GB, set 7 prestarted VMs.
3. After all the VMs have started, edit the pool again and change the value to the maximum (the number of all VMs in the pool).
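The pool sizing in step 1 is simple arithmetic; a minimal sketch in Java (the codebase's language). The `PoolSizing` class and method names are hypothetical, and the 8 GB figure is taken from the example in step 2, not a measured value:

```java
// Hypothetical helper illustrating step 1: size the pool one VM larger
// than the host's available memory can accommodate, so the last
// prestarted VM is guaranteed to fail scheduling.
public class PoolSizing {
    static long poolSize(long hostAvailableMb, long vmMb) {
        // integer division: how many VMs fit, plus one that does not
        return hostAvailableMb / vmMb + 1;
    }

    public static void main(String[] args) {
        System.out.println(poolSize(8192, 1024)); // prints 9
    }
}
```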

Actual results:

Not all VMs are started, no error message or warning is issued, and the user has no idea why the VMs aren't running.
No ERROR can be found in engine.log.

Expected results:

A few options are available here:

1. In the first place, when X prestarted VMs are defined, the engine should check the capabilities and policies of the cluster and, if they do not allow that number of VMs, issue a warning upon pool update, just like the message the user gets when trying to run a VM that cannot run.

2. While iterating over the VMs and attempting to start them, when a VM cannot start, issue an error for that VM start action and a warning about the limited capabilities.
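Option 1 amounts to a capacity pre-check at pool-update time. A minimal sketch of such a check, assuming the engine can obtain the cluster's free memory; the class and method names here are hypothetical, not the actual ovirt-engine validation API:

```java
// Hypothetical pre-check for option 1 (not actual ovirt-engine code):
// reject a prestarted-VMs update that the cluster cannot accommodate.
public class PrestartedVmsCheck {
    /** True when the requested prestarted VMs fit into the cluster's free memory. */
    static boolean canPrestart(long requestedVms, long vmMemoryMb, long clusterFreeMemoryMb) {
        return requestedVms * vmMemoryMb <= clusterFreeMemoryMb;
    }

    public static void main(String[] args) {
        // 9 VMs of 1024 MB against 8192 MB free -> should be rejected
        System.out.println(canPrestart(9, 1024, 8192)); // prints false
    }
}
```

A real check would go through the scheduler's Memory filter rather than raw subtraction, so that overcommit and guaranteed-memory policies are respected; the point of the sketch is only that the validation can run before the update is accepted.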



Additional info:
Comment 1 Michal Skrivanek 2016-01-29 07:55:02 EST
The retry logic in VmPoolMonitor does not seem to be working correctly.
Comment 2 Moran Goldboim 2016-03-24 05:41:46 EDT
Postponing to a future version.
This use case needs to be considered as part of a bigger capacity-planning mechanism at the cluster level.
Comment 3 Red Hat Bugzilla Rules Engine 2016-03-24 05:41:49 EDT
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.
Comment 4 Michal Skrivanek 2016-12-21 04:08:09 EST
The bug was not addressed in time for 4.1. Postponing to 4.2.
Comment 6 Michal Skrivanek 2017-09-04 07:56:35 EDT
We didn't get to this for almost 2 years; closing.
If you feel it's important, feel free to reopen.
