Bug 1553425 - Number of "Prestarted VMs" is ignored and all VMs of the Pool start after editing an existing Pool.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.1.8
Hardware: Unspecified
OS: Unspecified
medium
high
Target Milestone: ovirt-4.3.0
: 4.3.0
Assignee: Shmuel Melamud
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks: 1576752
 
Reported: 2018-03-08 20:44 UTC by Ameya Charekar
Modified: 2022-03-13 15:00 UTC
13 users

Fixed In Version: ovirt-engine-4.3.0_alpha
Doc Type: No Doc Update
Doc Text:
This release ensures that, after editing an existing virtual machine pool, only the configured number of pre-started virtual machines is started.
Clone Of:
: 1576752 (view as bug list)
Environment:
Last Closed: 2019-05-08 12:37:22 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-45184 0 None None None 2022-03-13 15:00:47 UTC
Red Hat Knowledge Base (Solution) 3402791 0 None None None 2018-04-07 13:40:37 UTC
Red Hat Product Errata RHEA-2019:1085 0 None None None 2019-05-08 12:37:42 UTC
oVirt gerrit 90874 0 master MERGED core: vmpools: keep a list of prestarted VMs before WaitForLaunch 2020-03-15 21:40:20 UTC

Description Ameya Charekar 2018-03-08 20:44:06 UTC
Description of problem:
Changing the value of "Prestarted VMs" on an existing pool is ignored, and instead all VMs in the pool are powered on.

Version-Release number of selected component (if applicable):
rhevm-4.1.8.2-0.1.el7.noarch

How reproducible:
Sometimes

Steps to Reproduce:
1. Create VM Pool with 0 "Prestarted VMs"
2. Edit the pool and change "Prestarted VMs" to a value less than the number of assigned VMs
3. All VMs in the pool are started, ignoring the number of "Prestarted VMs"

Actual results:
All VMs are powering up.

Expected results:
Only the configured number of "Prestarted VMs" should start.

Additional info:

Comment 9 Michal Skrivanek 2018-04-12 15:14:10 UTC
The default for VmPoolMonitorIntervalInMinutes is 5 minutes. It is assumed that all VMs are attempted to start within that period. Apparently it was changed to 1 minute, which may not be enough.
There is, perhaps, a bug in that this assumption does not hold: it may take longer to start all the VMs if the number of VMs to prestart is high and the system is busy. However, the problem may "go away" if you simply change the interval back to 5 minutes (or increase it further).
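
For illustration, a minimal Java sketch of the timing arithmetic behind that assumption (the class, the per-VM start time, and the counts below are assumptions for illustration only, not ovirt-engine code):

// Illustrative only: shows why a short VmPoolMonitorIntervalInMinutes breaks the
// "all prestarts finish within one monitor cycle" assumption.
public class PrestartCycleEstimate {
    public static void main(String[] args) {
        int intervalSeconds = 1 * 60;      // VmPoolMonitorIntervalInMinutes = 1 (customer value)
        int secondsPerPrestart = 18;       // rough per-VM start time (assumed)
        int vmsToPrestart = 20;            // configured "Prestarted VMs" (assumed)

        int startsPerCycle = intervalSeconds / secondsPerPrestart;   // ~3
        System.out.printf("Starts per cycle: %d of %d requested%n",
                startsPerCycle, vmsToPrestart);
        // With only ~3 starts per 1-minute cycle, the next monitor run begins
        // while most of the requested VMs are still pending, so it requests
        // the full amount again and the pool overshoots its prestart target.
    }
}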

Comment 11 Marina Kalinin 2018-04-16 15:02:37 UTC
Apparently the customer set the following values back in their 3.4 or 3.5 environment:
          option_name           | option_value 
--------------------------------+--------------
 VmPoolMonitorMaxAttempts       | 3
 VmPoolMonitorBatchSize         | 50
 VmPoolMonitorIntervalInMinutes | 1


Those values probably interfere with the required behavior.
The idea back then was to prestart as many VMs as possible, which is different from what they are requesting today.
I will check further and update the bug later.

Comment 12 Roman Hodain 2018-04-19 12:52:50 UTC
I have reviewed the data. The problem is related to the number of threads that are trying to start the VMs. Below is a grep from the engine logs related to pool 67e3ea67-811d-4703-919a-269af29c21a5 (MyPool) [1]. The number of VMs in the pool is 33 and the pool is set to prestart 20 VMs. The operation ended up with 28 prestarted VMs; some of the VMs failed due to lack of memory on the hypervisors, so the entire VM pool was not started.

One of the possible ways to reproduce this is to edit the pool twice and click OK.

[1]:
engine.log:
2018-04-19 09:28:55,441+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [15f44bcd] VmPool '67e3ea67-811d-4703-919a-269af29c21a5' is missing 20 prestarted VMs, attempting to prestart 20 VMs
2018-04-19 09:28:58,497+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [15f44bcd] Running VM 'MyPool-21' as stateless
2018-04-19 09:29:07,785+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [b956bed] Running VM 'MyPool-21' as stateless succeeded
2018-04-19 09:29:09,362+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [b956bed] Running VM 'MyPool-27' as stateless
2018-04-19 09:29:24,966+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1f5b258a] Running VM 'MyPool-27' as stateless succeeded
2018-04-19 09:29:26,062+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1f5b258a] Running VM 'MyPool-28' as stateless
2018-04-19 09:29:44,524+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [713d03c] Running VM 'MyPool-28' as stateless succeeded
2018-04-19 09:29:47,034+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [713d03c] Running VM 'MyPool-29' as stateless
2018-04-19 09:30:06,566+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1a1c3ceb] Running VM 'MyPool-29' as stateless succeeded
2018-04-19 09:30:10,235+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1a1c3ceb] Running VM 'MyPool-32' as stateless
2018-04-19 09:30:24,748+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [530d5796] Running VM 'MyPool-32' as stateless succeeded
2018-04-19 09:30:27,143+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [530d5796] Running VM 'MyPool-33' as stateless
2018-04-19 09:30:27,450+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler8) [51fb1f71] VmPool '67e3ea67-811d-4703-919a-269af29c21a5' is missing 20 prestarted VMs, attempting to prestart 20 VMs
2018-04-19 09:30:34,102+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler8) [51fb1f71] Running VM 'MyPool-1' as stateless
2018-04-19 09:30:40,739+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1df3a518] Running VM 'MyPool-33' as stateless succeeded
2018-04-19 09:30:41,264+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler8) [d0ecbe1] Running VM 'MyPool-1' as stateless succeeded
2018-04-19 09:30:44,241+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler8) [d0ecbe1] Running VM 'MyPool-2' as stateless
2018-04-19 09:30:48,853+02 INFO  [org.ovirt.engine.core.bll.VmPoolMonitor] (DefaultQuartzScheduler5) [1df3a518] Running VM 'MyPool-3' as stateless
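
The log shows a second monitor cycle (DefaultQuartzScheduler8) again reporting "missing 20 prestarted VMs" while the first cycle is still launching its batch. The merged gerrit change ("core: vmpools: keep a list of prestarted VMs before WaitForLaunch") points at the fix: count prestart attempts that are still in flight. A minimal Java sketch of that idea, with hypothetical names (PoolPrestartMonitor, pendingPrestarts, pickStoppedVm, runAsStateless are illustrative and do not mirror the actual ovirt-engine API):

import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the fix idea only; names are hypothetical.
public class PoolPrestartMonitor {
    // VMs whose prestart was requested but which have not yet reached a
    // state that the next monitor cycle would count as prestarted.
    private final Set<UUID> pendingPrestarts = ConcurrentHashMap.newKeySet();

    public void monitorCycle(int configuredPrestarted, Set<UUID> alreadyPrestarted) {
        // Count VMs that are already up *plus* those still being launched,
        // so overlapping cycles do not each try to start the full amount.
        int effective = alreadyPrestarted.size() + pendingPrestarts.size();
        int missing = Math.max(0, configuredPrestarted - effective);

        for (int i = 0; i < missing; i++) {
            UUID vmId = pickStoppedVm();        // hypothetical helper
            pendingPrestarts.add(vmId);
            runAsStateless(vmId);               // hypothetical helper
        }
    }

    // Called when a prestarted VM is reported as running (or its start failed),
    // so it stops being counted as "in flight".
    public void onPrestartSettled(UUID vmId) {
        pendingPrestarts.remove(vmId);
    }

    private UUID pickStoppedVm() { return UUID.randomUUID(); }
    private void runAsStateless(UUID vmId) { /* launch the VM */ }
}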

Comment 20 meital avital 2018-12-13 15:39:36 UTC
Verified on: 4.3.0-0.4.master.20181207184726.git7928cae.el7


Verification Steps:

Scenario 1:
1. Create VM Pool with 6 VMs and 0 "Prestarted VMs"
2. Edit pool and change "Prestarted VMs" to 3
3. Only the 3 VMs started 


Scenario 2:
1. Create VM Pool with 6 VMs and 0 "Prestarted VMs"
2. Edit pool and change "Prestarted VMs" to 6
3. All 6 VMs started 
4. Power off all 6 VMs
5. Only 3 VMs Prestarted

Comment 22 errata-xmlrpc 2019-05-08 12:37:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

Comment 23 Daniel Gur 2019-08-28 13:15:00 UTC
sync2jira

Comment 24 Daniel Gur 2019-08-28 13:20:03 UTC
sync2jira

