Bug 1325468 (autostart-w-engine) - [RFE] Autostart of VMs that are down (with Engine assistance - Engine has to be up)
Summary: [RFE] Autostart of VMs that are down (with Engine assistance - Engine has to be up)
Keywords:
Status: CLOSED ERRATA
Alias: autostart-w-engine
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.0
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Andrej Krejcir
QA Contact: Polina
URL:
Whiteboard:
Duplicates: 1108678 ovirt_auto_start_vm_local_dc RHEV_auto_start_vms_local_dc 1607510 (view as bug list)
Depends On: 1801439
Blocks: 1607510 1670339
 
Reported: 2016-04-09 00:44 UTC by biholcomb
Modified: 2023-10-06 17:32 UTC
CC: 26 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
After a high-availability virtual machine (HA VM) crashes, the RHV Manager tries to restart it indefinitely: at first with a short delay between restarts, and after a specified number of failed retries with a longer delay. The Manager also starts crashed HA VMs in order of priority, delaying lower-priority VMs until higher-priority VMs are 'Up'. The current release adds new configuration options (see the engine-config example below this header):
* `RetryToRunAutoStartVmShortIntervalInSeconds`, the short delay, in seconds. The default value is `30`.
* `RetryToRunAutoStartVmLongIntervalInSeconds`, the long delay, in seconds. The default value is `1800`, which equals 30 minutes.
* `NumOfTriesToRunFailedAutoStartVmInShortIntervals`, the number of restart tries with short delays before switching to long delays. The default value is `10` tries.
* `MaxTimeAutoStartBlockedOnPriority`, the maximum time, in minutes, before starting a lower-priority VM. The default value is `10` minutes.
Clone Of:
Clones: 1607510 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:16:05 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-
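
The delays and retry counts described in the Doc Text above would typically be tuned with the engine-config tool on the Manager machine. A minimal illustration, assuming the usual engine-config workflow (the values shown are the defaults from the Doc Text, and the ovirt-engine service must be restarted for a change to take effect):

  # engine-config -s RetryToRunAutoStartVmShortIntervalInSeconds=30
  # engine-config -s RetryToRunAutoStartVmLongIntervalInSeconds=1800
  # engine-config -s NumOfTriesToRunFailedAutoStartVmInShortIntervals=10
  # engine-config -s MaxTimeAutoStartBlockedOnPriority=10
  # systemctl restart ovirt-engine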




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 817363 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Bugzilla 1108678 1 None None None 2021-09-09 11:37:17 UTC
Red Hat Knowledge Base (Solution) 358303 0 None None None 2019-09-13 12:55:50 UTC
Red Hat Knowledge Base (Solution) 2889011 0 None None None 2019-09-13 12:55:41 UTC
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:16:42 UTC
oVirt gerrit 102522 0 None MERGED core: Improvements in AutoStartVmsRunner 2021-02-08 06:35:45 UTC
oVirt gerrit 102785 0 None MERGED core: Don't stop trying to start HA VMs 2021-02-08 06:35:45 UTC
oVirt gerrit 103233 0 None MERGED core: consider priority for auto-start 2021-02-08 06:35:46 UTC

Internal Links: 817363 1108678 1166657

Description biholcomb 2016-04-09 00:44:39 UTC
This is being submitted after a thread on the oVirt users mailing list. Coming from a VMware and Hyper-V background, I could not find a way to have the Engine start specified VMs in a given order. A question was posted, which led to a thread indicating no such ability exists in oVirt. Nir Soffer suggested I post an RFE, so here it is.

I'm running oVirt 3.6.4 with a host that is a physical server and I am using hosted-engine deployment.

Summary of request: VMware allows specifying a list of VMs that are to be started and allows the order to be set. This is host independent, so when the hosts in a cluster are started, the VMs are started in the order specified. Shutdown is done in reverse order.

Why is this needed? From one of my posts:

"The reason for the order is that you need some servers such as DNS, file servers, up first before other systems can resolve addresses or mount shares.  For Windows you need domain controllers running before the other windows systems that are part of the domain.  For applications such as Lotus Notes the servers had to come up in the correct order."

Michael Kleinpaste added this (my response included).

"On Fri, 2016-04-08 at 16:40 +0000, Michael Kleinpaste wrote:
Actually this is used pretty regularly in VMware environments.  For instance I've seen MSSQL systems running under AD credentials so they can access UNC shares.  If the AD domain controllers aren't up prior to the database server starting up, the MSSQL service won't start because it can't authenticate the user on the service.

That was exactly what I had in one of my jobs, where I had an AD domain for a university and we had several MSSQL servers that needed the AD credentials.  Plus I had an IBM FileNet ECM system with several servers that had to be started in the correct order.  With AD you need to have the domain controllers up and running or nothing works.  Our Linux systems also had some that depended on others.  If they weren't started in the correct order they had to be restarted again.  And it's not really a VMware-specific need; any environment, whether physical or virtual, needs a startup sequence.  Before we virtualized we had a written procedure on which servers came up and in what order (and the reverse for a shutdown of the site).  When moving to virtualization, the virtualization system replaces people flipping switches with its startup/shutdown order, or it should."

More discussion about what is needed.

Quote from Nir's response.

"Lets say you have a way to order the vms when some vms are down.

What will happen to when dns, file servers or domain controller will crash?
Do you have to restart all the vms depending on them or they can
recover and detect that vm they depend on were restarted?

Seems that what you need is a systemd for your data center - every
host define the host it depends on, and the system generate the
correct order to start the vms, starting vms in the same time when
possible.

Please reply also to the list, this thread may be useful to others.

Nir"

My response (and others have indicated the need for this).

"I think we need to make sure we distinguish between the start up phase and the running phase which is what happens after everyone is up and running happily.  Based on my experience the crash of a server is considered separately from startup. 

As mentioned in my response to someone else's comment on the list, the startup sequence is the same as people flipping switches on physical servers, following a documented procedure to start or shut down a datacenter as we did in the "old days".  With physical boxes we had to shut down/start up manually, and we did it in a sequence that we had written down.  With virtualization, and since we had VMware, we could automate that process, so as we spun up the hosts VMware started spinning up the guests in the order specified; we got our DNS boxes up first, then others.  For Active Directory we started the domain controllers first, then other servers such as file servers and application servers in the sequence needed for the applications to run.

Crashes of DNS, file servers, or domain controllers after the datacenter is up and running are handled (or should be) by redundancy of the servers or the service they provide.  You have multiple DNS servers, and the resolver will try the secondary/tertiary/whatever if the primary is down.  File servers are the same.  For Gluster, CephFS, or MS DFS you have (or should have set up) the ability to keep running if one of the servers goes down.  A redundant file server setup will handle a server crash.  For domain controllers you should have at least two (we had six in our environment), and when one goes down the others keep the domain running by shifting the services to others and continuing to provide authentication, etc.  Generally, what we did when a domain controller crashed was fix it if possible, and if it was not fixable, pull its pieces out of the domain and spin up a new one.  Same for DNS or file servers: when they go down, find out why, fix it or replace the server, and get the service redundant again.  Also, oVirt has the watchdog function, so if a VM goes down it will try to restart it.  If it can't restart, then we're dealing with a crashed server, which we should have provided for in our data center design.

My wish is to have oVirt allow us to do what VMware does: let us say start/don't start these servers in the order I specify, so that when the Engine is ready it looks at the list and begins turning on VMs in the order specified (shutdown is done in the reverse order).  For large datacenters with many VMs, manual startup is a pain.  Once it's running, rely on good practices of having redundant servers (i.e. more than one DNS), file servers that can handle the failure of a server, and multiple domain controllers, which is not something we need to burden oVirt with.  Handling of failures needs to be done by the people in charge, who are supposed to design the data center based on their risk/cost analysis.

If I understand what you said - I'm not sure we have to define a dependency list where we define dependencies like we would with systemd or a package manager.  It doesn't need to be complex, since just a simple list of priority: VM id/name pairs will work.  All we need is to be able to say "start these VMs when the Engine is ready".  The order or dependency is set by where I put the VM in the list.  If it's at the top, start it first, then move to the second one, and so on.  There can be settings for the delay between starting VMs and how long to wait for a VM to come up before assuming it's dead and moving on to the next.  Tasking oVirt with the job of figuring out VM dependencies would be a nightmare for oVirt and whoever had to program it <G>.  We as data center administrators should be handling that task.  Yes, we manually set the list, but trying to automate a dependency chain would be pretty difficult.

I envision a web admin portal GUI where we define a simple list of VMs that we want the Engine to autostart.  We need to be able to move them up/down the list.  The list is stored somewhere for the Engine (database? whatever else the Engine has as storage areas - I'm not really familiar with the Engine internals), and when the Engine is up and ready to start VMs, it retrieves the list, starts the first VM in the list, waits some time (0-?? seconds), and then moves on to the next one.  If the VM hasn't started in the set time, the Engine moves on to the next one.

Note that Microsoft's Hyper-V also provides automatic VM startup, but it is done at a VM level where you just tell the VM to start when the Hyper-V host starts.  If you want sequencing you have to set time delays.  Auto startup is better than nothing, but Hyper-V is a nightmare for trying to sequence VMs.

I think VMware did it right in allowing both autostart and the ability to sequence the startup of VMs, so it's host independent.  For reference, VMware also allows delays between VM startups, as does Hyper-V."
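
[Editorial note: to make the requested behavior concrete, here is a minimal sketch of an engine-assisted ordered autostart, written against the oVirt Python SDK (ovirtsdk4). The VM names, connection details, and timeout are hypothetical placeholders, not part of the RFE:]

  import time
  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  # Hypothetical ordered list: start these VMs first-to-last.
  START_ORDER = ['dns1', 'dc1', 'files1', 'app1']
  START_TIMEOUT = 300  # seconds to wait for each VM before moving on

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',  # placeholder
      username='admin@internal',
      password='secret',  # placeholder credentials
      ca_file='ca.pem',
  )
  vms_service = connection.system_service().vms_service()

  for name in START_ORDER:
      vm = vms_service.list(search='name=%s' % name)[0]
      vm_service = vms_service.vm_service(vm.id)
      if vm.status != types.VmStatus.UP:
          vm_service.start()
      # Wait until the VM reports Up (or the timeout expires) before
      # starting the next one in the list.
      deadline = time.time() + START_TIMEOUT
      while (vm_service.get().status != types.VmStatus.UP
             and time.time() < deadline):
          time.sleep(10)

  connection.close()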

Comment 1 HWSD 2016-04-09 19:33:00 UTC
Bug #1166657 is similar.

Another case explained on the mailing list:

"Pavel Gashev

I'd like to see the autostart feature as well. In my case I need to autostart a virtual router VM at remote site. The issue is that oVirt can't see the remote host until the virtual router is started on this host. So HA is not an option."

oVirt has an advanced HA system that starts the Self-Hosted Engine VM.
A solution could be to extend this HA to start other VMs if the engine is down or not started yet.

Comment 2 Derek Atkins 2016-10-18 18:20:00 UTC
(In reply to HWSD from comment #1)
> oVirt has an advanced HA system that starts the Self-Hosted Engine VM.
> A solution could be to extend this HA to start other VMs if the engine
> is down or not started yet.

Or a similar method that, once the engine is started, will ensure that additional VMs are started, and if necessary start them in the correct order.

The way VMware does it is that the UI lets you move a VM up and down in a list among three categories.  The 1st category is "start VMs in order", and you move a VM up or down in that list to manually control the order.  The hypervisor will start the VMs and wait a specified period of time between them (or possibly wait for the VM to come online).  The 2nd category is an unordered start, so it's effectively a checkbox to auto-start the VM.  The last category is a non-autostart list, which is effectively the checkbox unchecked.

It would be nice if oVirt had a similar feature:
* first, a checkbox to autostart a VM.  This effectively gets us categories 2 and 3.
* second, the ability to specify a start order on a select number of VMs.  This would be VMware category 1.

For most cases I think the engine can control this process.  However there may be cases (e.g. the virtual router) where a host-specific local VM may need to be started asynchronously from the engine.

For my particular case I'm just looking at a single-host/node ovirt system so everything is local/locally-hosted.

Comment 3 Red Hat Bugzilla Rules Engine 2016-12-27 16:38:14 UTC
This request has been proposed for two releases. This is invalid flag usage. The ovirt-future release flag has been cleared. If you wish to change the release flag, you must clear one release flag and then set the other release flag to ?.

Comment 4 Justin Zygmont 2017-04-26 08:08:58 UTC
I am also surprised this feature is missing.  I don't think the engine should control this, however, because I want the VMs to autostart with the host regardless of whether the engine is online or not.  This is how VMware does it, and this would be especially true for standalone hosts.

Having to start each and every one of your VMs manually is just silly.

Comment 5 Yaniv Kaul 2017-06-06 19:13:41 UTC
Duplicate of bug 817363 ?

Comment 6 Sven Kieske 2017-06-07 07:52:21 UTC
(In reply to Yaniv Kaul from comment #5)
> Duplicate of bug 817363 ?

Nobody outside RH can tell; this bug is restricted and I cannot view it.

Comment 7 Moran Goldboim 2017-07-02 18:28:46 UTC
(In reply to Yaniv Kaul from comment #5)
> Duplicate of bug 817363 ?

This bug also takes into account the ordering of VM startup, which seems to be an important aspect.

I guess we can use this one since it's more comprehensive. I think we can limit the initial implementation to 2 areas:
-autostart: will depend on pin-to-host (later can be extended with sanlock HA)
-ordering: additional factor to be added to VM<->VM affinity

The basic requirements should be:
  preconditions:
    Autostart:
      -VM is pinned to single host
    Ordering:
      -VM is part of affinity group (ordering/dependency should be added to affinity)
  UI/API:
    Autostart: VM flag which is enabled only if VM is pinned to host
    Ordering: should be part of affinity groups and relevant for VM to VM affinity.

Comment 9 Martin Sivák 2017-10-04 17:36:03 UTC
Moran and Yaniv, can we please use this bug to track only the "autostart WITH engine" case? The mentioned bug 817363 (which, it seems, needs an upstream clone) tracks the "autostart W/O engine" RFE.

Comment #4 can be handled using hosted engine: first the engine is started by the hosted engine tooling, and then the engine starts the HA+autostart VMs.

Comment 10 Chris Adams 2017-10-04 18:14:36 UTC
I see a need for the hosted-engine autostart described in comment #9. I had a situation last night where my primary oVirt cluster was hard shut down (fire in the building; the fire department killed all power for safety).  When it came back, only the engine started, so I had to start around 80 VMs individually.  We have several old VMs (e.g. application servers that were on CentOS 5; new CentOS 7 VMs had been built, but the old VMs were kept around powered off for reference), so I had to know "these VMs should be up".

My ideal world would be that, at least optionally, the engine would keep track of which VMs were up, and in the event of a full unclean shutdown (power removed), the engine would attempt to start them after the cluster and engine were back up.  Ordering control would be better (I needed to bring up DNS and database VMs first, for example), but at least some method to start all the VMs that were running when the system failed would be good.

Comment 11 Scott Walker 2017-11-27 23:30:43 UTC
(In reply to Derek Atkins from comment #2)
> (In reply to HWSD from comment #1)
> > oVirt has an advanced HA system that starts the Self-Hosted Engine VM.
> > A solution could be to extend this HA to start other VMs if the engine
> > is down or not started yet.
> 
> Or a similar method that, once the engine is started, will ensure that
> additional VMs are started, and if necessary start them in the correct order.
> 
> The way VMware does it is that the UI lets you move a VM up and down in a
> list among three categories.  The 1st category is "start VMs in order", and
> you move a VM up or down in that list to manually control the order.  The
> hypervisor will start the VMs and wait a specified period of time between
> them (or possibly wait for the VM to come online).  The 2nd category is an
> unordered start, so it's effectively a checkbox to auto-start the VM.  The
> last category is a non-autostart list, which is effectively the checkbox
> unchecked.
> 
> It would be nice if oVirt had a similar feature:
> * first, a checkbox to autostart a VM.  This effectively gets us categories
> 2 and 3.
> * second, the ability to specify a start order on a select number of VMs. 
> This would be VMware category 1.
> 
> For most cases I think the engine can control this process.  However there
> may be cases (e.g. the virtual router) where a host-specific local VM may
> need to be started asynchronously from the engine.
> 
> For my particular case I'm just looking at a single-host/node ovirt system
> so everything is local/locally-hosted.

100% this. Having all VMs start is nice, but it doesn't help in situations like wanting to make sure Puppet is up first, then starting DBs before web front ends, etc.  Comment #9 also supports this.  I'm not adding more information, simply adding to the "this would be amazing" chorus.

Comment 12 Michal Skrivanek 2017-12-06 13:18:27 UTC
(In reply to Chris Adams from comment #10)
BTW, if you just need to make sure a set of VMs is up and running, you can write a script using the REST API to check that they are up every few minutes (or do just a single check, like for the number of running VMs) and start them when needed (see the sketch below).
Or use High Availability on those VMs.
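
[Editorial note: for illustration, a minimal sketch of such a watchdog script against the REST API via the oVirt Python SDK (ovirtsdk4); the VM names, connection details, and check interval are placeholder assumptions:]

  import time
  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  REQUIRED = {'dns1', 'db1'}  # placeholder names of VMs that must stay up

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',  # placeholder
      username='admin@internal',
      password='secret',  # placeholder credentials
      ca_file='ca.pem',
  )
  vms_service = connection.system_service().vms_service()

  # Run as a simple daemon: restart any required VM that is down.
  while True:
      for vm in vms_service.list():
          if vm.name in REQUIRED and vm.status == types.VmStatus.DOWN:
              vms_service.vm_service(vm.id).start()
      time.sleep(300)  # re-check every five minutes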

Comment 13 Derek Atkins 2017-12-06 16:38:16 UTC
(In reply to Michal Skrivanek from comment #12)

> Or use High Availability on those VMs.

Except that HA does not work on a single-host, hosted-engine ovirt system (because there is no way to turn on HA in that situation -- or at least as of 4.0.x there was no way to turn it on; has that changed?).  So right now a script is the only solution, but a script cannot be managed through the ovirt engine UI.

Comment 14 Michal Skrivanek 2017-12-06 17:29:47 UTC
(In reply to Derek Atkins from comment #13)
> (In reply to Michal Skrivanek from comment #12)
> 
> > Or use High Availability on those VMs.
> 
> Except that HA does not work on a single-host, hosted-engine ovirt system
> (because there is no way to turn on HA in that situation -- or at least as
> of 4.0.x there was no way to turn it on; has that changed?).

HA can be enabled in 4.2 (since https://gerrit.ovirt.org/#/c/82014/), and should restart the VM on crash even when you have just a single host.
It may help, though it's possible it still won't work as intended when the system boots up. It would be great if you could try it out.

Comment 23 Martin Sivák 2018-04-03 15:09:07 UTC
Correct. This bug only covers the WITH-engine-available case, which is useful in combination with hosted engine.

The case of no engine is tracked in bug 817363.

Comment 24 Michal Skrivanek 2019-07-25 11:48:22 UTC
The current design is to piggyback on the HA VM functionality: all non-running HA VMs whose lease was terminated improperly (without a clean shutdown) will be started upon engine startup, without any particular order.

Comment 26 Ryan Barry 2019-09-13 12:54:54 UTC
*** Bug 1166657 has been marked as a duplicate of this bug. ***

Comment 27 Ryan Barry 2019-09-13 12:55:41 UTC
*** Bug 1108678 has been marked as a duplicate of this bug. ***

Comment 28 Ryan Barry 2019-09-13 12:55:51 UTC
*** Bug 1269908 has been marked as a duplicate of this bug. ***

Comment 29 Ryan Barry 2019-09-13 12:56:04 UTC
*** Bug 1607510 has been marked as a duplicate of this bug. ***

Comment 30 RHV bug bot 2019-12-13 13:15:10 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 31 RHV bug bot 2019-12-20 17:44:53 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 32 RHV bug bot 2020-01-08 14:49:01 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 33 RHV bug bot 2020-01-08 15:16:15 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 34 RHV bug bot 2020-01-24 19:50:54 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 35 Polina 2020-02-10 11:33:33 UTC
Tested on http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-18 according to the attached Polarion cases.

Please look at the case with host storage blocking, where HA VMs are restarted according to the Resume Behavior. In this case, sometimes the low-priority VMs are restarted before the medium-priority ones.

I also see this behavior in another scenario (sent in email):
The test:
I have:
four HA VMs with high priority named high_1, high_2, high_3, high_4;
four HA VMs with medium priority named medium_1, medium_2, medium_3, medium_4;
four HA VMs with low priority named low_priority_1, low_priority_2, low_priority_3, low_priority_4;
all running on the same host1 (two other hosts are in maintenance).
Send poweroff to the host via power management, wait for a while, then start the host again.

The VMs are started, but sometimes the medium and low priorities are interleaved (this never happens with the high-priority VMs - they are always started first).

[root@compute-ge-6 ovirt-engine]# tail -f engine.log |grep "Trying to restart VM"
2020-02-09 15:27:21,080+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [2a9cacc7] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_2 on Host host_mixed_1
2020-02-09 15:27:21,480+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [79c46e1e] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_3 on Host host_mixed_1
2020-02-09 15:27:22,060+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [2ff57f0a] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_4 on Host host_mixed_1
2020-02-09 15:28:31,578+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_4 on Host host_mixed_1
2020-02-09 15:28:31,597+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_2 on Host host_mixed_1
2020-02-09 15:28:31,615+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_3 on Host host_mixed_1
2020-02-09 15:28:32,517+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-83) [63b1aa62] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_1 on Host host_mixed_1
2020-02-09 15:29:46,857+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM high_1 on Host host_mixed_1
2020-02-09 15:29:47,824+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [55b821aa] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_2 on Host host_mixed_1
2020-02-09 15:31:02,135+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-30) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_2 on Host host_mixed_1
2020-02-09 15:31:03,050+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-52) [3b3fa31b] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_4 on Host host_mixed_1
2020-02-09 15:32:17,504+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-55) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_4 on Host host_mixed_1
2020-02-09 15:32:18,331+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70) [30747637] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_1 on Host host_mixed_1
2020-02-09 15:33:32,776+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_1 on Host host_mixed_1
2020-02-09 15:33:33,703+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [3c6aa9b2] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_1 on Host host_mixed_1
2020-02-09 15:33:33,965+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [3a4b4b22] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_2 on Host host_mixed_1
2020-02-09 15:33:34,299+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [4ff0c879] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_3 on Host host_mixed_1
2020-02-09 15:33:34,559+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [a1bfcac] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_3 on Host host_mixed_1
2020-02-09 15:33:34,828+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [4f5ef5cf] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_4 on Host host_mixed_1
2020-02-09 15:34:33,908+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-7) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_1 on Host host_mixed_1
2020-02-09 15:34:49,009+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_4 on Host host_mixed_1
2020-02-09 15:34:49,026+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_3 on Host host_mixed_1
2020-02-09 15:34:49,039+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM low_priority_2 on Host host_mixed_1
2020-02-09 15:34:49,062+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to restart VM medium_3 on Host host_mixed_1

I'm re-assigning the BZ for your investigation.

Comment 36 Polina 2020-02-17 12:26:07 UTC
Re-tested as described in https://bugzilla.redhat.com/show_bug.cgi?id=1801439#c6. The behavior is correct.

Comment 44 errata-xmlrpc 2020-08-04 13:16:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

