Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1411460

Summary: With vm_evenly_distributed cluster scheduling policy, all VMs are migrated to a single host when a host is placed in maintenance mode
Product: Red Hat Enterprise Virtualization Manager
Reporter: Gordon Watson <gwatson>
Component: ovirt-engine
Assignee: Martin Sivák <msivak>
Status: CLOSED ERRATA
QA Contact: Artyom <alukiano>
Severity: medium
Docs Contact:
Priority: medium
Version: 4.0.4
CC: bgraveno, dfediuck, gklein, lsurette, mavital, mgoldboi, msivak, rbalakri, Rhev-m-bugs, srevivo, ykaul
Target Milestone: ovirt-4.1.1
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, the 'VM evenly distributed' policy was not properly taking the pending virtual machines (scheduled, but not yet started) into account. Each scheduling run saw the same situation and selected the same host for the virtual machine it was scheduling. Now, the policy also counts pending virtual machines, so proper balancing is applied.
Story Points: ---
Clone Of:
: 1430296 (view as bug list)
Environment:
Last Closed: 2017-04-25 00:55:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1430296

Description Gordon Watson 2017-01-09 19:28:45 UTC
Description of problem:

Using the 'vm_evenly_distributed' cluster scheduling policy, when a host is placed in maintenance mode, all of its VMs are initially migrated to a single host; they are then redistributed to balance the cluster.

If no policy is used, the migrations appear to be distributed evenly across the remaining hosts.

So, the question is: with the 'vm_evenly_distributed' policy, is this the correct behaviour?


Version-Release number of selected component (if applicable):

- RHV 4.0.4
- RHEL 7.3 hosts
 

How reproducible:

100% in my testing.


Steps to Reproduce:

1. Configure the 'vm_evenly_distributed' cluster policy with the following:

HighVmCount=4
SpmVmGrace=5
MigrationThreshold=2

2. Three hosts, one with 'N' VMs (I had 14), the other two with none.
3. Place the host with the 14 VMs into maintenance mode.
4. Observe the initial distribution of VMs (after they have all been migrated, some will be migrated again to balance the load). In my case, all 14 went to one host.

AND;

1. Configure the 'vm_evenly_distributed' cluster policy with the following:

HighVmCount=4
SpmVmGrace=5
MigrationThreshold=2

2. Three hosts, two with 5 VMs and one with 4.
3. Place the host with the 4 VMs into maintenance mode.
4. Observe the initial distribution of VMs (after they have all been migrated, some will be migrated again to balance the load). In my case, all 4 went to one host.
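Both scenarios can be reproduced in miniature with a naive scheduler that, like the buggy behaviour described here, re-evaluates host load from running VMs only on every scheduling run. This is a hypothetical simplified model for illustration, not the actual ovirt-engine code; the host names are made up:

```python
def drain_host(running, source):
    """Model of per-VM scheduling during maintenance, ignoring in-flight migrations.

    running: dict mapping host name -> number of running VMs
    source:  the host being drained
    """
    to_migrate = running.pop(source)
    placements = []
    for _ in range(to_migrate):
        # Every run sees the same snapshot: migrations already scheduled are
        # not counted, so the same least-loaded host wins every time.
        dest = min(running, key=running.get)
        placements.append(dest)
    return placements

# Scenario 1: host_a has 14 VMs, host_b and host_c have none.
placements = drain_host({"host_a": 14, "host_b": 0, "host_c": 0}, "host_a")
# All 14 placements end up on the same host.
```

With this model, every one of the 14 scheduling runs compares the same pair of empty hosts and picks the same winner, matching the "all 14 went to one host" observation.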


Actual results:

All VMs are initially migrated to one host.


Expected results:

One might expect that with a 'vm_evenly_distributed' policy, the VMs would be evenly distributed in this scenario.

The policy does kick in after the initial migrations and redistributes the VMs evenly. However, this results in additional migrations and overhead.


Additional info:

Comment 2 Martin Sivák 2017-01-10 11:54:26 UTC
Thanks for the information, Gordon; we indeed have a bug there. The balancing rule works fine except for one small issue: we only count running VMs on destination hosts, but during a drain most of the VMs are only starting their migrations there and are missing from the computation.

We have a mechanism to track and count what we call pending VMs, we just somehow forgot to use it here.
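The fix Martin describes can be sketched as follows: track pending VMs (scheduled but not yet started) per destination host and include them when comparing loads, so each scheduling run sees the decisions made by earlier runs. This is illustrative Python, not the actual ovirt-engine implementation:

```python
from collections import Counter

def drain_host_with_pending(running, source):
    """Pick migration destinations counting both running and pending VMs."""
    to_migrate = running.pop(source)
    pending = Counter()  # host -> VMs scheduled there but not yet started
    placements = []
    for _ in range(to_migrate):
        # Effective load = running + pending, so a host that was just chosen
        # immediately looks more loaded to the next scheduling run.
        dest = min(running, key=lambda h: running[h] + pending[h])
        pending[dest] += 1
        placements.append(dest)
    return placements

# Draining 14 VMs onto two empty hosts now alternates between them:
placements = drain_host_with_pending({"host_a": 14, "host_b": 0, "host_c": 0},
                                     "host_a")
# 7 VMs land on each of the two remaining hosts.
```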

Comment 5 Artyom 2017-01-26 09:23:10 UTC
Verified on rhevm-4.1.0.2-0.2.el7.noarch

Environment has 3 hosts (host_1, host_2, host_3); host_1 is the SPM
1) Set scheduling policy to 'vm_evenly_distributed' with parameters
HighVmCount=4
SpmVmGrace=5
MigrationThreshold=2
2) Start 17 VMs
3) host_1 has 2 VMs
   host_2 has 8 VMs
   host_3 has 7 VMs
4) Put host_2 into maintenance
5) host_1 has 6 VMs
   host_3 has 11 VMs
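The verified distribution looks uneven at first glance, but it is consistent with the policy once the SPM grace is considered. The arithmetic below is illustrative only and assumes SpmVmGrace is added to the SPM host's VM count when the policy compares host loads:

```python
# Hypothetical check of the verified result above.
SpmVmGrace = 5
hosts = {"host_1": 6, "host_3": 11}  # VM counts after host_2 entered maintenance
spm = "host_1"

# Assumed model: the SPM host's effective load is its VM count plus the grace.
effective = {h: n + (SpmVmGrace if h == spm else 0) for h, n in hosts.items()}
# Both hosts end up with the same effective load of 11.
```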