Bug 1674386

| Summary: | VMs in host affinity groups are always processed in the same order. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Roman Hodain <rhodain> |
| Component: | ovirt-engine | Assignee: | Andrej Krejcir <akrejcir> |
| Status: | CLOSED ERRATA | QA Contact: | Polina <pagranat> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.8-2 | CC: | akrejcir, dfodor, emarcus, mavital, mkalinin, rbarry, Rhev-m-bugs, rhodain |
| Target Milestone: | ovirt-4.3.5 | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | (missing build info) | Doc Type: | Bug Fix |
| Doc Text: | Previously, the Affinity Rules Enforcer tried to migrate only one virtual machine; if that migration failed, the enforcer did not attempt another one. In this release, the Affinity Rules Enforcer tries to migrate multiple virtual machines until a migration succeeds. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-12 11:53:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | SLA | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1703275 | | |
Description  Roman Hodain  2019-02-11 08:57:15 UTC
What is the exact configuration of the VMs and hosts? How much memory do the VMs need, and how much do the hosts have?

The engine checks whether a VM can be migrated and, if it cannot, tries to migrate a different VM. This bug may be an edge case where the VM can be migrated, but the best host for it is the one where it is currently running, so it is not moved.

This issue will be solved by some of the patches that solve Bug 1651747. The other bug is in MODIFIED.

(In reply to Andrej Krejcir from comment #6)
> This issue will be solved by some of the patches that solve Bug 1651747.

Shouldn't this be retargeted to TM 4.3.5, as Bug 1651747 is?

WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3.z': '?'}', ] For more info please contact: rhv-devops

Possible steps to verify:

1. Have 3 VMs (VM1, VM2, VM3) and 2 hosts (Host1, Host2). VM1 and VM2 are running on Host1, VM3 on Host2.
2. Create these VM-to-host affinity groups:
   - positive hard (VM3, Host2)
   - positive soft (VM1, Host1)
   - positive soft (VM2, Host1)
3. Create a positive hard VM affinity group containing all three VMs.
4. Check the engine.log.

Expected results: the affinity rules enforcer runs every minute by default. It should try to migrate both VM1 and VM2 every time it runs, but they will not migrate. The log should contain "Running command: BalanceVmCommand" for both VMs, no more than a minute apart.

The reason for this setup is that VM1 and VM2 can be migrated, but because of their VM-to-host soft affinity, the scheduler chooses Host1 as the best host for them. As a result, when one of them is not migrated, the affinity rules enforcer tries to migrate the other one (see the sketch below).
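To make the fixed behavior concrete, here is a minimal sketch of the candidate-iteration logic described above. The `Scheduler` and `Migrator` interfaces and all names are hypothetical illustrations, not the actual ovirt-engine classes:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Illustrative sketch only -- names and structure are hypothetical,
// not the real org.ovirt.engine.core.bll.scheduling code.
class AffinityRulesEnforcerSketch {

    interface Scheduler {
        // Returns the best host for the VM, if any host fits.
        Optional<String> bestHostFor(String vmId);
    }

    interface Migrator {
        // Starts a migration; returns true if it was actually started.
        boolean migrate(String vmId, String targetHost);
    }

    private final Scheduler scheduler;
    private final Migrator migrator;

    AffinityRulesEnforcerSketch(Scheduler scheduler, Migrator migrator) {
        this.scheduler = scheduler;
        this.migrator = migrator;
    }

    /**
     * Old behavior: pick one candidate VM that violates affinity and try
     * only that one. In the edge case from this bug, its "best" host is
     * the host it already runs on, so nothing migrates and the enforcer
     * waits for the next run.
     *
     * Fixed behavior (sketched here): iterate over all candidates and
     * stop only when one migration actually starts.
     */
    boolean enforce(List<String> candidateVmIds,
                    Function<String, String> currentHostOf) {
        for (String vmId : candidateVmIds) {
            Optional<String> best = scheduler.bestHostFor(vmId);
            if (best.isEmpty() || best.get().equals(currentHostOf.apply(vmId))) {
                // Best host is the current one (or none at all): try the
                // next VM instead of giving up -- the essence of the fix.
                continue;
            }
            if (migrator.migrate(vmId, best.get())) {
                return true; // one successful migration per enforcement run
            }
        }
        return false; // no candidate could be moved this run
    }
}
```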
Verified according to the steps described in https://bugzilla.redhat.com/show_bug.cgi?id=1674386#c14

29d8f9d8-23a0-4028-935c-96ac9bba8c27 - VM1 (golden_env_mixed_virtio_0)
932c013e-5f5d-4ea0-8382-b132dea73c33 - VM2 (golden_env_mixed_virtio_1)
a4871027-040a-49fa-b869-673a31270f5f - VM3 (golden_env_mixed_virtio_2)

Once a minute there is a BalanceVmCommand report in engine.log for both VM1 and VM2:

2019-07-08 18:46:07,259+03 INFO [org.ovirt.engine.core.bll.BalanceVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-93) [33e8f107] Running command: BalanceVmCommand internal: true. Entities affected : ID: 29d8f9d8-23a0-4028-935c-96ac9bba8c27
2019-07-08 18:46:07,444+03 INFO [org.ovirt.engine.core.bll.BalanceVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-93) [4051b966] Running command: BalanceVmCommand internal: true. Entities affected : ID: 932c013e-5f5d-4ea0-8382-b132dea73c33
2019-07-08 18:46:07,292+03 WARN [org.ovirt.engine.core.bll.scheduling.policyunits.VmAffinityPolicyUnit] (EE-ManagedThreadFactory-engineScheduled-Thread-93) [33e8f107] Invalid affinity situation was detected while scheduling VMs: 'golden_env_mixed_virtio_0' (29d8f9d8-23a0-4028-935c-96ac9bba8c27). VMs belonging to the same positive enforcing affinity groups are running on more than one host.
2019-07-08 18:46:07,450+03 WARN [org.ovirt.engine.core.bll.scheduling.policyunits.VmAffinityPolicyUnit] (EE-ManagedThreadFactory-engineScheduled-Thread-93) [4051b966] Invalid affinity situation was detected while scheduling VMs: 'golden_env_mixed_virtio_1' (932c013e-5f5d-4ea0-8382-b132dea73c33). VMs belonging to the same positive enforcing affinity groups are running on more than one host.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2431
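As a footnote to the verification above, a minimal sketch of checking engine.log for the expected "Running command: BalanceVmCommand" entries. The log path is the standard RHV engine log location; the check itself and the class name are my illustration, not something from this bug, and the VM IDs are the ones quoted in the verification comment:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: confirm that a BalanceVmCommand line exists for each of the
// two VMs from the verification scenario. Adjust path and IDs as needed.
public class BalanceLogCheck {
    public static void main(String[] args) throws IOException {
        Path log = Path.of("/var/log/ovirt-engine/engine.log");
        List<String> vmIds = List.of(
                "29d8f9d8-23a0-4028-935c-96ac9bba8c27",  // VM1
                "932c013e-5f5d-4ea0-8382-b132dea73c33"); // VM2

        List<String> lines = Files.readAllLines(log);
        for (String vmId : vmIds) {
            boolean seen = lines.stream().anyMatch(l ->
                    l.contains("Running command: BalanceVmCommand")
                            && l.contains(vmId));
            System.out.println(vmId + ": "
                    + (seen ? "BalanceVmCommand found" : "NOT found"));
        }
    }
}
```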