Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1630687

Summary: [RFE] provide affinity rules to the SPM role
Product: Red Hat Enterprise Virtualization Manager
Reporter: Germano Veit Michel <gveitmic>
Component: ovirt-engine
Assignee: Nobody <nobody>
Status: CLOSED DEFERRED
QA Contact: meital avital <mavital>
Severity: medium
Docs Contact:
Priority: low
Version: 4.2.6
CC: mkalinin, mtessun, rbarry, Rhev-m-bugs
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: lsvaty: testing_plan_complete-
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-25 12:45:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  workaround (flags: none)
  workaround (flags: none)

Description Germano Veit Michel 2018-09-19 06:12:32 UTC
Description of problem:

Currently, affinity rules can match a specific host, but it would also be useful to match a host's role (SPM or not).

This is particularly useful when using the Backup API. For example, the Commvault VSA Proxy VM is highly IO intensive. During a mass backup of VMs, the SPM is also busy creating and deleting volumes and extensions for the new layers. If this VM is running on the SPM host, all the IO load of the environment is concentrated on a single host, which is also a critical one (the SPM). That host can get into trouble under too much IO, and we want neither the SPM to suffer nor operations to fail because of excessive IO on the same host.

Ideally the load should be spread. The user should be able to specify a negative affinity between the SPM role and a particular VM.

Comment 1 Germano Veit Michel 2018-09-20 00:51:48 UTC
Created attachment 1484987 [details]
workaround

API script that migrates tagged VMs off the SPM host. It can be scheduled via cron to run before the backups start.

This way there is no need to disable the SPM role on some hosts or to pin the VMs.
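A sketch of the logic such a script would implement (the host/VM dicts here are hypothetical stand-ins for what an oVirt SDK query returns; the real script would use ovirt-engine-sdk calls to fetch hosts and trigger the migrations):

```python
# Find VMs carrying a backup tag that are running on the current SPM
# host, and pick other operational hosts to migrate them to, spreading
# the IO load round-robin across the non-SPM hosts.

def plan_migrations(hosts, vms, backup_tag="backup_proxy"):
    """Return a list of (vm_name, target_host) pairs that move
    tagged VMs off the SPM host."""
    spm = next(h["name"] for h in hosts if h["spm"])
    targets = [h["name"] for h in hosts if not h["spm"] and h["up"]]
    plan = []
    candidates = (v for v in vms
                  if backup_tag in v["tags"] and v["host"] == spm)
    for i, vm in enumerate(candidates):
        plan.append((vm["name"], targets[i % len(targets)]))
    return plan

hosts = [
    {"name": "host1", "spm": True,  "up": True},
    {"name": "host2", "spm": False, "up": True},
    {"name": "host3", "spm": False, "up": True},
]
vms = [
    {"name": "vsa-proxy", "host": "host1", "tags": ["backup_proxy"]},
    {"name": "web01",     "host": "host1", "tags": []},
]
print(plan_migrations(hosts, vms))  # [('vsa-proxy', 'host2')]
```

Run from cron before the backup window, this moves only the tagged, IO-heavy VMs, leaving everything else in place.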

Comment 2 Germano Veit Michel 2018-09-20 02:34:47 UTC
Created attachment 1484993 [details]
workaround

Comment 3 Martin Tessun 2018-09-26 11:23:33 UTC
Hi Germano,

another workaround would be to pin the specific VM to hosts that have an SPM priority of Never.

So e.g. the cluster has the following hosts / SPM priorities:
host1 - Never
host2 - Never
host3 - Normal
host4 - Normal

you would pin the specific VM that should not run where the SPM runs to host1 or host2.

Cheers,
Martin
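The check behind this pinning workaround can be sketched as follows (hypothetical helper; the assumption here is oVirt's representation of an SPM priority of Never as -1, with Normal defaulting to 5):

```python
# Verify that a VM's pinned hosts all have SPM priority "Never", so the
# VM can never end up on the same host as the SPM.
SPM_NEVER = -1

def pinning_keeps_vm_off_spm(vm_pinned_hosts, spm_priorities):
    """True if every host the VM is pinned to has SPM priority Never."""
    return all(spm_priorities[h] == SPM_NEVER for h in vm_pinned_hosts)

# Martin's example cluster: host1/host2 Never, host3/host4 Normal.
priorities = {"host1": -1, "host2": -1, "host3": 5, "host4": 5}
print(pinning_keeps_vm_off_spm(["host1", "host2"], priorities))  # True
print(pinning_keeps_vm_off_spm(["host2", "host3"], priorities))  # False
```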

Comment 4 Germano Veit Michel 2018-09-26 22:31:49 UTC
(In reply to Martin Tessun from comment #3)
> Hi Germano,
> 
> another workaround would be to pin the specific VM to hosts that have an
> SPM priority of Never.
> 
> So e.g. the cluster has the following hosts / SPM priorities:
> host1 - Never
> host2 - Never
> host3 - Normal
> host4 - Normal
> 
> you would pin the specific VM that should not run where the SPM runs to be
> running on host1 or host2.
> 
> Cheers,
> Martin

Hi Martin,

Yup, this is what I meant in comment #1. It works fine in larger environments with several hosts, but in smaller ones it is not ideal to restrict the SPM to just one or two hosts.

Thanks!

Comment 9 Martin Tessun 2019-10-25 12:45:44 UTC
Thanks for opening this RFE.

As there are many options (soft and hard) to achieve this, and this feature was not widely requested, I am closing this RFE.

Workarounds:
- Set the SPM priority to Never on the hosts that run VMs with high IO load
- In smaller environments, prioritize the SPM roles and give the high-IO-load VMs an affinity to the hosts with low SPM priority