Bug 1254818

Summary: [RFE] : Need VM affinity rule for "hypervisor pools" within a cluster
Product: Red Hat Enterprise Virtualization Manager
Reporter: Pawan kumar Vilayatkar <pvilayat>
Component: RFEs
Assignee: Martin Sivák <msivak>
Status: CLOSED ERRATA
QA Contact: Artyom <alukiano>
Severity: medium
Docs Contact:
Priority: urgent
Version: 3.5.3
CC: amureini, aperotti, dfediuck, gklein, istein, juwu, lpeer, lsurette, mavital, mgoldboi, msivak, pablo.iranzo, pvilayat, rbalakri, rgolan, srevivo, ykaul
Target Milestone: ovirt-4.0.0-rc
Keywords: FutureFeature, Triaged
Target Release: 4.0.0
Hardware: All
OS: Linux
URL: http://www.ovirt.org/documentation/sla/affinity-labels/
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
You can now use the REST API to assign affinity labels to hosts and virtual machines. A virtual machine can be scheduled on a host as long as the host has all the affinity labels the virtual machine has. The host may also have additional affinity labels that the virtual machine does not have.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 20:28:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1338799
Attachments:
Basic sanity test of labels for one VM and one Host

Description Pawan kumar Vilayatkar 2015-08-18 23:21:21 UTC
1. What is the nature and description of the request?

As a Windows Systems Engineer working for Red Hat Internal IT, I need RHEV to provide host node or CPU affinity (or pooling). Microsoft licensing requires that a license be purchased for each physical CPU on which Windows servers run, so we need to ensure that VMs are bound to a group of physical processors (or host nodes) so that we do not need to purchase licenses for every processor across all of our RHEV environments. The hosts need to run in a high availability mode.

2. Why do you need this? (List the business requirements here)

We need this feature to prevent Red Hat IT from spending millions of dollars on Microsoft Windows licensing for the small number of Windows hosts we have.

3. How would you like to achieve this? (List the functional requirements here)

CPU affinity or resource pooling, as is available in other virtualization solutions.


4. Do you have any specific time-line dependencies?

No specific timeline or requirements.


5. List any affected packages or components.

Unknown


6. Would you be able to assist in testing this functionality if implemented?

This is the choice of the Platform Operations team.


7. For each functional requirement listed in the previous question, can you test to confirm that the requirement is successfully implemented?

We are willing to provide testing for this functionality.

Comment 2 Doron Fediuck 2016-03-03 14:15:18 UTC
Hi,
we're working on an improved feature for 4.0.
However, thanks to bug 1107512, you can handle your request in 3.6 using
VM pinning to multiple hosts.

Comment 3 Doron Fediuck 2016-03-20 10:06:27 UTC
*** Bug 1266041 has been marked as a duplicate of this bug. ***

Comment 6 Yaniv Lavi 2016-05-09 10:58:45 UTC
oVirt 4.0 Alpha has been released, moving to oVirt 4.0 Beta target.

Comment 9 Martin Sivák 2016-06-02 11:32:33 UTC
Created attachment 1164031 [details]
Basic sanity test of labels for one VM and one Host

This script can be used to do basic sanity testing of the affinity label functionality. It requires a running ovirt-engine (preconfigured values: IP 127.0.0.1:8080, credentials admin@internal:letmein) with one VM (which does not have to be running) and one host.
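The scheduling rule the Doc Text describes — a host is an eligible target only if it carries every affinity label the VM carries, extra host labels notwithstanding — can be sketched in Python. This is an illustrative model of the filter logic, not the oVirt implementation; the function and label names are made up for the example:

```python
# Sketch of the affinity-label scheduling filter: a VM may run on a
# host only if the host's label set is a superset of the VM's labels.
# Additional host labels do not disqualify the host.
# (Illustrative only; not oVirt engine code.)

def hosts_eligible_for(vm_labels, hosts):
    """Return names of hosts whose labels include all of the VM's labels."""
    required = set(vm_labels)
    return [name for name, labels in hosts.items()
            if required <= set(labels)]

# Example: keep Windows guests pinned to a licensed pool of hypervisors
# by labeling both the VMs and the licensed hosts with 'windows'.
hosts = {
    "host1": {"windows"},           # licensed for Windows
    "host2": {"windows", "ssd"},    # extra labels are allowed
    "host3": set(),                 # unlabeled host is excluded
}
print(hosts_eligible_for({"windows"}, hosts))  # ['host1', 'host2']
```

A VM with no labels can be scheduled anywhere, which is why existing VMs are unaffected until labels are assigned to them.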

Comment 11 Artyom 2016-07-18 15:43:35 UTC
Verified on rhevm-4.0.2-0.2.rc1.el7ev.noarch
According to polarion plan https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/testrun?id=4%5F0%5FSLA%5FVMS%5Fto%5FHosts%5FLabels%5Frun

Comment 13 errata-xmlrpc 2016-08-23 20:28:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html