1. What is the nature and description of the request?
As a Windows Systems Engineer working for Red Hat Internal IT, I need RHEV to provide host node or CPU affinity (or pooling). Microsoft licensing requires that a license be purchased for each physical CPU that Windows servers run on, so we need to ensure that VMs are bound to a group of physical processors (or host nodes) so that we do not need to purchase licenses for every processor across all of our RHEV environments. The hosts need to run in a high availability mode.

2. Why do you need this? (List the business requirements here)
We need this feature to prevent Red Hat IT from spending millions of dollars on Microsoft Windows licensing for the small number of Windows hosts we have.

3. How would you like to achieve this? (List the functional requirements here)
CPU affinity or resource pooling, as is available in other virtualization solutions.

4. Do you have any specific time-line dependencies?
No specific timeline or requirements.

5. List any affected packages or components.
Unknown.

6. Would you be able to assist in testing this functionality if implemented?
This is the choice of the Platform Operations team.

7. For each functional requirement listed in the previous question, can you test to confirm the requirement is successfully implemented?
We are willing to provide testing for this functionality.
Hi, we're working on an improved feature for 4.0. In the meantime, thanks to bug 1107512, you can already handle your request in 3.6 using VM pinning to multiple hosts (see the sketch below).
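For reference, here is a minimal sketch of what pinning a VM to a subset of hosts could look like through the oVirt Python SDK v4. This is an assumption-laden illustration, not the attached tooling: ovirtsdk4 is assumed to be installed, the engine URL, credentials, VM name, and host names are placeholders, and the affinity value you want depends on the migration behaviour you need.

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details; adjust to your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

system = connection.system_service()

# Look up the Windows VM and the subset of hosts covered by the license.
vm = system.vms_service().list(search='name=win-vm-01')[0]
licensed_hosts = system.hosts_service().list(search='name=host1 or name=host2')

# Restrict the VM's placement policy to the licensed hosts. The VM can
# still move between hosts in this list, so HA is kept within the pool.
system.vms_service().vm_service(vm.id).update(
    types.Vm(
        placement_policy=types.VmPlacementPolicy(
            affinity=types.VmAffinity.MIGRATABLE,
            hosts=[types.Host(id=h.id) for h in licensed_hosts],
        ),
    ),
)

connection.close()
```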
*** Bug 1266041 has been marked as a duplicate of this bug. ***
oVirt 4.0 Alpha has been released, moving to oVirt 4.0 Beta target.
Created attachment 1164031 [details]
Basic sanity test of labels for one VM and one Host

This script can be used to do basic sanity testing of the affinity label functionality. It requires a running ovirt-engine (preconfigured values: ip 127.0.0.1:8080, admin@internal:letmein) with one VM (which does not have to be running) and one Host.
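For anyone reproducing this without the attached script, a minimal sketch of the same kind of sanity check via the oVirt Python SDK v4 might look like the following. The SDK (ovirtsdk4) and the specific service calls are assumptions on my part rather than a copy of the attachment; the label name is arbitrary and the connection values simply mirror the preconfigured ones above.

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connection values mirror the preconfigured ones mentioned above.
connection = sdk.Connection(
    url='http://127.0.0.1:8080/ovirt-engine/api',
    username='admin@internal',
    password='letmein',
)

system = connection.system_service()
labels_service = system.affinity_labels_service()

# Create a label (the name is arbitrary for the sanity check).
label = labels_service.add(types.AffinityLabel(name='sanity_label'))
label_service = labels_service.label_service(label.id)

# Pick any one host and one VM from the environment.
host = system.hosts_service().list()[0]
vm = system.vms_service().list()[0]

# Assign the label to both objects, then read the assignments back.
label_service.hosts_service().add(types.Host(id=host.id))
label_service.vms_service().add(types.Vm(id=vm.id))

assert any(h.id == host.id for h in label_service.hosts_service().list())
assert any(v.id == vm.id for v in label_service.vms_service().list())

# Clean up the label so the check can be re-run.
label_service.remove()

connection.close()
```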
Verified on rhevm-4.0.2-0.2.rc1.el7ev.noarch, according to Polarion plan:
https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/testrun?id=4%5F0%5FSLA%5FVMS%5Fto%5FHosts%5FLabels%5Frun
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-1743.html