Bug 1954878 - [RFE] Auto Pinning Policy: improve tooltip description and policy names
Summary: [RFE] Auto Pinning Policy: improve tooltip description and policy names
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.4.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.7
Target Release: 4.4.7
Assignee: Liran Rotenberg
QA Contact: Polina
URL:
Whiteboard:
Depends On:
Blocks: 1963681
 
Reported: 2021-04-29 00:39 UTC by Germano Veit Michel
Modified: 2021-07-22 15:13 UTC
CC: 6 users

Fixed In Version: ovirt-engine-4.4.7.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-22 15:12:33 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:2865 0 None None None 2021-07-22 15:13:39 UTC
oVirt gerrit 114956 0 master MERGED core+webadmin: rename auto pinning policies 2021-06-01 12:59:26 UTC

Internal Links: 1919804 1970426

Description Germano Veit Michel 2021-04-29 00:39:43 UTC
Description of problem:

Enabling the Auto Pinning Policy overwrites the user-defined number of vCPUs and NUMA nodes of the VM based on the host topology.
This does not sound right.

For example:
Host: 4 NUMA nodes with 8 cores each
VM  : I want 16 cores, over 2 NUMA nodes

I create a VM with 16 cores and 2 NUMA nodes, and pin it to the host with 4 NUMA nodes.

Then I go to VM -> Edit -> Host -> Auto Pinning Policy -> Adjust -> OK

After that, the VM is configured with 28 cores over 4 NUMA nodes (7 cores each). I did not change the number of vCPUs or NUMA nodes myself, and I would not expect the engine to do so.
It should keep my settings and pin the 2 vNUMA nodes onto 2 of the host's 4 pNUMA nodes.
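
To make the expectation concrete, here is a rough sketch (plain Python, not engine code) of the pinning I would expect, written in the same 'vCPU#pCPU' pairs-joined-by-'_' format the engine stores in cpu_pinning (see the DB output below). The vNUMA->pNUMA assignment is hypothetical, just one sensible mapping:

HOST_CORES_PER_NODE = 8        # host: 4 pNUMA nodes x 8 cores
VM_VCPUS, VM_VNUMA = 16, 2     # my VM: 16 vCPUs over 2 vNUMA nodes
vcpus_per_vnode = VM_VCPUS // VM_VNUMA

pairs = []
for vcpu in range(VM_VCPUS):
    vnode = vcpu // vcpus_per_vnode                              # vNUMA 0 or 1
    pcpu = vnode * HOST_CORES_PER_NODE + vcpu % vcpus_per_vnode  # same-index pNUMA
    pairs.append("%d#%d" % (vcpu, pcpu))

print("_".join(pairs))   # 0#0_1#1_..._15#15: 2 vNUMA pinned onto 2 pNUMA, topology untouched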

Here is an example with a freshly created VM (defaulting to 1 CPU), just to illustrate:

1. Create a VM (fill in the name and pin it to a NUMA host, leave everything else default)

2. VM has 1 CPU

engine=# select vm_name,cpu_pinning,num_of_sockets,cpu_per_socket,threads_per_cpu from vm_static where vm_name = 'TEST';
 vm_name | cpu_pinning | num_of_sockets | cpu_per_socket | threads_per_cpu 
---------+-------------+----------------+----------------+-----------------
 TEST    |             |              1 |              1 |               1
(1 row)

3. VM -> Edit -> Host -> Auto Pinning Policy -> Adjust -> OK

4. VM now has 28 CPUs (?)

engine=# select vm_name,cpu_pinning,num_of_sockets,cpu_per_socket,threads_per_cpu from vm_static where vm_name = 'TEST';
 vm_name |                                                                      cpu_pinning                                                                      | num_of_sockets | cpu_per_socket | threads_per_cpu 
---------+-------------------------------------------------------------------------------------------------------------------------------------------------------+----------------+----------------+-----------------
 TEST    | 0#1_1#2_2#3_3#4_4#5_5#6_6#7_7#9_8#10_9#11_10#12_11#13_12#14_13#15_14#17_15#18_16#19_17#20_18#21_19#22_20#23_21#25_22#26_23#27_24#28_25#29_26#30_27#31 |              4 |              7 |               1
(1 row)
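
The cpu_pinning string is easier to read decoded; a quick sketch (plain Python, not engine code) of what the Adjust policy actually produced:

pinning = ("0#1_1#2_2#3_3#4_4#5_5#6_6#7_7#9_8#10_9#11_10#12_11#13_12#14_"
           "13#15_14#17_15#18_16#19_17#20_18#21_19#22_20#23_21#25_22#26_"
           "23#27_24#28_25#29_26#30_27#31")
mapping = dict(pair.split("#") for pair in pinning.split("_"))  # vCPU -> pCPU
pinned = {int(p) for p in mapping.values()}

print(len(mapping))                       # 28 vCPUs pinned
print(sorted(set(range(32)) - pinned))    # [0, 8, 16, 24] left unpinned

The unpinned pCPUs (0, 8, 16, 24) are exactly one per pNUMA node, which suggests Adjust sizes the VM to the host topology minus one core per node, matching the 4 sockets x 7 cores stored above.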

It's reproducible in many ways:
* Starting with 4 CPUs                    -> Goes to 28 over 4 vNUMA cells on the VM
* Starting with 32 CPUs                   -> Goes to 28 over 4 vNUMA cells on the VM
* Starting with 16 CPUs and 2 vNUMA cells -> Goes to 28 over 4 vNUMA cells on the VM
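
If the one-core-per-node reading above is right, the constant result is just the host topology arithmetic:

HOST_NODES, CORES_PER_NODE = 4, 8
print(HOST_NODES * (CORES_PER_NODE - 1))   # 28 vCPUs over 4 vNUMA, whatever the VM started with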

Version-Release number of selected component (if applicable):
rhvm-4.4.6.5-0.17.el8ev.noarch

How reproducible:
Always

Steps to Reproduce:
As above.

Actual results:
* It overwrites the number of vCPUs and NUMA nodes of the VM

Expected results:
* Keep the user-defined number of vCPUs and NUMA nodes of the VM, and pin them correctly to the host. If the VM has 2 NUMA nodes, pin it to 2 of the host's 4 NUMA nodes; don't change the VM to 4 NUMA nodes or alter its CPU count.

Comment 1 Arik 2021-04-29 06:47:37 UTC
Germano, this is what is expected from the 'adjust' policy (it adjusts the CPU and NUMA topologies to the host).
If you wish to preserve the current topologies, the 'existing' policy should be used.

Comment 2 Michal Skrivanek 2021-04-29 07:21:45 UTC
(In reply to Arik from comment #1)
> Germano, this is what is expected from the 'adjust' policy (it adjusts the
> CPU and NUMA topologies to the host).
> If you wish to preserve the current topologies, the 'existing' policy should
> be used.

Does it need an (i) tooltip, perhaps?

Comment 3 Liran Rotenberg 2021-04-29 07:29:38 UTC
Sounds like the Adjust policy did what it's supposed to.

(In reply to Michal Skrivanek from comment #2)
> (In reply to Arik from comment #1)
> > Germano, this is what is expected from the 'adjust' policy (it adjusts the
> > CPU and NUMA topologies to the host).
> > If you wish to preserve the current topologies, the 'existing' policy
> > should be used.
> 
> Does it need an (i) tooltip, perhaps?

There is one.

Comment 4 Germano Veit Michel 2021-04-30 01:43:35 UTC
OK, thanks! I see now what 'Adjust' really means. However, the fact that both the customer and I got this wrong means we have to improve it :)

The current tooltip is:
autoPinningLabelExplanation=VM pinned to a host can use the policies for automatic CPU and NUMA pinning. Existing keeps the current CPU topology. Adjust will also maximize the CPU topology according to the host. By default High Performance VM will be set with Existing policy.

Assuming I got this right, may I suggest an improvement?
"VM pinned to a host can use the policies for automatic CPU and NUMA pinning. 'Do Not Change' keeps the current vCPU and vNUMA node count, along with the pinning configuration. 'Existing' keeps the current vCPU and vNUMA node count of the VM and pins them to pCPU and pNUMA of the host. 'Adjust' resizes the vCPU and vNUMA counts of the Virtual Machine to match the ones of the Host, and also pins them to the Host resources. By default High Performance VM will be set with Existing policy, but if manual pinning configuration exists it will be kept instead.

We could perhaps also rename them, maybe something like:
'Do not change' -> 'Existing'
'Existing' -> 'Pin'
'Adjust' -> 'Resize and Pin'

Does this make more sense?

Comment 6 Germano Veit Michel 2021-04-30 03:41:47 UTC
(In reply to Arik from comment #1)
> If you wish to preserve the current topologies, the 'existing' policy should be used
It still changes the vNUMA count of the VM, but keeps the vCPU count... Is it a bug?

Comment 10 Arik 2021-05-04 19:49:25 UTC
(In reply to Germano Veit Michel from comment #4)
> Assuming I got this right, may I suggest an improvement?
> "VM pinned to a host can use the policies for automatic CPU and NUMA
> pinning. 'Do Not Change' keeps the current vCPU and vNUMA node count, along
> with the pinning configuration. 'Existing' keeps the current vCPU and vNUMA
> node count of the VM and pins them to pCPU and pNUMA of the host. 'Adjust'
> resizes the vCPU and vNUMA counts of the Virtual Machine to match the ones
> of the Host, and also pins them to the Host resources. By default High
> Performance VM will be set with Existing policy, but if manual pinning
> configuration exists it will be kept instead.

As for the very last part ("but if manual pinning configuration exists it will be kept instead"), I'd say that in that case the selected auto pinning policy should be the one that doesn't make any change (what is 'Do not change' today and 'Existing' in your suggestion below).

> 
> We could perhaps also rename them, maybe something like?
> 'Do not change' -> 'Existing'
> 'Existing' -> 'Pin'
> 'Adjust' -> 'Resize and Pin'
> 
> Does this make more sense?

I like the idea of changing 'Existing' to 'Pin' and 'Adjust' to 'Resize and Pin'.
I see why you propose 'Existing' for the first policy (as "use existing settings"), but I think it's not that clear when it appears as the description of an auto-pinning policy. Maybe something like 'Do nothing' or 'None' would be better?

Comment 12 Germano Veit Michel 2021-05-06 00:03:42 UTC
As discussed, opened RFE for 'Existing' to keep the vNUMA count: https://bugzilla.redhat.com/show_bug.cgi?id=1957526

Maybe instead of closing it, let's use this BZ to track the tooltip changes and the renaming we are discussing?

(In reply to Arik from comment #10)
> I like the idea of changing 'Existing' to 'Pin' and 'Adjust' to 'Resize and
> Pin'
> I see why you propose 'Existing' for the first policy (as "use existing
> settings") but I think it's not that clear when it appears as the
> description of auto-pinning policy. Maybe something like 'Do nothing' or
> 'None' would be better?

Yup, perhaps 'None' is even better.

So we get 'None', 'Pin' and 'Resize and Pin'?

And the tooltip like:
"VM pinned to a host can use the policies for automatic CPU and NUMA pinning. 'None' keeps the current vCPU and vNUMA node count, along with the pinning configuration. 'Pin' keeps the current vCPU count of the VM and pins them to pCPU and pNUMA of the host, expanding the vNUMA count of the VM to match the one of the Host. 'Resize and Pin' resizes the vCPU and vNUMA counts of the Virtual Machine to match the ones of the Host, and also pins them to the Host resources. By default High Performance VM will be set with Existing policy, but if manual pinning configuration exists it will be kept instead.

It will become more coherent once BZ 1957526 is implemented; then 'Pin' will not resize anything (vNUMA).

Comment 13 Liran Rotenberg 2021-05-26 14:00:26 UTC
Although `Existing` is being renamed to `Pin`, the tooltip won't show the `Pin` policy or the High Performance parts; they are blocked until BZ 1957551 is resolved.

Comment 18 Polina 2021-07-05 15:11:25 UTC
Verifying the bug on ovirt-engine-4.4.7.6-0.11.el8ev.noarch.

In this version there are only two options: 'None' and 'Resize and Pin'.

Choosing 'Resize and Pin' brings up the tooltip message:

CPU pinning topology will be lost
The current configuration of the VM does not allow cpu pinning.
The pinning topology will be lost when the VM is saved.

Comment 22 errata-xmlrpc 2021-07-22 15:12:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV Manager (ovirt-engine) security update [ovirt-4.4.7]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2865

