Description of problem:

The engine defaults to the 'Existing' Auto Pinning Policy for HP VMs. However, a plain Edit -> OK on a VM does not actually keep the existing pinning: it overwrites it. To make it worse, it pins all VMs to the first N pCPUs of the host, causing performance issues.

I noticed this patch on master:

5fa363215fa webadmin: show cpu pinning with auto pin

So I upgraded to 4.4.6.5 from the tlv server (see below) and tested again, and the behaviour continues. The difference is that the current CPU pinning topology is no longer hidden, but the setting is still wiped.

For example:

1. HP VM without pinning:

# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT vm_name,cpu_pinning FROM vm_static WHERE vm_name = 'Test-2'"
 vm_name | cpu_pinning
---------+-------------
 Test-2  |

2. Set to 'Do Not Change' and pin manually:

# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT vm_name,cpu_pinning FROM vm_static WHERE vm_name = 'Test-2'"
 vm_name | cpu_pinning
---------+-------------
 Test-2  | 0#2_1#4

3. Click Edit -> OK (don't change anything):

# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT vm_name,cpu_pinning FROM vm_static WHERE vm_name = 'Test-2'"
 vm_name | cpu_pinning
---------+-------------
 Test-2  | 0#1_1#3

NOTE: At step 3, the Auto Pinning Policy is shown as 'Existing' and the pinning "0#2_1#4" is visible on the Resource Allocation tab. However, after clicking OK the configuration is gone. This warning is also shown, which makes no sense since nothing was changed:

vmCpuPinningClearMessage=The current configuration of the VM does not allow cpu pinning.\nThe pinning topology will be lost when the VM is saved.\n\nAre you sure you want to continue?
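For reference, the cpu_pinning values above ("0#2_1#4", "0#1_1#3") are underscore-separated vCPU#pCPU pairs. A minimal Python sketch of a parser for this simple form (the function name is mine, not part of oVirt; the full oVirt syntax also allows pCPU ranges and exclusions such as "0#1-4,^2", which this sketch does not handle):

```python
def parse_cpu_pinning(pinning):
    """Parse a simple oVirt cpu_pinning string such as "0#2_1#4".

    Each underscore-separated entry maps one vCPU to one pCPU
    (vCPU#pCPU). Returns {} for an empty/unset pinning column.
    Only the single-pCPU form seen in the outputs above is covered;
    ranges and exclusions ("0#1-4,^2") are out of scope here.
    """
    if not pinning:
        return {}
    mapping = {}
    for entry in pinning.split("_"):
        vcpu, pcpu = entry.split("#")
        mapping[int(vcpu)] = int(pcpu)
    return mapping
```

With this, "0#2_1#4" parses to {0: 2, 1: 4}, which makes it easy to see that the post-Edit value "0#1_1#3" is a different topology, not the preserved one.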
So there are two problems here that can cause a lot of frustration and performance issues:
a) wiping the existing configuration without the user knowing
b) pinning everything to the same physical CPUs

Version-Release number of selected component (if applicable):
ovirt-engine-4.4.6.5-0.17.el8ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a few HP VMs with 2 vNUMA nodes
2. Pin them to different physical CPUs
3. Click Edit -> OK on all of them (don't change anything)
4. All VMs are now pinned to the same physical CPUs

Actual results:
* Pinning configuration changed

Expected results:
* Pinning configuration preserved
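Problem (b) can be spotted from the vm_static table alone. A minimal sketch (function name is mine, not part of oVirt) that takes a {vm_name: cpu_pinning} mapping, as returned by the engine-psql query above, and reports pCPUs claimed by more than one VM, assuming the simple "vCPU#pCPU_vCPU#pCPU" pinning form:

```python
from collections import defaultdict

def find_pcpu_overlaps(pinnings):
    """Report pCPUs pinned by more than one VM.

    pinnings: {vm_name: cpu_pinning_string}, e.g. the result of
    SELECT vm_name,cpu_pinning FROM vm_static. Returns
    {pcpu: [vm_names]} for every pCPU used by two or more VMs,
    which is exactly the symptom in step 4 above.
    """
    users = defaultdict(set)
    for vm, pinning in pinnings.items():
        if not pinning:
            continue
        for entry in pinning.split("_"):
            _vcpu, pcpu = entry.split("#")
            users[int(pcpu)].add(vm)
    return {pcpu: sorted(vms) for pcpu, vms in users.items() if len(vms) > 1}
```

After the bug hits, every affected HP VM ends up with the same pinning string, so every pCPU in that string shows up as shared between all of them.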
Yeah, it seems wrong to change the auto-pinning policy automatically when editing the VM.
Verified on ovirt-engine-4.4.6.6-0.10.el8ev.noarch. Configure a VM with, for example, 4 CPUs, 2 NUMA nodes, and vCPU pinning 0#0_1#1_2#2_3#3. Run and shut down the VM: nothing changes for NUMA or CPU pinning while the policy remains 'Don't change'. Only if the policy is changed to 'Existing' are the vCPU pinning and NUMA configuration replaced by the pinning computed by the 'Existing' policy.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: RHV Manager security update (ovirt-engine) [ovirt-4.4.6]), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2179