Description of problem:
Updating the CPU pinning or NUMA node settings via the engine UI for a running VM is applied immediately to the DB, even though the VM continues to run with the old configuration. Hot-plugging these settings is not supported, and no UI popup warns that the change cannot take effect on the running VM.

Version-Release number of selected component (if applicable):
master branch

How reproducible:
100%

Steps to Reproduce:
1. Choose a running VM and open the "Edit VM" dialog.
2. Go to the "Host" tab and change the "Configure NUMA" setting in any way.
3. Go to the "Resource Allocation" tab and change the "CPU Pinning topology" field value.
4. Click OK to save.

Actual results:
The configuration is saved in the DB even though the VM keeps running with the previous configuration.

Expected results:
The change should be handled by the next-run configuration model, and a pop-up warning that this configuration will take effect only after a restart should be displayed.
The fix should be as simple as adding onStatuses = VMStatus.Down to the @EditableVmField annotation on these two fields.
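To illustrate the suggested fix, here is a minimal, self-contained sketch of how an onStatuses restriction on such an annotation works. The EditableVmField annotation, VMStatus enum, and VmStatic field below are simplified stand-ins for the real ovirt-engine types, not their actual definitions:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Simplified stand-in for the engine's VM status enum.
enum VMStatus { Up, Down, Paused }

// Simplified stand-in for @EditableVmField: onStatuses lists the VM
// statuses in which the field may be edited directly; for all other
// statuses the engine would defer the change to the next-run config.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface EditableVmField {
    VMStatus[] onStatuses() default {};
}

class VmStatic {
    // Restricting edits to Down VMs means a change made while the VM
    // is running must go through the next-run configuration model.
    @EditableVmField(onStatuses = VMStatus.Down)
    String cpuPinning;
}

public class Demo {
    public static void main(String[] args) throws Exception {
        Field f = VmStatic.class.getDeclaredField("cpuPinning");
        EditableVmField ann = f.getAnnotation(EditableVmField.class);
        // A validator would compare the VM's current status against
        // this list before allowing an immediate DB update.
        System.out.println(ann.onStatuses()[0]); // prints Down
    }
}
```

With this restriction in place, an edit submitted while the VM is Up would fail the status check and be stored as a next-run configuration instead of being written directly to the running VM's row.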
Severity...?
*** Bug 1550021 has been marked as a duplicate of this bug. ***
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both
Missed the latest build
Verified with:
- ovirt-engine-4.4.2.3-0.6.el8ev.noarch
- libvirt-6.0.0-25.module+el8.2.1+7154+47ffd890.x86_64
- vdsm-4.40.26-1.el8ev.x86_64

Verification steps:
1. Configure a NUMA node on a running VM under 'Hosts > Numa Pinning' (NUMA Node Count == 1).
2. Check the current CPU with 'virsh -r vcpuinfo <vm_name>' (CPU is 6 in my case).
3. Set 'CPU Pinning topology' under 'VM Edit > Resource Allocation > CPU Pinning topology' (0#12 in my case).

Result:
- After step 3, the CPU remains 6 and the next-run icon is added to the VM. The CPU changed to 12 once the VM was rebooted.
This bugzilla is included in oVirt 4.4.2 release, published on September 17th 2020. Since the problem described in this bug report should be resolved in oVirt 4.4.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.