Bug 1573218

Summary: Updating CPU pinning setting or NUMA nodes setting for a running VM requires VM restart (should be updated only for VM next run)
Product: [oVirt] ovirt-engine Reporter: Sharon Gratch <sgratch>
Component: BLL.Virt    Assignee: Steven Rosenberg <srosenbe>
Status: CLOSED CURRENTRELEASE QA Contact: Beni Pelled <bpelled>
Severity: low Docs Contact:
Priority: medium    
Version: 4.2.3    CC: ahadas, bpelled, bugs, mavital, michal.skrivanek, mtessun, sgratch, yalzhang
Target Milestone: ovirt-4.4.2    Keywords: EasyFix
Target Release: 4.4.2.3    Flags: pm-rhel: ovirt-4.4+
mtessun: planning_ack+
ahadas: devel_ack+
mavital: testing_ack+
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: rhv-4.4.2-3, ovirt-engine-4.4.2.3 Doc Type: Bug Fix
Doc Text:
Cause: NUMA node and CPU pinning settings were not updated properly when modified while the virtual machine was running; they were not supported by the next-run mechanism that preserves changes until the virtual machine is brought down.
Consequence: Changes to the NUMA nodes and CPU pinning were not saved to the database.
Fix: Changes to the NUMA nodes and CPU pinning are now preserved via the next-run mechanism so that the values are updated in the database when the virtual machine is brought down.
Result: NUMA node and CPU pinning changes are preserved and written to the database when the virtual machine is brought down.
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-09-18 07:11:53 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Virt RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Sharon Gratch 2018-04-30 14:08:17 UTC
Description of problem:
Updating the CPU pinning or NUMA nodes setting via the engine UI for a running VM is applied immediately to the DB, even though the VM continues to run with the old configuration. No hot-plug of these settings is supported, and no UI popup warns that this is not allowed.


Version-Release number of selected component (if applicable):
master branch

How reproducible:
100%

Steps to Reproduce:
1. Choose a running VM and open the "Edit VM" dialog.
2. Go to the "Host" tab and change the "Configure NUMA" setting in any way.
3. Go to the "Resource Allocation" tab and change the "CPU Pinning topology" field value.
4. Click OK to save.

Actual results:
The configuration is saved in the DB even though the VM keeps running with the previous configuration.

Expected results:
The change should be handled by the next-run configuration model, and a pop-up should warn that the new configuration will only take effect after the VM is restarted.

Comment 1 Michal Skrivanek 2018-05-02 15:58:21 UTC
Should be as simple as adding onStatuses = VMStatus.Down to @EditableVmField for these two fields.
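
For illustration, here is a minimal, self-contained sketch of the approach suggested above, assuming the @EditableVmField annotation accepts an onStatuses array of VMStatus values as comment 1 indicates. The annotation, enum, and field names below are simplified stand-ins, not the actual ovirt-engine source.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical stand-in for the VM status enum; only the values used here.
    enum VMStatus { Up, PoweringDown, Down }

    // Hypothetical stand-in for the ovirt-engine @EditableVmField annotation,
    // assuming it lists the statuses in which a field may be edited in place.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface EditableVmField {
        VMStatus[] onStatuses() default {};
    }

    // Sketch of the relevant part of the VM entity: restricting these two
    // fields to Down means a change made while the VM is running is deferred
    // to the next-run configuration instead of taking effect immediately.
    class VmBaseSketch {
        @EditableVmField(onStatuses = VMStatus.Down)
        String cpuPinning;

        @EditableVmField(onStatuses = VMStatus.Down)
        String numaTuneMode;
    }

With such a restriction in place, the generic editable-field check can compare the VM's current status against onStatuses and route disallowed edits through the next-run configuration.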

Comment 2 Yaniv Kaul 2018-05-14 15:57:38 UTC
Severity...?

Comment 3 Andrej Krejcir 2018-12-04 15:31:49 UTC
*** Bug 1550021 has been marked as a duplicate of this bug. ***

Comment 4 Ryan Barry 2019-01-21 14:53:33 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 5 Arik 2020-08-09 08:41:11 UTC
Missed the latest build

Comment 6 Beni Pelled 2020-08-26 13:11:47 UTC
Verified with:
- ovirt-engine-4.4.2.3-0.6.el8ev.noarch
- libvirt-6.0.0-25.module+el8.2.1+7154+47ffd890.x86_64
- vdsm-4.40.26-1.el8ev.x86_64

Verification steps:
1. Configure a NUMA node on a running VM under 'Hosts > Numa Pinning' (NUMA Node Count == 1)
2. Check the current CPU with 'virsh -r vcpuinfo <vm_name>' (CPU is 6 in my case)
3. Set 'CPU Pinning topology' under 'VM Edit > Resource Allocation > CPU Pinning topology' (0#12 in my case)

Result:
- After step 3, the CPU remains 6 and the next-run icon is added to the VM; the CPU changed to 12 once the VM was rebooted.

Comment 7 Sandro Bonazzola 2020-09-18 07:11:53 UTC
This bugzilla is included in the oVirt 4.4.2 release, published on September 17th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.