Bug 1573218 - Updating CPU pinning setting or NUMA nodes setting for a running VM requires VM restart (should be updated only for VM next run)
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.2.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Target Milestone: ovirt-4.4.2
Assignee: Steven Rosenberg
QA Contact: Beni Pelled
Duplicates: 1550021 (view as bug list)
Depends On:
Reported: 2018-04-30 14:08 UTC by Sharon Gratch
Modified: 2020-09-18 09:07 UTC
CC: 8 users

Fixed In Version: rhv-4.4.2-3, ovirt-engine-
Doc Type: Bug Fix
Doc Text:
Cause: NUMA node and CPU pinning settings were not updated properly when modified while the virtual machine was running, because they were not supported by the Next Run mechanism, which preserves changes until the virtual machine is brought down.
Consequence: Changes to the NUMA nodes and CPU pinning were not saved to the database.
Fix: Changes to the NUMA nodes and CPU pinning are now preserved via the Next Run mechanism, so that the values can be written to the database when the virtual machine is brought down.
Result: NUMA node and CPU pinning changes are preserved and written to the database when the virtual machine is brought down.
Clone Of:
Last Closed: 2020-09-18 07:11:53 UTC
oVirt Team: Virt
pm-rhel: ovirt-4.4+
mtessun: planning_ack+
ahadas: devel_ack+
mavital: testing_ack+


System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1550021 0 low CLOSED rhvm GUI do not prompt warning message when the numa settings can not take effect immediately 2021-02-22 00:41:40 UTC
oVirt gerrit 110377 0 master MERGED core: NUMA changes to be applied on Next Run 2020-12-10 14:20:01 UTC

Internal Links: 1550021

Description Sharon Gratch 2018-04-30 14:08:17 UTC
Description of problem:
Updating the CPU pinning or NUMA nodes setting via the engine UI for a running VM takes effect immediately in the DB, even though the VM continues to run with the old configuration. Hot plugging of these settings is not supported, and no UI popup warns that the change is not allowed while the VM is running.

Version-Release number of selected component (if applicable):
master branch

How reproducible:

Steps to Reproduce:
1. Choose a running VM and open the "Edit VM" dialog.
2. Go to the "Host" tab and change the "Configure NUMA" setting in any way.
3. Go to the "Resource Allocation" tab and change the "CPU Pinning topology" field value.
4. Click OK to save.

Actual results:
The configuration is saved in the DB even though the VM keeps running with the previous configuration.

Expected results:
It should be handled by the next-run configuration model, and a pop-up warning that this configuration will be applied after a restart should be displayed.
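The expected next-run behavior can be sketched in a few lines. All names below (VmConfig, Vm, NextRunConfigDemo) are illustrative, not ovirt-engine's actual classes: an edit requested while the VM runs is parked as a pending configuration, and only becomes the active configuration after the VM is power-cycled.

```java
import java.util.Optional;

// Hypothetical sketch of the "next run" configuration model: changes made
// while the VM is up are deferred and applied on the next start.
public class NextRunConfigDemo {

    // Illustrative snapshot of the settings this bug is about.
    record VmConfig(String cpuPinning, int numaNodeCount) {}

    static class Vm {
        boolean running = true;
        VmConfig active = new VmConfig("0#6", 1);
        Optional<VmConfig> nextRun = Optional.empty();

        void update(VmConfig requested) {
            if (running) {
                // Defer: the VM keeps running with the old configuration.
                nextRun = Optional.of(requested);
            } else {
                // VM is down: the change can be applied immediately.
                active = requested;
            }
        }

        void restart() {
            // A power cycle promotes the pending configuration to active.
            nextRun.ifPresent(cfg -> active = cfg);
            nextRun = Optional.empty();
            running = true;
        }
    }

    public static void main(String[] args) {
        Vm vm = new Vm();                          // up, pinned to CPU 6
        vm.update(new VmConfig("0#12", 2));        // edit while running: deferred
        System.out.println(vm.active.cpuPinning()); // still "0#6"
        vm.restart();                              // power cycle applies it
        System.out.println(vm.active.cpuPinning()); // now "0#12"
    }
}
```

The pending snapshot also explains the next-run icon in the UI: its presence simply means nextRun is non-empty.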

Comment 1 Michal Skrivanek 2018-05-02 15:58:21 UTC
should be as simple as adding onStatuses = VMStatus.Down to @EditableVmField for these two fields.
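The suggestion above can be illustrated with a minimal, self-contained sketch. The annotation and class names echo ovirt-engine's (@EditableVmField, VmStatic), but this is a hypothetical reimplementation, not the actual engine source: a field annotated with onStatuses = DOWN may only be changed immediately while the VM is down; in any other status the change has to go through next-run handling.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

// Simplified stand-in for oVirt's VM status enum.
enum VmStatus { UP, DOWN }

// Hypothetical re-creation of ovirt-engine's @EditableVmField annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface EditableVmField {
    // Statuses in which the field may be edited immediately; empty means
    // the field is editable in any status.
    VmStatus[] onStatuses() default {};
}

// Illustrative VM configuration holder with the two fields from this bug.
class VmStatic {
    @EditableVmField(onStatuses = VmStatus.DOWN)
    String cpuPinning;

    @EditableVmField(onStatuses = VmStatus.DOWN)
    String numaTuneMode;
}

public class NextRunCheck {
    /** True if the field may be updated immediately in the given status. */
    public static boolean editableNow(Class<?> type, String fieldName, VmStatus status) {
        try {
            Field f = type.getDeclaredField(fieldName);
            EditableVmField ann = f.getAnnotation(EditableVmField.class);
            if (ann == null || ann.onStatuses().length == 0) {
                return true; // unrestricted field
            }
            for (VmStatus allowed : ann.onStatuses()) {
                if (allowed == status) {
                    return true;
                }
            }
            return false; // must be deferred to the next-run configuration
        } catch (NoSuchFieldException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(editableNow(VmStatic.class, "cpuPinning", VmStatus.UP));   // false
        System.out.println(editableNow(VmStatic.class, "cpuPinning", VmStatus.DOWN)); // true
    }
}
```

In this sketch the update path would call editableNow() for each changed field and route any "false" result into the next-run snapshot instead of the live VM.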

Comment 2 Yaniv Kaul 2018-05-14 15:57:38 UTC

Comment 3 Andrej Krejcir 2018-12-04 15:31:49 UTC
*** Bug 1550021 has been marked as a duplicate of this bug. ***

Comment 4 Ryan Barry 2019-01-21 14:53:33 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 5 Arik 2020-08-09 08:41:11 UTC
Missed the latest build

Comment 6 Beni Pelled 2020-08-26 13:11:47 UTC
Verified with:
- ovirt-engine-
- libvirt-6.0.0-25.module+el8.2.1+7154+47ffd890.x86_64
- vdsm-4.40.26-1.el8ev.x86_64

Verification steps:
1. Configure NUMA node on a running VM under 'Hosts > Numa Pinning' (NUMA Node Count == 1)
2. Check current CPU with 'virsh -r vcpuinfo <vm_name>' (CPU is 6 in my case)
3. Set 'CPU Pinning topology' under 'VM Edit > Resource Allocation > CPU Pinning topology' (0#12 in my case)

- After step 3, the CPU remains 6 and the next-run icon is added to the VM; the CPU changed to 12 once the VM was rebooted.

Comment 7 Sandro Bonazzola 2020-09-18 07:11:53 UTC
This bugzilla is included in oVirt 4.4.2 release, published on September 17th 2020.

Since the problem described in this bug report should be resolved in oVirt 4.4.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.
