Red Hat Bugzilla – Bug 614288
RFE: Option to perform CPU pinning on a running VM, (aka #virsh vcpupin)
Last modified: 2011-05-19 09:46:16 EDT
Description of problem:
Assign the NUMA-generated pinning to a running VM; this should not require a guest reboot to take effect, matching the behavior of `virsh vcpupin` in libvirt.
When given a vCPU number larger than the number of physical CPUs, it should report an error such as 'error: vcpupin: Invalid vCPU number.' rather than 'Some changes may require a guest reboot to take effect.'
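A sketch of the requested runtime behavior using virsh (the domain name `rhel6` and CPU numbers are illustrative; these commands require a running libvirt host, so no output is asserted here):

```shell
# Pin vCPU 0 of the running domain "rhel6" to host CPUs 0-1;
# this takes effect immediately, with no guest reboot required.
virsh vcpupin rhel6 0 0-1

# With no vCPU argument, show the current pinning of every vCPU.
virsh vcpupin rhel6

# A vCPU number the guest does not have should fail with
# "error: vcpupin: Invalid vCPU number." instead of a reboot notice.
virsh vcpupin rhel6 99 0-1
```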
Version-Release number of selected component (if applicable):
Steps to Reproduce:
This feature request did not get resolved in time for Feature Freeze
for the current Red Hat Enterprise Linux release and has now been
denied. It has been proposed for the next Red Hat Enterprise Linux release.
** If you would still like this feature request considered for
the current release, please ask your support representative to
file an exception on your behalf. **
NB it doesn't make sense to refer to this as NUMA pinning, since NUMA pinning can only be done at startup. Once a VM is running, its memory has already been allocated from whatever NUMA node it started running on. You can change pinning of the virtual CPUs at runtime, but this will not affect NUMA placement of the guest's memory.
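As a hedged sketch of what runtime vCPU pinning looks like through the libvirt Python bindings: `virDomain.pinVcpu()` takes a boolean tuple with one entry per host CPU. The helper below builds such a map; the domain name and CPU counts are illustrative assumptions, and the libvirt calls are commented out because they need a running hypervisor.

```python
def cpumap(host_cpus, pinned):
    """Build the boolean CPU map libvirt's pinVcpu() expects:
    one entry per host CPU, True where the vCPU may run."""
    allowed = set(pinned)
    return tuple(cpu in allowed for cpu in range(host_cpus))

# On a 4-CPU host, restrict a vCPU to host CPUs 0 and 1.
print(cpumap(4, [0, 1]))  # (True, True, False, False)

# Applying it to a running guest (requires a libvirt host;
# "rhel6" is a hypothetical domain name):
# import libvirt
# conn = libvirt.open("qemu:///system")
# dom = conn.lookupByName("rhel6")
# dom.pinVcpu(0, cpumap(4, [0, 1]))  # repins vCPU 0 at runtime
```

Note that, per the comment above, this changes only CPU affinity; the guest's memory stays on whatever NUMA node it was allocated from at startup.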
This feature has been supported by libvirt for a long time but has never been supported by virt-manager, so it is really a bug that needs to be fixed.
Fixed in virt-manager-0.8.6-1.el6
Verified this bug PASS with virt-manager-0.8.6-2.el6.noarch
Could perform CPU pinning for each vCPU at runtime via virt-manager
Verified this bug PASS with virt-manager-0.8.6-3.el6.noarch
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.