Bug 1256716 - Possible to pin a VM NUMA node to a NUMA node of a host the VM is not pinned to
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
3.6.0
All Linux
unspecified Severity high
: ovirt-3.6.1
: 3.6.1
Assigned To: Roman Mohr
Artyom
: Triaged
Depends On:
Blocks:
 
Reported: 2015-08-25 06:30 EDT by Artyom
Modified: 2016-04-19 21:29 EDT (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, when examining the virtual NUMA node pinning on host A, you could also see NUMA nodes from virtual machines which were pinned to host B, and even pin them to NUMA nodes on host A. This broke the existing pinning, and the virtual machine would fail to start because of invalid NUMA pinning. Now, Red Hat Enterprise Virtualization detects invalid pinning when the preferred host changes, unpins invalid nodes when entering the "NUMA Pinning" dialogue, and only shows virtual machines that are pinned to the specific host.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-04-19 21:29:15 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
screenshot (45.54 KB, image/png)
2015-12-10 08:53 EST, Artyom


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 48720 None None None 2016-01-19 05:57 EST
oVirt gerrit 49150 ovirt-engine-3.6.1 MERGED webadmin: Unpin numa nodes when preferred host changes Never

Description Artyom 2015-08-25 06:30:09 EDT
Description of problem:
It is possible to pin a VM NUMA node to a NUMA node of a host that the VM is not pinned to.

Version-Release number of selected component (if applicable):
rhevm-3.6.0-0.12.master.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have two hosts in a cluster, one with NUMA support and one without (i.e. the host reports only a single NUMA node)
2. Create a VM pinned to the host without NUMA support
3. Create a NUMA node on the VM and open the "NUMA Pinning" window under VM -> Edit -> Host

Actual results:
In the "NUMA Pinning" window you can see the NUMA nodes of a host that the VM is not pinned to, and you can also pin the VM's NUMA nodes to that host's NUMA nodes.

Expected results:
I expect that we check whether the host the VM is pinned to has NUMA support; if not, the "NUMA Pinning" button should be greyed out, and we should certainly not show all NUMA nodes that exist in the cluster.

Additional info:
Also, starting such a VM fails with: libvirtError: invalid argument: Failed to parse bitmap ''
We also mix up two different things regarding VMs with NUMA nodes:
1) A VM can have NUMA nodes without being pinned to any host; this is just the VM's own NUMA topology, and I see no real reason to restrict creating VM NUMA nodes to VMs that are pinned to a host. This corresponds to the <numa> element in libvirt.
2) The user can pin a VM NUMA node only if the VM is pinned to a host and that host has NUMA support (at least two NUMA nodes). This corresponds to the <numatune> element in libvirt.
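The distinction between the two libvirt elements can be illustrated with a minimal domain XML sketch (the cell sizes and node numbers below are illustrative, not taken from the reported setup):

```xml
<!-- Guest NUMA topology: defines the VM's own NUMA architecture,
     independent of any host pinning (point 1 above). -->
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
    <cell id='1' cpus='2-3' memory='1048576' unit='KiB'/>
  </numa>
</cpu>

<!-- Host pinning: maps the guest cells onto physical host NUMA nodes,
     which only makes sense when the VM is pinned to a host that has
     real NUMA support (point 2 above). -->
<numatune>
  <memory mode='strict' nodeset='0-1'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
```

The "Failed to parse bitmap ''" error from the additional info is consistent with an empty nodeset value ending up in such an element when the pinning refers to nodes of the wrong host.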
Comment 1 Roman Mohr 2015-11-25 10:31:53 EST
Missed 3.6.1-rc1. Now merged and will be in rc2.
Comment 2 Artyom 2015-12-07 10:56:20 EST
Checked on rhevm-3.6.1.1-0.1.el6.noarch
1) Have a host with NUMA support and one without under the same cluster
2) Create a VM with two vNUMA nodes
3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button is not greyed out); I can still pin a vNUMA node to a pNUMA node (for the host without NUMA support I see only one pNUMA node)
4) But when I run the VM I receive an error message:
The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes within host's physical NUMA nodes

Is this the desired behavior? Again, in my opinion it would be better to grey out the "NUMA Pinning" button instead of failing to start the VM with an error message; and if we do decide to show an error message, it should be more explicit, e.g.:
"Cannot start a VM with NUMA pinning on a host without NUMA support"
Comment 3 Roman Mohr 2015-12-08 08:42:27 EST
(In reply to Artyom from comment #2)
> Checked on rhevm-3.6.1.1-0.1.el6.noarch
> 1) have host with NUMA support and without under the same cluster
> 2) create vm with two VNUMA nodes
> 3) pin vm to host without NUMA support(Numa Pinning button not faded), I
> still can pin VNUMA to PNUMA node(in case of host without NUMA support I see
> only one PNUMA node)
> 4) but when I run vm I receive error message:
> The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter
> Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes
> within host's physical NUMA nodes

I explicitly tested this scenario with one NUMA node when I rewrote that part. It should work when there is sufficient memory. Maybe there really is not sufficient memory there? Also, did you use 'strict' or 'interleave' mode?

> Question if it desired behavior? Again by my opinion better to fade Numa
> Pinning button instead of failed to start vm with error message and also if
> we decide to show error message it must be more explicit
> "Can not start vm with NUMA pinning on host without NUMA support"

You are right that I just fixed the pinning error. I did not change the visibility of the button.

I see two solutions here: make "NUMA Support" always visible for hosts too, or grey the button out in the VM edit dialog, as you suggested. We can fix this in a separate bug, what do you think?
Comment 4 Artyom 2015-12-09 07:20:11 EST
1) I checked it again and the host has a sufficient amount of memory to run the VM.
If I run the VM in interleave mode it starts successfully, but in strict mode it fails with the error from comment 2. Is this the desired behavior?

2) I will open an additional bug to grey out the "NUMA Pinning" button when the host does not have NUMA support.
Comment 5 Roman Mohr 2015-12-10 04:57:08 EST
(In reply to Artyom from comment #4)
> 1) I checked it again and host has sufficient amount of memory to run vm.
> If I run vm under interleave mode it succeed to run and if under strict mode
> vm failed to start with error from comment 2, if it desired behavior?

One last thing, could you post an image of the NUMA pinning screen? If the single node itself does not have enough memory (maybe it has 0?) then the error is correct. If the host memory is treated as memory which is not owned by the single NUMA node, then the behaviour is correct.

I will do some additional tests, but it would be great to see the screenshot.

> 2) I will open additional bug to fade button "Numa Pinning" in case when
> host not have "NUMA Support"

Thx for creating the bug
Comment 6 Artyom 2015-12-10 08:53 EST
Created attachment 1104348 [details]
screenshot
Comment 7 Artyom 2015-12-10 11:16:34 EST
Hm, I do not really see why the behavior should differ between strict and interleave mode in the case of a single-node architecture. I also tried reducing the VM memory from 2048 to 1024 MB; the result in strict mode was the same.
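For reference, the two placement policies discussed in this thread correspond to the mode attribute of libvirt's <numatune> memory element (a minimal sketch; the nodeset value is illustrative):

```xml
<!-- strict: all guest memory must come from the listed host nodes;
     allocation (and thus VM start) fails if it cannot. -->
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>

<!-- interleave: guest memory pages are allocated round-robin
     across the listed host nodes. -->
<numatune>
  <memory mode='interleave' nodeset='0'/>
</numatune>
```

This would be consistent with the observed behavior if the engine's Memory filter only enforces the per-node capacity check for strict pinning; whether that is the intended scheduler behavior is exactly the open question here.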
Comment 8 Artyom 2015-12-14 04:41:35 EST
Verified on rhevm-3.6.1.3-0.1.el6.noarch
Now, if the hosts have the same NUMA architecture, the engine tries to preserve the pinning when the host is changed.
