Bug 1256716

Summary: Possible to pin a VM NUMA node to a NUMA node of a host that the VM is not pinned to
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.6.0
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Keywords: Triaged
Reporter: Artyom <alukiano>
Assignee: Roman Mohr <rmohr>
QA Contact: Artyom <alukiano>
CC: alukiano, dfediuck, gklein, lsurette, rbalakri, rgolan, Rhev-m-bugs, rmohr, srevivo, ykaul
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
oVirt Team: SLA
Type: Bug
Last Closed: 2016-04-20 01:29:15 UTC
Doc Type: Bug Fix
Doc Text:
Previously, when examining the virtual NUMA node pinning on host A, you could also see NUMA nodes from virtual machines which were pinned to host B, and even pin them to NUMA nodes on host A. This broke the existing pinning, and the virtual machine would fail to start because of invalid NUMA pinning. Now, Red Hat Enterprise Virtualization detects invalid pinning when the preferred host changes, unpins invalid nodes when entering the "NUMA Pinning" dialogue, and only shows virtual machines that are pinned to the specific host.
Attachments: screenshot

Description Artyom 2015-08-25 10:30:09 UTC
Description of problem:
It is possible to pin a VM NUMA node to a NUMA node of a host that the VM is not pinned to.

Version-Release number of selected component (if applicable):
rhevm-3.6.0-0.12.master.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have two hosts in a cluster, one with NUMA support and one without (i.e. the host reports only a single NUMA node; see the capabilities sketch after these steps)
2. Create a VM that is pinned to the host without NUMA support
3. Create a NUMA node on the VM and open the "NUMA Pinning" window under VM -> Edit -> Host
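
For reference, a host without NUMA support shows up with a single cell in the 'virsh capabilities' output. A trimmed, hypothetical sketch (IDs and sizes are illustrative, not taken from this bug):

<!-- Hypothetical excerpt of 'virsh capabilities' on a host without
     NUMA support: the whole host is reported as a single cell. -->
<host>
  <topology>
    <cells num="1">
      <cell id="0">
        <memory unit="KiB">16777216</memory>
      </cell>
    </cells>
  </topology>
</host>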

Actual results:
In the "NUMA Pinning" window you can see the NUMA nodes of a host that the VM is not pinned to, and you can even pin the VM's NUMA nodes to that host's NUMA nodes.

Expected results:
I expect a check of whether the host the VM is pinned to has NUMA support; if it does not, the "NUMA Pinning" button should be greyed out, and we certainly should not show all the NUMA nodes in the cluster.

Additional info:
Also, when you run such a VM, the start fails with: libvirtError: invalid argument: Failed to parse bitmap ''
We also mix up two different things regarding VMs with NUMA nodes (see the sketch after this list):
1) A VM can have a NUMA node without being pinned to any host; it is simply part of the VM's NUMA architecture. I see no real reason to restrict the user to creating VM NUMA nodes only when the VM is pinned to a host. This corresponds to the <numa> element in libvirt.
2) The user can pin a VM NUMA node only if the VM is pinned to a host and that host has NUMA support (at least two NUMA nodes). This corresponds to the <numatune> element in libvirt.
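
To make the distinction concrete, here is a minimal, hypothetical libvirt domain XML sketch (cell sizes and node IDs are illustrative, not taken from this bug):

<!-- 1) Guest NUMA topology (<numa>): valid regardless of host pinning -->
<cpu>
  <numa>
    <cell id="0" cpus="0-1" memory="1048576" unit="KiB"/>
    <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  </numa>
</cpu>

<!-- 2) Host pinning (<numatune>): maps each guest cell onto a physical
     host node, so it only makes sense on a NUMA-capable host. An empty
     nodeset here (nodeset="") is presumably what produces the
     "Failed to parse bitmap ''" error quoted above. -->
<numatune>
  <memory mode="strict" nodeset="0-1"/>
  <memnode cellid="0" mode="strict" nodeset="0"/>
  <memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>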

Comment 1 Roman Mohr 2015-11-25 15:31:53 UTC
Missed 3.6.1-rc1. Now merged and will be in rc2.

Comment 2 Artyom 2015-12-07 15:56:20 UTC
Checked on rhevm-3.6.1.1-0.1.el6.noarch
1) Have a host with NUMA support and a host without under the same cluster
2) Create a VM with two vNUMA nodes
3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button is not greyed out); I can still pin a vNUMA node to a pNUMA node (in the case of the host without NUMA support I see only one pNUMA node)
4) But when I run the VM I receive the error message:
The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes within host's physical NUMA nodes

Question: is this the desired behavior? Again, in my opinion it would be better to grey out the "NUMA Pinning" button instead of failing to start the VM with an error message, and if we do decide to show an error message it should be more explicit, e.g.:
"Cannot start a VM with NUMA pinning on a host without NUMA support"

Comment 3 Roman Mohr 2015-12-08 13:42:27 UTC
(In reply to Artyom from comment #2)
> Checked on rhevm-3.6.1.1-0.1.el6.noarch
> 1) Have a host with NUMA support and a host without under the same cluster
> 2) Create a VM with two vNUMA nodes
> 3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button
> is not greyed out); I can still pin a vNUMA node to a pNUMA node (in the
> case of the host without NUMA support I see only one pNUMA node)
> 4) But when I run the VM I receive the error message:
> The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter
> Memory because cannot accommodate memory of VM's pinned virtual NUMA
> nodes within host's physical NUMA nodes

I explicitly tested this scenario with one NUMA node when I rewrote that part. It should work when there is sufficient memory. Maybe there really is not sufficient memory there? Also, did you use 'strict' or 'interleave' mode?

> Question: is this the desired behavior? Again, in my opinion it would be
> better to grey out the "NUMA Pinning" button instead of failing to start
> the VM with an error message, and if we do decide to show an error
> message it should be more explicit, e.g.:
> "Cannot start a VM with NUMA pinning on a host without NUMA support"

You are right; I only fixed the pinning error. I did not change the visibility of the button.

I see two solutions here: make 'NumaSupport' always visible, also for hosts, or grey the button out in the VM edit dialog, as you suggested. We can fix this in a separate bug; what do you think?

Comment 4 Artyom 2015-12-09 12:20:11 UTC
1) I checked it again, and the host has a sufficient amount of memory to run the VM.
If I run the VM in interleave mode it starts successfully, while in strict mode it fails to start with the error from comment 2. Is this the desired behavior? (See the mode sketch after point 2.)

2) I will open an additional bug to grey out the "NUMA Pinning" button in the case where the host does not have NUMA support.
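
For context, the difference between the two runs comes down to the mode attribute of the generated <numatune> element; a hypothetical sketch (nodeset values are illustrative):

<!-- strict: all guest memory must come from the listed host nodes;
     the start fails if the request cannot be satisfied. -->
<numatune>
  <memory mode="strict" nodeset="0"/>
</numatune>

<!-- interleave: memory pages are spread round-robin across the listed
     host nodes; allocation does not hard-fail in the same way. -->
<numatune>
  <memory mode="interleave" nodeset="0"/>
</numatune>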

Comment 5 Roman Mohr 2015-12-10 09:57:08 UTC
(In reply to Artyom from comment #4)
> 1) I checked it again, and the host has a sufficient amount of memory to
> run the VM. If I run the VM in interleave mode it starts successfully,
> while in strict mode it fails to start with the error from comment 2. Is
> this the desired behavior?

One last thing: could you post an image of the NUMA pinning screen? If the single node itself does not have enough memory (maybe it has 0?), then the error is correct. If the host memory is treated as memory that is not owned by the single NUMA node, then the behaviour is also correct.

I will do some additional tests, but it would be great to see the screenshot.

> 2) I will open an additional bug to grey out the "NUMA Pinning" button in
> the case where the host does not have NUMA support.

Thanks for creating the bug.

Comment 6 Artyom 2015-12-10 13:53:16 UTC
Created attachment 1104348 [details]
screenshot

Comment 7 Artyom 2015-12-10 16:16:34 UTC
Hm, I do not really see a reason why the behavior differs between strict and interleave mode in the case of a single-node architecture. I also tried to reduce the VM memory from 2048 to 1024 MB; the result is the same in strict mode.

Comment 8 Artyom 2015-12-14 09:41:35 UTC
Verified on rhevm-3.6.1.3-0.1.el6.noarch
Now, if the hosts have the same NUMA architecture, the engine tries to preserve the pinning when we change the host.