Description of problem:
It is possible to pin a VM NUMA node to a NUMA node of a host that the VM is not pinned to.

Version-Release number of selected component (if applicable):
rhevm-3.6.0-0.12.master.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have two hosts in a cluster, one with NUMA support and one without (the latter reports only a single NUMA node)
2. Create a VM pinned to the host without NUMA support
3. Create a NUMA node on the VM and open the "NUMA Pinning" window under VM -> Edit -> Host

Actual results:
The "NUMA Pinning" window shows the NUMA nodes of a host that the VM is not pinned to, and the VM NUMA nodes can be pinned to that host's NUMA nodes.

Expected results:
We should check whether the host the VM is pinned to has NUMA support; if it does not, the "NUMA Pinning" button should be greyed out, and we certainly should not show all the NUMA nodes in the cluster.

Additional info:
When such a VM is started, the run fails with:
libvirtError: invalid argument: Failed to parse bitmap ''

We also mix up two different things regarding VMs with NUMA nodes:
1) A VM can have NUMA nodes without being pinned to any host; this is just the VM's internal NUMA architecture, and I see no real reason to restrict creating VM NUMA nodes to VMs that are pinned to a host. This corresponds to the <numa> element in libvirt.
2) A user can pin a VM NUMA node only if the VM is pinned to a host and that host has NUMA support (at least two NUMA nodes). This corresponds to the <numatune> element in libvirt.
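To illustrate the distinction, a minimal sketch of the two libvirt domain XML elements (node IDs, CPU ranges, and memory sizes are illustrative, not taken from this setup; cell memory is in KiB):

  <cpu>
    <numa>
      <!-- guest-internal NUMA topology; does not require host pinning -->
      <cell id='0' cpus='0-1' memory='1048576'/>
      <cell id='1' cpus='2-3' memory='1048576'/>
    </numa>
  </cpu>
  <numatune>
    <!-- maps guest memory onto host NUMA nodes; only meaningful when the
         host actually has the referenced nodes -->
    <memory mode='strict' nodeset='0-1'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>

The "Failed to parse bitmap ''" error above presumably comes from an empty nodeset='' being written into <numatune> when the pinned host node does not actually exist.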
Missed 3.6.1-rc1. Now merged and will be in rc2.
Checked on rhevm-3.6.1.1-0.1.el6.noarch
1) Have a host with NUMA support and a host without it under the same cluster
2) Create a VM with two vNUMA nodes
3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button is not greyed out); I can still pin a vNUMA node to a pNUMA node (for the host without NUMA support I see only one pNUMA node)
4) But when I run the VM I receive the error message:
The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes within host's physical NUMA nodes

Is this the desired behavior? In my opinion it would be better to grey out the "NUMA Pinning" button than to fail the VM start with an error message, and if we do show an error message it should be more explicit, e.g. "Cannot start a VM with NUMA pinning on a host without NUMA support".
(In reply to Artyom from comment #2)
> Checked on rhevm-3.6.1.1-0.1.el6.noarch
> 1) Have a host with NUMA support and a host without it under the same cluster
> 2) Create a VM with two vNUMA nodes
> 3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button is
> not greyed out); I can still pin a vNUMA node to a pNUMA node (for the host
> without NUMA support I see only one pNUMA node)
> 4) But when I run the VM I receive the error message:
> The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter
> Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes
> within host's physical NUMA nodes

I explicitly tested this scenario with one NUMA node when I rewrote that part; it should work when there is sufficient memory. Maybe there really is not enough memory there? Also, did you use 'strict' or 'interleave' mode?

> Is this the desired behavior? In my opinion it would be better to grey out
> the "NUMA Pinning" button than to fail the VM start with an error message,
> and if we do show an error message it should be more explicit, e.g.
> "Cannot start a VM with NUMA pinning on a host without NUMA support".

You are right; I only fixed the pinning error and did not change the visibility of the button. I see two solutions here: make 'NumaSupport' always visible for hosts as well, or grey the button out in the VM edit dialog, as you suggested. We can fix this in a separate bug; what do you think?
1) I checked it again, and the host has a sufficient amount of memory to run the VM. If I run the VM in interleave mode it starts successfully, and in strict mode it fails to start with the error from comment 2. Is this the desired behavior?
2) I will open an additional bug to grey out the "NUMA Pinning" button when the host does not have NUMA support.
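For reference, the difference between the two modes at the libvirt level is the mode attribute of <numatune> (a minimal sketch; the engine-generated XML may differ):

  <numatune>
    <memory mode='strict' nodeset='0'/>      <!-- memory must come from node 0, otherwise the start fails -->
  </numatune>

  <numatune>
    <memory mode='interleave' nodeset='0'/>  <!-- pages are interleaved across the nodeset; with one node this is plain allocation -->
  </numatune>

Presumably the scheduler's Memory filter enforces per-node capacity only for strict pinning, which would explain why only strict mode is rejected.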
(In reply to Artyom from comment #4)
> 1) I checked it again, and the host has a sufficient amount of memory to run
> the VM. If I run the VM in interleave mode it starts successfully, and in
> strict mode it fails to start with the error from comment 2. Is this the
> desired behavior?

One last thing: could you post an image of the NUMA pinning screen? If the single node itself does not have enough memory (maybe it has 0?), then the error is correct. If the host memory is treated as memory that is not owned by the single NUMA node, then the behavior is also correct. I will do some additional tests, but it would be great to see the screenshot.

> 2) I will open an additional bug to grey out the "NUMA Pinning" button when
> the host does not have NUMA support.

Thanks for creating the bug.
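To check how much memory the single node actually owns, numactl --hardware on the host shows the per-node sizes (illustrative output for a single-node host, not from this setup):

  # numactl --hardware
  available: 1 nodes (0)
  node 0 cpus: 0 1 2 3
  node 0 size: 16384 MB
  node 0 free: 12288 MB

If "node 0 size" were 0 or smaller than the VM memory, the filter error above would be expected.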
Created attachment 1104348 [details] screenshot
Hm, I do not really see a reason why the behavior differs between strict and interleave mode on a single-node architecture. I also tried reducing the VM memory from 2048 MB to 1024 MB, with the same result in strict mode.
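To see what the engine actually hands to libvirt in the working interleave case, the generated <numatune> can be inspected on the host while the VM runs ('myvm' is a hypothetical name; -r gives the read-only access that is usually sufficient on a VDSM host):

  # virsh -r dumpxml myvm | grep -A 3 numatune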
Verified on rhevm-3.6.1.3-0.1.el6.noarch
Now, if hosts have the same NUMA architecture, the engine tries to preserve the pinning when the host is changed.