Bug 1256716
| Field | Value |
| --- | --- |
| Summary | Possible to pin a VM NUMA node to a NUMA node of a host that the VM is not pinned to |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | ovirt-engine |
| Reporter | Artyom <alukiano> |
| Assignee | Roman Mohr <rmohr> |
| Status | CLOSED CURRENTRELEASE |
| QA Contact | Artyom <alukiano> |
| Severity | high |
| Priority | unspecified |
| Version | 3.6.0 |
| CC | alukiano, dfediuck, gklein, lsurette, rbalakri, rgolan, Rhev-m-bugs, rmohr, srevivo, ykaul |
| Target Milestone | ovirt-3.6.1 |
| Target Release | 3.6.1 |
| Keywords | Triaged |
| Hardware | All |
| OS | Linux |
| Doc Type | Bug Fix |
| Last Closed | 2016-04-20 01:29:15 UTC |
| Type | Bug |
| oVirt Team | SLA |

Doc Text:

Previously, when examining the virtual NUMA node pinning on host A, you could also see NUMA nodes from virtual machines which were pinned to host B, and even pin them to NUMA nodes on host A. This broke the existing pinning, and the virtual machine would fail to start because of invalid NUMA pinning. Now, Red Hat Enterprise Virtualization detects invalid pinning when the preferred host changes, unpins invalid nodes when entering the "NUMA Pinning" dialogue, and only shows virtual machines that are pinned to the specific host.
Description
Artyom
2015-08-25 10:30:09 UTC

---

Missed 3.6.1-rc1. Now merged and will be in rc2.

---

Checked on rhevm-3.6.1.1-0.1.el6.noarch:

1) Have a host with NUMA support and a host without it under the same cluster.
2) Create a VM with two vNUMA nodes.
3) Pin the VM to the host without NUMA support (the "NUMA Pinning" button is not greyed out). I can still pin a vNUMA node to a pNUMA node (on the host without NUMA support I see only one pNUMA node).
4) When I run the VM, I receive the error message: "The host cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter Memory because cannot accommodate memory of VM's pinned virtual NUMA nodes within host's physical NUMA nodes."

Is this the desired behavior? In my opinion it would be better to grey out the "NUMA Pinning" button instead of failing to start the VM with an error message, and if we decide to show an error message it should be more explicit, e.g. "Cannot start a VM with NUMA pinning on a host without NUMA support".

---

(In reply to Artyom from comment #2)
> 4) When I run the VM, I receive the error message: "The host
> cyan-vdsg.qa.lab.tlv.redhat.com did not satisfy internal filter Memory
> because cannot accommodate memory of VM's pinned virtual NUMA nodes within
> host's physical NUMA nodes."

I explicitly tested this scenario with one NUMA node when I rewrote that part. It should work when there is sufficient memory. Maybe there really is not sufficient memory there? Also, did you use 'strict' or 'interleave' mode?

> Is this the desired behavior? In my opinion it would be better to grey out
> the "NUMA Pinning" button instead of failing to start the VM with an error
> message, and if we decide to show an error message it should be more
> explicit.

You are right; I only fixed the pinning error and did not change the visibility of the button. I see two solutions here: make the "NUMA Support" button always visible for hosts as well, or grey it out in the VM edit dialog, as you suggested. We can fix this in a separate bug; what do you think?

---

1) I checked it again and the host has a sufficient amount of memory to run the VM. If I run the VM in interleave mode it starts successfully, and in strict mode it fails to start with the error from comment 2. Is this the desired behavior?
2) I will open an additional bug to grey out the "NUMA Pinning" button when the host does not have NUMA support.

---

(In reply to Artyom from comment #4)
> 1) I checked it again and the host has a sufficient amount of memory to run
> the VM. If I run the VM in interleave mode it starts successfully, and in
> strict mode it fails to start with the error from comment 2.

One last thing: could you post an image of the NUMA pinning screen? If the single node itself does not have enough memory (maybe it has 0?), then the error is correct. If the host memory is treated as memory which is not owned by the single NUMA node, then the behaviour is correct. I will do some additional tests, but it would be great to see the screenshot.

> 2) I will open an additional bug to grey out the "NUMA Pinning" button when
> the host does not have NUMA support.

Thanks for creating the bug.

---

Created attachment 1104348 [details]
screenshot

---

Hm, I do not really see a reason why the behavior differs between strict and interleave mode in the case of a single-node architecture. I also tried reducing the VM memory from 2048 MB to 1024 MB; the result was the same in strict mode.

---

Verified on rhevm-3.6.1.3-0.1.el6.noarch. Now, if hosts have the same NUMA architecture, the engine tries to preserve the pinning when the host is changed.
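The hypothesis raised in the thread (strict-mode pinning fails because the lone physical NUMA node owns little or no memory of its own, while interleave mode only needs enough memory on the host as a whole) can be sketched as a toy model. This is a hypothetical illustration of the scheduler's "Memory" filter behaviour discussed above, not the actual ovirt-engine code; all function names and numbers are invented for the example:

```python
def fits_strict(vnuma_nodes, pnuma_free_mb):
    """Strict mode: each pinned vNUMA node's memory must fit within the
    free memory of the physical NUMA node(s) it is pinned to."""
    demand = {}
    for vnode in vnuma_nodes:
        for pnode in vnode["pinned_to"]:
            # A vNUMA node pinned to several pNUMA nodes may be split;
            # assume an even split for this sketch.
            demand[pnode] = demand.get(pnode, 0) + vnode["mem_mb"] / len(vnode["pinned_to"])
    return all(demand[p] <= pnuma_free_mb.get(p, 0) for p in demand)

def fits_interleave(vnuma_nodes, host_free_mb):
    """Interleave mode (simplified): memory is spread across nodes, so only
    the total free memory of the host matters in this model."""
    return sum(v["mem_mb"] for v in vnuma_nodes) <= host_free_mb

# Reproducer shape from the comments: a 2048 MB VM with two vNUMA nodes,
# both pinned to the single pNUMA node of a host without real NUMA support.
vm = [{"mem_mb": 1024, "pinned_to": [0]},
      {"mem_mb": 1024, "pinned_to": [0]}]

# Hypothesis: the single node reports 0 MB of its own, even though the
# host as a whole has plenty of free memory (4096 MB here, illustrative).
host_pnuma = {0: 0}
host_total = 4096

print(fits_strict(vm, host_pnuma))       # False: strict pinning rejected
print(fits_interleave(vm, host_total))   # True: interleave still starts
```

Under this model the behaviour reported in the thread is internally consistent: reducing the VM from 2048 MB to 1024 MB would not help strict mode, because any demand at all exceeds a node that owns 0 MB.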