Bug 1220122 - Run vm with one cpu and two numa nodes failed
Summary: Run vm with one cpu and two numa nodes failed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.3
Assignee: Dudi Maroshi
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On: 1196235
Blocks:
 
Reported: 2015-05-10 09:29 UTC by rhev-integ
Modified: 2016-02-10 20:19 UTC
CC List: 13 users

Fixed In Version: org.ovirt.engine-root-3.5.3-2
Doc Type: Bug Fix
Doc Text:
Previously, validation of the correlation between NUMA nodes and CPU cores was missing, which resulted in an inefficient NUMA architecture. This update adds validation of the NUMA node to CPU core correlation in both the GUI and the REST API.
Clone Of: 1196235
Environment:
Last Closed: 2015-06-15 13:28:34 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1095 0 normal SHIPPED_LIVE Red Hat Enterprise Virtualization Manager 3.5.3 update 2015-06-15 17:26:37 UTC
oVirt gerrit 39317 0 master MERGED engine: validate (vm NUMA nodes) <= (vm CPU cores) Never
oVirt gerrit 40824 0 ovirt-engine-3.5 MERGED engine: validate (vm NUMA nodes) <= (vm CPU cores) Never
oVirt gerrit 40825 0 ovirt-engine-3.5.3 MERGED engine: validate (vm NUMA nodes) <= (vm CPU cores) Never
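The merged patches above enforce that a VM may not define more NUMA nodes than it has CPU cores. A minimal sketch of that rule, written in Python purely for illustration (the actual ovirt-engine validation is Java code and is not reproduced here):

    def validate_numa_node_count(numa_node_count, cpu_cores):
        """Return an error message if the NUMA layout is invalid, else None."""
        # Rule enforced by the patches: (vm NUMA nodes) <= (vm CPU cores).
        if numa_node_count > cpu_cores:
            return ("Assigned {0} NUMA nodes for {1} CPU cores. "
                    "Cannot assign more NUMA nodes than CPU cores."
                    .format(numa_node_count, cpu_cores))
        return None

    # The case from this bug's title: one CPU core, two NUMA nodes -> rejected.
    assert validate_numa_node_count(2, 1) is not None
    assert validate_numa_node_count(2, 16) is None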

Comment 2 Artyom 2015-05-17 08:21:58 UTC
Checked on rhevm-3.5.3-0.2.el6ev.noarch; it is still possible to create a VM NUMA node without any cores via REST.
1) Create a new VM (pinned to a host with NUMA)
2) Create a new VM NUMA node via REST (see the sketch after the libvirt output below):
<vm_numa_node>
<index>0</index>
<memory>512</memory>
<cpu>
<cores></cores>
</cpu>
</vm_numa_node>
3) Start the VM
The VM failed to start with a libvirt error:
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2287, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3351, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3427, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: invalid argument: Failed to parse bitmap '

The <cpu> section of the generated libvirt XML:
<cpu match="exact">
    <model>Opteron_G1</model>
    <topology cores="1" sockets="16" threads="1"/>
    <numa>
        <cell cpus="" memory="524288"/>
        <cell cpus="" memory="524288"/>
    </numa>
</cpu>
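A minimal sketch of how step 2 above could be driven from Python, assuming the oVirt 3.5 REST API exposes VM NUMA nodes under /api/vms/{vm_id}/numanodes; the engine URL, VM id, and credentials below are illustrative and not taken from this report. The empty <cores> element is what ends up as the empty cpus="" bitmap that libvirt then fails to parse:

    import requests

    ENGINE_URL = "https://engine.example.com"        # hypothetical engine address
    VM_ID = "00000000-0000-0000-0000-000000000000"   # hypothetical VM id

    # Same body as in step 2 above.
    BODY = """\
    <vm_numa_node>
        <index>0</index>
        <memory>512</memory>
        <cpu>
            <cores></cores>
        </cpu>
    </vm_numa_node>
    """

    response = requests.post(
        "{0}/api/vms/{1}/numanodes".format(ENGINE_URL, VM_ID),
        data=BODY,
        headers={"Content-Type": "application/xml"},
        auth=("admin@internal", "password"),         # illustrative credentials
        verify=False,                                # lab setup with a self-signed CA
    )
    print(response.status_code)
    print(response.text)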
Via webadmin the fix works, but when validation fails:
Error while executing action:

test_numa:

    Cannot edit VM. Assigned 2 NUMA nodes for 1 CPU cores. Cannot assign more NUMA nodes than CPU cores.

the engine closes the "New Virtual Machine" or "Edit Virtual Machine" window, which can be very annoying (if you have already configured some additional settings on the VM). In my opinion, the error message should be shown without closing the "New Virtual Machine" or "Edit Virtual Machine" window.

Comment 3 Dudi Maroshi 2015-05-19 09:04:30 UTC
Checked comment 2. Working as designed.
The complaint in comment 2 is valid.
We need to open a new bug for comment 2, with the title: "Adding NUMA node with 0 CPU cores fails libvirt."

Reason for opening a new bug: the current bug is about validating the VM's NUMA nodes against its CPU cores, while the new bug is about validating a single NUMA node request.

As for the complaint about the inconvenient error handling (which resets all the user's work):
this is a known architectural deficiency and will not be addressed in the current version.

Comments appreciated.

Comment 4 Artyom 2015-05-19 12:43:29 UTC
OK, thanks for the explanation. I have already opened a couple of bugs connected to NUMA validation under REST.
Verified on rhevm-3.5.3-0.2.el6ev.noarch

Comment 6 errata-xmlrpc 2015-06-15 13:28:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1095.html

