Bug 1265815 - REST allows the creation of too many NUMA nodes
Summary: REST allows the creation of too many NUMA nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: RestAPI
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.2
Target Release: 3.6.2
Assignee: Roman Mohr
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On: 1248049
Blocks:
 
Reported: 2015-09-23 20:25 UTC by Roman Mohr
Modified: 2016-02-29 12:14 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-18 11:00:20 UTC
oVirt Team: SLA
Embargoed:
msivak: ovirt-3.6.z?
rule-engine: planning_ack?
rgolan: devel_ack+
mavital: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 46627 0 master ABANDONED core: Fix check if enought cpus for numa nodes are there Never
oVirt gerrit 48720 0 master MERGED core: Refactor NUMA node validation Never
oVirt gerrit 49163 0 ovirt-engine-3.6 MERGED core: Refactor NUMA node validation Never

Description Roman Mohr 2015-09-23 20:25:17 UTC
Description of problem:
According to Bug 1196235 we should have at least as many virtual CPUs as virtual NUMA nodes. The fix for Bug 1196235 adds the necessary checks when creating or updating a VM configuration, but it is still possible to create too many NUMA nodes through the REST endpoint.

Version-Release number of selected component (if applicable):


How reproducible:
Create a VM with X virtual CPUs and no NUMA nodes, then POST X + Y new NUMA nodes to /api/vms/<guid>/numanodes; every request succeeds.

Steps to Reproduce:
1. Create a VM with X virtual CPUs
2. POST X + Y new NUMA nodes to /api/vms/<guid>/numanodes
3. You will see X + Y 201 responses
4. Check the VM configuration and you will see X + Y nodes

Actual results:
X + Y NUMA nodes are created

Expected results:
The first X creations succeed; the remaining Y return a forbidden response, and only X NUMA nodes end up in the database.

Additional info:
When creating the whole VM, including the NUMA nodes, with a single POST to /api/vms, the check works as expected.
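For illustration, the reproduction above can be sketched in Python. The engine host, the VM id, and the minimal payload shape are placeholders for this sketch, not values taken from a real deployment (authentication is omitted entirely):

```python
import urllib.request

BASE = "https://engine.example.com/ovirt-engine/api"  # hypothetical engine host
VM_ID = "00000000-0000-0000-0000-000000000000"        # VM created with X vCPUs

def numa_node_payload(index, memory_mb=512):
    """Build a minimal XML body for POST /api/vms/<guid>/numanodes."""
    return (
        "<vm_numa_node>"
        f"<index>{index}</index>"
        f"<memory>{memory_mb}</memory>"
        "</vm_numa_node>"
    )

def post_numa_node(index):
    """POST one NUMA node to the sub-collection.

    Before the fix, calling this X + Y times against a VM with X vCPUs
    returned 201 for every request instead of rejecting the last Y.
    """
    req = urllib.request.Request(
        f"{BASE}/vms/{VM_ID}/numanodes",
        data=numa_node_payload(index).encode(),
        headers={"Content-Type": "application/xml"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Looping `post_numa_node(i)` for `i in range(X + Y)` reproduces the report: all X + Y requests succeed.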

Comment 1 Roman Mohr 2015-09-23 20:37:33 UTC
The bug is only observable once Bug 1248049 is fixed; otherwise creating individual NUMA nodes via /api/vms/<guid>/numanodes is not possible.

Comment 2 Red Hat Bugzilla Rules Engine 2015-11-04 16:32:11 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.

Comment 3 Red Hat Bugzilla Rules Engine 2015-11-04 16:32:11 UTC
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset.
Please set the correct milestone or add the z-stream flag.

Comment 4 Red Hat Bugzilla Rules Engine 2015-11-04 16:33:36 UTC
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset.
Please set the correct milestone or add the z-stream flag.

Comment 5 Roman Mohr 2015-12-22 13:24:15 UTC
Still not done. The main patch is not in; only the patches that prepared the ground are done.

Comment 6 Red Hat Bugzilla Rules Engine 2015-12-22 13:24:17 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.

Comment 7 Roman Mohr 2015-12-22 15:34:59 UTC
(In reply to Roman Mohr from comment #5)
> Still not done. The main patch is not  in. Only patches which prepared the
> ground are done.

Ignore that. Wrong bug. This one is done.

Comment 8 Sandro Bonazzola 2015-12-23 13:40:11 UTC
oVirt 3.6.2 RC1 has been released for testing, moving to ON_QA

Comment 9 Artyom 2016-02-04 10:12:45 UTC
Checked on rhevm-3.6.3-0.1.el6.noarch.
I have a VM with:
<cpu>
<topology sockets="4" cores="1" threads="1" />
<architecture>X86_64</architecture>
 </cpu>

Created 4 NUMA nodes; when I tried to create a fifth, I received this error message:
<fault>
<reason>Operation Failed</reason>
<detail>[Cannot ${action} ${type}. Assigned 5 NUMA nodes for 4 CPU cores. Cannot assign more NUMA nodes than CPU cores.]</detail>
 </fault>
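The count check behind this fault can be approximated as a small sketch; the function name and return shape are illustrative, not the engine's actual code:

```python
def validate_numa_node_count(numa_node_count, cpu_cores):
    """Reject layouts with more NUMA nodes than CPU cores.

    Mirrors the fault above: creating a 5th node on a 4-core VM fails.
    Returns a list of error messages, empty when the layout is valid.
    """
    if numa_node_count > cpu_cores:
        return [
            f"Assigned {numa_node_count} NUMA nodes for {cpu_cores} CPU cores. "
            "Cannot assign more NUMA nodes than CPU cores."
        ]
    return []
```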

But I succeeded in creating many invalid NUMA node configurations that, in my opinion, we must forbid, on the same VM with only 4 CPUs:

<vm_numa_nodes>
 <vm_numa_node href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b/numanodes/d925d50b-b533-4f5e-aded-585dd4a4b540" id="d925d50b-b533-4f5e-aded-585dd4a4b540">
<index>0</index>
<memory>512</memory>
<cpu>
 <cores>
<core index="0" />
<core index="1" />
<core index="2" />
<core index="3" />
 </cores>
</cpu>
<vm href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b" id="c7ecd2dc-dbd3-4419-956f-1249651c0f2b" />
 </vm_numa_node>
 <vm_numa_node href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b/numanodes/d90ba579-eedd-4e6d-82f7-7e6e523e00b2" id="d90ba579-eedd-4e6d-82f7-7e6e523e00b2">
<index>1</index>
<memory>512</memory>
<cpu>
 <cores>
<core index="0" />
<core index="1" />
<core index="2" />
<core index="3" />
 </cores>
</cpu>
<vm href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b" id="c7ecd2dc-dbd3-4419-956f-1249651c0f2b" />
 </vm_numa_node>
 <vm_numa_node href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b/numanodes/0fda92e9-2635-439d-9911-57450b9a9c9b" id="0fda92e9-2635-439d-9911-57450b9a9c9b">
<index>2</index>
<memory>512</memory>
<cpu>
 <cores>
<core index="0" />
<core index="1" />
<core index="2" />
<core index="4" />
 </cores>
</cpu>
<vm href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b" id="c7ecd2dc-dbd3-4419-956f-1249651c0f2b" />
 </vm_numa_node>
 <vm_numa_node href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b/numanodes/7e7a8c5b-2748-4e6b-90a4-7a7e3a6180c9" id="7e7a8c5b-2748-4e6b-90a4-7a7e3a6180c9">
<index>0</index>
<memory>1024</memory>
<cpu>
 <cores>
<core index="0" />
<core index="1" />
<core index="2" />
<core index="3" />
 </cores>
</cpu>
<vm href="/ovirt-engine/api/vms/c7ecd2dc-dbd3-4419-956f-1249651c0f2b" id="c7ecd2dc-dbd3-4419-956f-1249651c0f2b" />
 </vm_numa_node>
 </vm_numa_nodes>

So the problems are:
1) It is possible to create NUMA nodes with the same index - bug https://bugzilla.redhat.com/show_bug.cgi?id=1126180
2) The total amount of memory on the NUMA nodes can be bigger than the total VM memory - we validate this on VM start, but it may be better to validate it when NUMA nodes are created or updated
3) NUMA nodes can have the same CPU indexes
4) NUMA nodes can have more CPUs than the VM has

I remember you said you have some big patch for the whole NUMA validation stuff. Does it cover all of the above problems?
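The four problems above can be expressed together as a validation sketch; the node dictionary shape and function name are illustrative assumptions, not the engine's actual data model:

```python
def validate_numa_nodes(nodes, vm_cpu_count, vm_memory_mb):
    """Check a proposed NUMA layout for the four problems listed above.

    `nodes` is a list of dicts like {"index": 0, "memory": 512, "cores": [0, 1]}.
    Returns a list of error messages, empty when the layout is valid.
    """
    errors = []
    indexes = [n["index"] for n in nodes]
    if len(indexes) != len(set(indexes)):
        errors.append("duplicate NUMA node index")            # problem 1
    if sum(n["memory"] for n in nodes) > vm_memory_mb:
        errors.append("total NUMA memory exceeds VM memory")  # problem 2
    all_cores = [c for n in nodes for c in n["cores"]]
    if len(all_cores) != len(set(all_cores)):
        errors.append("same CPU index pinned to more than one node")  # problem 3
    if any(c >= vm_cpu_count for c in all_cores):
        errors.append("CPU index beyond the VM's CPU count")  # problem 4
    return errors
```

Run against a layout like the one shown in this comment (two nodes with index 0, overlapping core pinnings, a core index of 4 on a 4-CPU VM), this reports all four problems at creation time rather than at VM start.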

Comment 10 Roman Mohr 2016-02-29 12:14:59 UTC
(In reply to Artyom from comment #9)
> Checked on rhevm-3.6.3-0.1.el6.noarch
> [...]
> So the problems are:
> 1) It is possible to create NUMA nodes with the same index - bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1126180
> 2) The total amount of memory on the NUMA nodes can be bigger than the total
> VM memory - we validate this on VM start, but it may be better to validate
> it when NUMA nodes are created or updated
> 3) NUMA nodes can have the same CPU indexes
> 4) NUMA nodes can have more CPUs than the VM has
> 
> I remember you said you have some big patch for the whole NUMA validation
> stuff. Does it cover all of the above problems?

These are all valid additional things we have to prevent. The main refactoring was about performing the same validations for REST and the UI and adding many additional checks, but 2-4 are still not covered.

