Bug 1324830 - Updating VM NUMA pinning via the host menu while the VM is running causes the VM to fail to start on its next run
Summary: Updating VM NUMA pinning via the host menu while the VM is running causes the VM to fail to start on its next run
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 3.6.5.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.0.2
Target Release: 4.0.2
Assignee: Roman Mohr
QA Contact: Artyom
URL:
Whiteboard:
Depends On: 1350861
Blocks:
 
Reported: 2016-04-07 11:44 UTC by Artyom
Modified: 2016-08-12 14:25 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-12 14:25:24 UTC
oVirt Team: SLA
Embargoed:
ykaul: ovirt-4.0.z+
rule-engine: blocker+
mgoldboi: planning_ack+
rgolan: devel_ack+
mavital: testing_ack+


Attachments
engine and vdsm log (1.13 MB, application/zip)
2016-04-07 11:44 UTC, Artyom
no flags Details


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 60476 0 master MERGED webadmin: On NUMA pinning update send a fully populated NUMA node 2016-07-11 13:47:25 UTC
oVirt gerrit 60511 0 ovirt-engine-4.0 MERGED webadmin: On NUMA pinning update send a fully populated NUMA node 2016-07-11 14:13:51 UTC
oVirt gerrit 60512 0 ovirt-engine-4.0.1 MERGED webadmin: On NUMA pinning update send a fully populated NUMA node 2016-07-11 14:14:41 UTC

Description Artyom 2016-04-07 11:44:55 UTC
Created attachment 1144689 [details]
engine and vdsm log

Description of problem:
Updating VM NUMA pinning via the host menu while the VM is running results in the VM failing to start on the next run.

Version-Release number of selected component (if applicable):
rhevm-3.6.5.1-0.1.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create VM with one NUMA node
2. Pin VNUMA to PNUMA node
3. Start VM
4. Update VNUMA pinning via the host "NUMA support" menu (drag the VNUMA node to another PNUMA node)
5. Poweroff VM
6. Start VM

Actual results:
VM fails to start

Expected results:
VM starts successfully

Additional info:
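For illustration, the behavior behind these steps, as described by the linked fix ("On NUMA pinning update send a fully populated NUMA node"), can be sketched as follows. All names below are hypothetical and only model the defect, not the actual engine code:

```python
# A virtual NUMA node as the engine stores it: index, memory,
# cpu list, and the physical node(s) it is pinned to.
stored_node = {"index": 0, "memory": 2048, "cpus": [0, 1], "pinned_to": [0]}

def update_pinning_partial(partial_node):
    # Buggy behavior (sketch): the host "NUMA support" dialog sent a
    # partially populated node, and a whole-object replace dropped the
    # fields the dialog did not fill in (here: "cpus").
    return partial_node

def update_pinning_full(stored, new_pinning):
    # Fixed behavior (sketch): send a fully populated node, changing
    # only the pinning, so the cpu list survives the update.
    updated = dict(stored)
    updated["pinned_to"] = new_pinning
    return updated

broken = update_pinning_partial({"index": 0, "memory": 2048, "pinned_to": [1]})
fixed = update_pinning_full(stored_node, [1])
```

With the buggy path, `broken` has no `cpus` entry at all, which matches the empty `cpus=` seen in the VM configuration logged in comment 1.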

Comment 1 Roy Golan 2016-04-10 07:58:23 UTC
2016-04-07 12:34:28,030 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-70) [74a8d9d2] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM test_numa is down with error. Exit message: invalid argument: Failed to parse bitmap ''.


The VM configuration sent from the engine is:
numaTune={nodeset=0, mode=interleave},
guestNumaNodes=[{memory=2048, cpus=, nodeIndex=0}],
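The empty `cpus=` list is the key: it ends up as an empty CPU bitmap string, which libvirt rejects with exactly the "Failed to parse bitmap ''" error quoted above. A minimal sketch of such bitmap parsing (hypothetical, mimicking libvirt's strictness, not its actual code):

```python
def parse_cpu_bitmap(spec):
    """Parse a cpuset string like '0-3,6' into a set of CPU indices.

    Mimics the strictness of libvirt's bitmap parser: an empty
    string is an error, not an empty set.
    """
    if not spec:
        raise ValueError("Failed to parse bitmap ''")
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus
```

With the engine sending `cpus=` empty, the equivalent parse raises, matching the exit message in the audit log.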


Can you have a look?

Comment 2 Sandro Bonazzola 2016-05-02 09:50:50 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

Comment 3 Yaniv Lavi 2016-05-23 13:14:34 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 4 Roman Mohr 2016-06-28 14:56:08 UTC
Artyom and I can confirm that while the NUMA pinning still gets lost when editing the VM through the "NUMA Support" dialogue on the host, the VM starts fine.

@rgolan I think we can remove the blocker flag.

Comment 5 Roy Golan 2016-07-04 12:52:11 UTC
But this means that all existing NUMA-pinned VMs will fail to start.

Comment 6 Roman Mohr 2016-07-20 08:54:54 UTC
@rgolan we should do a backport to 3.6

Comment 7 Artyom 2016-07-24 10:30:44 UTC
Verified on rhevm-4.0.2-0.2.rc1.el7ev.noarch

