Bug 1808788 - VM configured with 16 CPUs fails on start with unsupported configuration error
Summary: VM configured with 16 CPUs fails on start with unsupported configuration error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Steven Rosenberg
QA Contact: Polina
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-01 09:03 UTC by Polina
Modified: 2020-06-23 08:32 UTC
CC List: 4 users

Fixed In Version: ovirt-engine 4.4.0-26 b5b5c99ca2f
Doc Type: Bug Fix
Doc Text:
Previously, trying to run a VM failed with an unsupported configuration error if its configuration did not specify a NUMA node. This happened because the domain XML was missing its NUMA node section, and VMs require at least one NUMA node to run. The current release fixes this issue: if the user has not specified any NUMA nodes, the engine generates a NUMA node section. As a result, a VM with no NUMA nodes specified launches regardless of how many offline CPUs are available.
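For illustration, the generated section in the domain XML resembles the following fragment (values are illustrative only, not the engine's exact output; a single cell covering all vCPUs and the full VM memory):

<cpu>
  ...
  <numa>
    <cell id='0' cpus='0-15' memory='16777216' unit='KiB'/>
  </numa>
</cpu>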
Clone Of:
Environment:
Last Closed: 2020-05-20 20:02:03 UTC
oVirt Team: Virt
Embargoed:
pm-rhel: ovirt-4.4+


Attachments
logs (799.41 KB, application/gzip)
2020-03-01 09:03 UTC, Polina


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 107376 0 master MERGED core: Ensure at least one numa node is sent 2021-02-07 17:51:37 UTC

Description Polina 2020-03-01 09:03:19 UTC
Created attachment 1666729 [details]
logs

Description of problem: The VM configured with 'Total Virtual CPUs' = 16 fails on start.

2020-02-27 15:36:21,121+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error. Exit message: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug.


Version-Release number of selected component (if applicable):
http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-20/

How reproducible: 100% in a certain environment


Steps to Reproduce:
1. Build the VM based on the infra template latest-rhel-guest-image-8.1-infra, like all the VMs in automation environments. The setup under test uses hosts with CPU Type 'Secure Intel Cascadelake Server Family' (noted here, though it may not be related).
2. Configure the VM with 'Total Virtual CPUs' = 16. 
3. Start the VM.

Actual results:
The VM fails to start, and the reported error is irrelevant to the configuration (nothing refers to NUMA in this case, and there is no NUMA support in the setup).
The relevant entry in engine.log:
2020-02-27 15:36:21,121+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error. Exit message: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug.
2020-02-27 15:36:21,125+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] add VM '5c1e9021-da7a-42b7-82db-57916bd5b399'(golden_env_mixed_virtio_0) to rerun treatment
2020-02-27 15:36:21,135+02 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] Rerun VM '5c1e9021-da7a-42b7-82db-57916bd5b399'. Called from VDS 'host_mixed_1'

from vdsm.log

2020-02-27 16:12:34,725+0200 ERROR (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') The vm start process failed (vm:834)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 768, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2570, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirt.libvirtError: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug
2020-02-27 16:12:34,725+0200 INFO  (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') Changed state to Down: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug (code=1) (vm:1592)
2020-02-27 16:12:34,730+0200 INFO  (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') Stopping connection (guestagent:441)
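
For reference, the underlying libvirt behavior can be reproduced outside of oVirt with a minimal sketch like the one below (assumptions: a local qemu:///system connection and the hypothetical domain name 'numa-repro'; depending on the libvirt version, the error may already be raised by defineXML() rather than by createWithFlags()):

import libvirt

# Memory hotplug is enabled via <maxMemory>, but no <numa> section is given.
DOMAIN_XML = """
<domain type='kvm'>
  <name>numa-repro</name>
  <memory unit='KiB'>1048576</memory>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <vcpu>16</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open('qemu:///system')
dom = None
try:
    dom = conn.defineXML(DOMAIN_XML)
    dom.createWithFlags(0)
except libvirt.libvirtError as e:
    # Expected: unsupported configuration: At least one numa node has
    # to be configured when enabling memory hotplug
    print(e)
finally:
    if dom is not None:
        dom.undefine()
    conn.close()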
Expected results:
The VM starts successfully.


Additional info:

the host's lscpu output:
[root@janus01 qemu]# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  2
Core(s) per socket:  16
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Stepping:            7
CPU MHz:             1000.451
BogoMIPS:            4600.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            22528K
NUMA node0 CPU(s):   0-31
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities

Comment 1 Ryan Barry 2020-03-02 00:12:25 UTC
As discussed, the NUMA part is not irrelevant. Even VMs without distinct NUMA groups are still configured with one, and it is required for memory hotplug by qemu.

The problem is that, since there are no offlined CPUs, this group is not created. It should be an easy fix.
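
The merged change (gerrit 107376, "core: Ensure at least one numa node is sent") addresses this on the engine side. The engine code is Java, so the following Python sketch only illustrates the logic, with hypothetical names:

def ensure_numa_nodes(numa_nodes, num_vcpus, memory_kib):
    # If the user configured no NUMA nodes, synthesize a single node
    # spanning all vCPUs and all memory, since qemu requires at least
    # one node when memory hotplug is enabled.
    if numa_nodes:
        return numa_nodes
    return [{'index': 0, 'cpus': list(range(num_vcpus)), 'memory_kib': memory_kib}]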

Comment 2 Polina 2020-03-05 08:48:04 UTC
Small addition: this also happens on an el8 engine (http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-23/).

Comment 3 Polina 2020-03-25 17:32:07 UTC
Verified on http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-26

Comment 4 Sandro Bonazzola 2020-05-20 20:02:03 UTC
This bug is included in the oVirt 4.4.0 release, published on May 20th, 2020.

Since the problem described in this bug report should be
resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

