Created attachment 1666729 [details]
logs

Description of problem:
A VM configured with 'Total Virtual CPUs' = 16 fails to start.

2020-02-27 15:36:21,121+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error. Exit message: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug.

Version-Release number of selected component (if applicable):
http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-20/

How reproducible:
100% in certain environments

Steps to Reproduce:
1. Build the VM from the infra template latest-rhel-guest-image-8.1-infra, as all VMs in the automation environments are built. The setup under test uses hosts with CPU Type 'Secure Intel Cascadelake Server Family' (noting it here, though it may be unrelated).
2. Configure the VM with 'Total Virtual CPUs' = 16.
3. Start the VM.

Actual results:
The VM fails to start, and the reported error is misleading: nothing in this case is related to NUMA, and the setup has no NUMA configuration (see the sketch under Additional info below).

The relevant entries in engine.log:

2020-02-27 15:36:21,121+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] EVENT_ID: VM_DOWN_ERROR(119), VM golden_env_mixed_virtio_0 is down with error. Exit message: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug.
2020-02-27 15:36:21,125+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] add VM '5c1e9021-da7a-42b7-82db-57916bd5b399'(golden_env_mixed_virtio_0) to rerun treatment
2020-02-27 15:36:21,135+02 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] Rerun VM '5c1e9021-da7a-42b7-82db-57916bd5b399'. Called from VDS 'host_mixed_1'
From vdsm.log:

2020-02-27 16:12:34,725+0200 ERROR (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') The vm start process failed (vm:834)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 768, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2570, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirt.libvirtError: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug
2020-02-27 16:12:34,725+0200 INFO  (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') Changed state to Down: unsupported configuration: At least one numa node has to be configured when enabling memory hotplug (code=1) (vm:1592)
2020-02-27 16:12:34,730+0200 INFO  (vm/5c1e9021) [virt.vm] (vmId='5c1e9021-da7a-42b7-82db-57916bd5b399') Stopping connection (guestagent:441)

Expected results:
The VM starts successfully.

Additional info:
The host's lscpu output (note the single NUMA node):

[root@janus01 qemu]# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  2
Core(s) per socket:  16
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Stepping:            7
CPU MHz:             1000.451
BogoMIPS:            4600.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            22528K
NUMA node0 CPU(s):   0-31
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
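Below is a minimal sketch of the inconsistency libvirt is complaining about. It assumes only standard libvirt domain XML elements (<maxMemory>, <cpu>/<numa>/<cell>); the embedded XML and its values are illustrative, not the actual XML vdsm generated for this VM:

import xml.etree.ElementTree as ET

# Roughly the shape of the generated config: <maxMemory> is present
# (memory hotplug enabled) but there is no <numa> topology under <cpu>,
# which libvirt rejects at domain start.
GENERATED = """
<domain type='kvm'>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <vcpu current='16'>16</vcpu>
  <cpu mode='host-model'/>
</domain>
"""

dom = ET.fromstring(GENERATED)
memory_hotplug = dom.find('maxMemory') is not None
numa_cells = dom.findall('./cpu/numa/cell')
if memory_hotplug and not numa_cells:
    # This is the condition behind "At least one numa node has to be
    # configured when enabling memory hotplug".
    print('domain XML enables memory hotplug but defines no NUMA cell')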
As discussed, the NUMA part of the error is not irrelevant. Even VMs without an explicit NUMA topology are still configured with a single NUMA node, and qemu requires it for memory hotplug. The problem is that, since there are no offlined CPUs here, this default node is not created. Should be an easy fix (a sketch of the fix direction follows below).
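A hedged sketch of the fix direction described above, not the actual engine/vdsm patch; the function, parameter names, and cell representation are all hypothetical:

def build_default_numa_cells(user_cells, max_vcpus, memory_kib,
                             memory_hotplug_enabled):
    """Return the NUMA cells to emit under <cpu><numa> in the domain XML."""
    if user_cells:
        # An explicitly configured NUMA topology is passed through as-is.
        return user_cells
    if memory_hotplug_enabled:
        # The buggy path only created this default cell when some vCPUs were
        # offline; with 'Total Virtual CPUs' equal to the online CPU count
        # (16 == 16) no cell was emitted and libvirt failed as shown above.
        # The default cell has to be created unconditionally whenever memory
        # hotplug is enabled.
        return [{'id': 0,
                 'cpus': '0-{}'.format(max_vcpus - 1),
                 'memory': memory_kib}]
    return []

For this VM, build_default_numa_cells([], 16, 1048576, True) would yield a single cell spanning CPUs 0-15, which is enough to satisfy libvirt's check.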
Small addition: this also happens with the engine on el8 (http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-23/).
verified on http://bob-dr.lab.eng.brq.redhat.com/builds/4.4/rhv-4.4.0-26
This bugzilla is included in the oVirt 4.4.0 release, published on May 20th, 2020. Since the problem described in this bug report should be resolved in oVirt 4.4.0, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.