Description of problem:
Specifying a config like:

  <vcpu placement='auto' current='2'>12</vcpu>
  [...]
  <cpu>
    <numa>
      <cell id='0' cpus='0,2,4,6' memory='512000' unit='KiB'/>
      <cell id='1' cpus='1,3,5,7' memory='512000' unit='KiB'/>
    </numa>
  </cpu>

results in:

2019-06-06T11:43:21.655857Z qemu-system-x86_64: warning: CPU(s) not present in any NUMA nodes: CPU 8 [socket-id: 8, core-id: 0, thread-id: 0], CPU 9 [socket-id: 9, core-id: 0, thread-id: 0], CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11, core-id: 0, thread-id: 0]
2019-06-06T11:43:21.655870Z qemu-system-x86_64: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future

QEMU rightfully deprecated mapping CPUs that were not specified in the NUMA topology to some random node.

Version-Release number of selected component (if applicable):
libvirt-v5.4.0-166-gd23f5fa08c
qemu-v4.0.0-917-g8c1ecb5904

How reproducible:
Always; just don't specify all vCPUs in the NUMA topology.

Steps to Reproduce:
1. Define an XML with some vCPUs missing from the NUMA topology.
2. Start the VM.
3. Look at the VM log file.

Actual results:
Warnings are printed.

Expected results:
...

Additional info:
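For reference, the warning disappears once every vCPU up to maxvcpus is described by some NUMA cell. A sketch of one possible corrected layout (the assignment of vCPUs 8-11 below is illustrative, not taken from the report):

```xml
<vcpu placement='auto' current='2'>12</vcpu>
<cpu>
  <numa>
    <cell id='0' cpus='0,2,4,6,8,10' memory='512000' unit='KiB'/>
    <cell id='1' cpus='1,3,5,7,9,11' memory='512000' unit='KiB'/>
  </numa>
</cpu>
```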
(In reply to Peter Krempa from comment #0)
> QEMU rightfully deprecated mapping cpus which were not specified in the NUMA
> topology to some random node.

Yeah, I'd argue this situation is an application bug and libvirt doesn't need to work around it. If anything, we should report the error to the application before we even launch QEMU.
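The pre-launch check suggested above could be sketched as follows. This is a hypothetical, standalone validator (not libvirt code): it parses the domain XML, expands each cell's cpus= spec, and reports any vCPU up to maxvcpus that no NUMA cell covers. The function names are illustrative.

```python
# Hypothetical pre-launch validation sketch, not part of libvirt:
# check that every vCPU up to <vcpu> (maxvcpus) appears in some
# <cpu><numa><cell cpus='...'> set before QEMU is launched.
import xml.etree.ElementTree as ET

def parse_cpu_set(spec):
    """Expand a cpus= spec like '0,2,4-6' into a set of ints."""
    cpus = set()
    for part in spec.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def unmapped_vcpus(domain_xml):
    """Return the sorted vCPU ids (below maxvcpus) missing from all cells."""
    root = ET.fromstring(domain_xml)
    maxvcpus = int(root.find('vcpu').text)
    mapped = set()
    for cell in root.findall('./cpu/numa/cell'):
        mapped |= parse_cpu_set(cell.get('cpus'))
    return sorted(set(range(maxvcpus)) - mapped)

# The config from the report: 12 vCPUs, but only 0-7 in NUMA cells.
DOMAIN = """
<domain>
  <vcpu placement='auto' current='2'>12</vcpu>
  <cpu>
    <numa>
      <cell id='0' cpus='0,2,4,6' memory='512000' unit='KiB'/>
      <cell id='1' cpus='1,3,5,7' memory='512000' unit='KiB'/>
    </numa>
  </cpu>
</domain>
"""

print(unmapped_vcpus(DOMAIN))  # prints [8, 9, 10, 11]
```

With the reported config this flags vCPUs 8-11, matching the CPUs QEMU warns about, so the error could be raised to the application before the domain is ever started.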
The problem is that old apps may use the legacy APIs for modifying the number of vCPUs, in which case we will not update the NUMA topology. We must then either ban those APIs when NUMA is used or allow some kind of fallback.
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.