Description of problem:
If the HMAT cache associativity or policy is set to 'none', the attribute is silently dropped by 'virsh define', so an error is reported when the guest is started.

Version-Release number of selected component (if applicable):
# rpm -q qemu-kvm libvirt
qemu-kvm-8.0.0-2.el9.x86_64
libvirt-9.3.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Define a guest vm with the below <numa> setting:
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="writethrough">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

2. Check the config XML with 'virsh dumpxml' and note that the 'associativity' attribute has been dropped:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" policy="writethrough">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

3. Start the guest and hit the error:
# virsh start vm1
error: Failed to start domain 'vm1'
error: XML error: Missing 'associativity' attribute in cache element for NUMA node 0

4. Define a guest vm with the below <numa> setting:
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity="direct" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

5. Check the config XML with 'virsh dumpxml' and note that the 'policy' attribute has been dropped:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity="direct">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

6. Start the guest and hit the error:
# virsh start vm1
error: Failed to start domain 'vm1'
error: XML error: Invalid cache policy '(null)'

Actual results:
An HMAT cache associativity or policy of 'none' is dropped by 'virsh define' and the guest cannot be started.

Expected results:
An HMAT cache associativity or policy of 'none' can be set and is retained.

Additional info:
1. Per the libvirt docs, an HMAT cache associativity or policy of 'none' is supported (https://libvirt.org/formatdomain.html#acpi-heterogeneous-memory-attribute-table); if that is not the intended behavior, the docs should be updated instead.
2. Per the qemu docs, at least an associativity of 'none' is supported: "associativity is the cache associativity, the possible value is 'none/direct(direct-mapped)/complex(complex cache indexing)'." (https://www.qemu.org/docs/master/system/invocation.html) See the sample command line after this list.
3. This issue can also be seen on RHEL 9.2 with libvirt-9.0.0-10.1.el9_2.x86_64.
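Following up on additional info item 2: QEMU accepts 'none' directly, so for the intended configuration libvirt would be expected to generate roughly the following hmat-cache argument (an illustrative guess at the command line based on the qemu docs, not captured from an actual run):

# qemu-kvm ... -numa hmat-cache,node-id=0,size=10K,level=1,associativity=none,policy=none,line=8 ...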
There's a mistake in the formatter code: it skips the 'none' value when formatting. The bug triggers because the parser requires the attribute to be present, and at startup we copy the definition via a format + parse round trip.
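To illustrate the failure mode, here is a minimal standalone sketch of the pattern (simplified, with hypothetical names; not the actual libvirt source): the formatter treats the 'none' value as "attribute unset" and omits it, while the parser makes the attribute mandatory, so the format + parse round trip at startup fails.

/* Illustrative sketch of the bug pattern only -- simplified and with
 * hypothetical names, not the actual libvirt code. */
#include <stdio.h>
#include <string.h>

typedef enum {
    CACHE_ASSOC_NONE = 0,   /* 'none' is a valid, user-visible value */
    CACHE_ASSOC_DIRECT,
    CACHE_ASSOC_COMPLEX,
} cacheAssociativity;

static const char *assocNames[] = { "none", "direct", "complex" };

/* Formatter: BUG -- 'none' (== 0) is treated as "not set" and silently
 * dropped, mirroring the skip in the libvirt formatter. */
static void formatCache(char *buf, size_t len, cacheAssociativity assoc)
{
    if (assoc != CACHE_ASSOC_NONE)
        snprintf(buf, len, "<cache associativity='%s'/>", assocNames[assoc]);
    else
        snprintf(buf, len, "<cache/>");
}

/* Parser: requires the attribute, so XML produced by formatCache() for
 * a 'none' cache cannot be parsed back at guest startup. */
static int parseCache(const char *xml)
{
    if (strstr(xml, "associativity=") == NULL) {
        fprintf(stderr, "error: Missing 'associativity' attribute\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    char xml[128];

    formatCache(xml, sizeof(xml), CACHE_ASSOC_NONE);
    printf("formatted: %s\n", xml);

    /* Startup copies the definition via format + parse; with 'none'
     * the round trip fails. */
    return parseCache(xml) == 0 ? 0 : 1;
}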
Fixed upstream:

commit af621caa6bd479ca7666bcc6254e0043466b7b00
Author: Peter Krempa <pkrempa>
Date:   Tue May 16 10:22:39 2023 +0200

    conf: numa: Allow formatting 'none' values for 'associativity' and 'policy' of cache

    The parser makes the values mandatory and also the qemu code
    implements actions for those values. The formatter skips them though.

    Since format+parse is used to copy the XML at startup a definition
    with those values can't be started.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2203709
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Michal Privoznik <mprivozn>

commit 0d5fc7219ae605959e14d877865793f48c729f5e
Author: Peter Krempa <pkrempa>
Date:   Tue May 16 10:19:42 2023 +0200

    virDomainNumaDefNodeCacheParseXML: Refactor parsing of cache XML

    Use virXMLProp* helpers to simplify the code.

    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Michal Privoznik <mprivozn>

v9.3.0-78-gaf621caa6b
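In terms of the sketch in the earlier comment, the fix amounts to formatting the attribute unconditionally for any valid value, including 'none' (again with hypothetical names, mirroring the first commit's intent rather than quoting the actual patch):

/* Fixed formatter, in terms of the illustrative sketch above. */
static void formatCacheFixed(char *buf, size_t len, cacheAssociativity assoc)
{
    /* Format 'none' explicitly instead of treating it as unset, so the
     * format + parse round trip at startup preserves the value. */
    snprintf(buf, len, "<cache associativity='%s'/>", assocNames[assoc]);
}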
Preverified on upstream libvirt v9.3.0-110-g3b6d69237f.

Test steps:
1. Define a guest vm with the below <numa> setting:
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

2. Start the guest:
# virsh start vm1
Domain 'vm1' started

3. Check the config XML with 'virsh dumpxml' -- both attributes are retained:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

4. Check the hmat info with dmesg in the guest:
# dmesg | grep hmat
[ 0.783293] acpi/hmat: Memory Flags:0001 Processor Domain:0 Memory Domain:0
[ 0.784178] acpi/hmat: Memory Flags:0001 Processor Domain:1 Memory Domain:1
[ 0.785175] acpi/hmat: Locality: Flags:01 Type:Read Latency Initiator Domains:2 Target Domains:2 Base:1000
[ 0.786309] acpi/hmat: Initiator-Target[0-0]:5 nsec
[ 0.787137] acpi/hmat: Initiator-Target[0-1]:0 nsec
[ 0.787933] acpi/hmat: Initiator-Target[1-0]:0 nsec
[ 0.788722] acpi/hmat: Initiator-Target[1-1]:0 nsec
[ 0.790134] acpi/hmat: Locality: Flags:01 Type:Access Bandwidth Initiator Domains:2 Target Domains:2 Base:8
[ 0.791268] acpi/hmat: Initiator-Target[0-0]:200 MB/s
[ 0.792131] acpi/hmat: Initiator-Target[0-1]:0 MB/s
[ 0.792933] acpi/hmat: Initiator-Target[1-0]:0 MB/s
[ 0.793705] acpi/hmat: Initiator-Target[1-1]:0 MB/s
[ 0.794129] acpi/hmat: Cache: Domain:0 Size:10240 Attrs:00080011 SMBIOS Handles:0
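A note on the dmesg output in step 4: assuming the "Cache Attributes" bit layout from the ACPI HMAT Memory Side Cache Information Structure (this layout is my reading of the spec, not taken from the libvirt/qemu sources), Attrs:00080011 decodes to exactly the configured values: level 1, associativity none, policy none, line 8 B. A small standalone decoder sketch:

/* Decode the 32-bit HMAT "Cache Attributes" field printed by the kernel
 * (e.g. Attrs:00080011). Assumed field layout per the ACPI spec:
 *   bits  3:0   total cache levels
 *   bits  7:4   cache level
 *   bits 11:8   associativity (0=none, 1=direct, 2=complex)
 *   bits 15:12  write policy  (0=none, 1=write-back, 2=write-through)
 *   bits 31:16  cache line size in bytes
 */
#include <stdio.h>

int main(void)
{
    unsigned int attrs = 0x00080011u;  /* value from the dmesg output above */

    printf("total levels:  %u\n", attrs & 0xfu);
    printf("level:         %u\n", (attrs >> 4) & 0xfu);
    printf("associativity: %u (0 == none)\n", (attrs >> 8) & 0xfu);
    printf("write policy:  %u (0 == none)\n", (attrs >> 12) & 0xfu);
    printf("line size:     %u B\n", attrs >> 16);
    return 0;
}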
Verified on build:
# rpm -q libvirt qemu-kvm
libvirt-9.5.0-3.el9.x86_64
qemu-kvm-8.0.0-9.el9.x86_64

Test steps:
1. Define a guest vm with the below <numa> setting:
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

2. Start the guest:
# virsh start vm1
Domain 'vm1' started

3. Check the config XML with 'virsh dumpxml' -- both attributes are retained:
# virsh dumpxml vm1 | xmllint -xpath '//numa' -
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

4. Check the hmat info with dmesg in the guest:
# dmesg | grep hmat
[ 0.873137] acpi/hmat: Memory Flags:0001 Processor Domain:0 Memory Domain:0
[ 0.873139] acpi/hmat: Memory Flags:0001 Processor Domain:1 Memory Domain:1
[ 0.873140] acpi/hmat: Locality: Flags:01 Type:Read Latency Initiator Domains:2 Target Domains:2 Base:1000
[ 0.873141] acpi/hmat: Initiator-Target[0-0]:5 nsec
[ 0.873142] acpi/hmat: Initiator-Target[0-1]:0 nsec
[ 0.873142] acpi/hmat: Initiator-Target[1-0]:0 nsec
[ 0.873143] acpi/hmat: Initiator-Target[1-1]:0 nsec
[ 0.873143] acpi/hmat: Locality: Flags:01 Type:Access Bandwidth Initiator Domains:2 Target Domains:2 Base:8
[ 0.873145] acpi/hmat: Initiator-Target[0-0]:200 MB/s
[ 0.873145] acpi/hmat: Initiator-Target[0-1]:0 MB/s
[ 0.873146] acpi/hmat: Initiator-Target[1-0]:0 MB/s
[ 0.873146] acpi/hmat: Initiator-Target[1-1]:0 MB/s
[ 0.873147] acpi/hmat: Cache: Domain:0 Size:10240 Attrs:00080011 SMBIOS Handles:0

5. In the guest, check the NUMA HMAT related info with 'virsh capabilities':
# virsh capabilities | xmllint --xpath '//cells' -
...
<cache level="1" associativity="none" policy="none">
  <size value="10" unit="KiB"/>
  <line value="8" unit="B"/>
</cache>
...
Marking it verified per comment 4.