Bug 2203709 - Fail to set HMAT cache none associativity or none policy
Summary: Fail to set HMAT cache none associativity or none policy
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: liang cong
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-15 05:11 UTC by liang cong
Modified: 2023-11-07 09:42 UTC
CC List: 5 users

Fixed In Version: libvirt-9.4.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:31:17 UTC
Type: Bug
Target Upstream Version: 9.4.0
Embargoed:




Links:
System                   ID               Last Updated
Red Hat Issue Tracker    RHELPLAN-157187  2023-05-15 05:13:03 UTC
Red Hat Product Errata   RHSA-2023:6409   2023-11-07 08:31:49 UTC

Description liang cong 2023-05-15 05:11:22 UTC
Description of problem:
If the HMAT cache associativity or policy is set to 'none', virsh define silently drops that attribute, so an error is reported when the guest is started.



Version-Release number of selected component (if applicable):
# rpm -q qemu-kvm libvirt
qemu-kvm-8.0.0-2.el9.x86_64
libvirt-9.3.0-1.el9.x86_64


How reproducible:
100%

Steps to Reproduce:
1 Define a guest VM with the NUMA settings below (a virsh define sketch follows the XML):
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity='none' policy="writethrough">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>
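
A minimal sketch of applying this configuration, assuming the complete domain XML containing the <numa> block above is saved as vm1.xml (the file name is hypothetical); virsh edit vm1 on an existing guest works equally well:

# virsh define vm1.xml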

2 Check the config XML with virsh dumpxml and note that the associativity attribute has been dropped:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" policy="writethrough">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

3 Start the guest and observe the error:
# virsh start vm1
error: Failed to start domain 'vm1'
error: XML error: Missing 'associativity' attribute in cache element for NUMA node 0

4 Define a guest VM with the NUMA settings below:
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity='direct' policy='none'>
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

5 Check the config XML with virsh dumpxml and note that the policy attribute has been dropped:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0-1" memory="1048576" unit="KiB">
    <cache level="1" associativity="direct">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="2-3" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

6 Start the guest and observe the error:
# virsh start vm1
error: Failed to start domain 'vm1'
error: XML error: Invalid cache policy '(null)'

Actual results:
An HMAT cache associativity or policy of 'none' is dropped by virsh define, and the guest cannot be started.

Expected results:
An HMAT cache associativity or policy of 'none' can be set and is preserved across define and start.

Additional info:
1. According to the libvirt documentation, an HMAT cache associativity or policy of 'none' is supported
(https://libvirt.org/formatdomain.html#acpi-heterogeneous-memory-attribute-table); if that is not the case, the documentation should be updated instead.
2. According to the qemu documentation, at least 'none' associativity is supported: "associativity is the cache associativity, the possible value is ‘none/direct(direct-mapped)/complex(complex cache indexing)’."
https://www.qemu.org/docs/master/system/invocation.html (an illustrative qemu invocation is sketched after this list)
3. This issue can also be seen with the RHEL 9.2 libvirt build: libvirt-9.0.0-10.1.el9_2.x86_64
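
For illustration only, a qemu command line along the lines of the HMAT example in the qemu invocation documentation linked above, adjusted to request 'none' associativity and 'none' policy, would look roughly like the fragment below. The exact values are assumptions (not taken from this bug), and additional hmat-lb entries or other tuning may be needed for qemu to accept it:

# qemu-kvm -machine q35,hmat=on -m 2G -smp 2,sockets=2,maxcpus=2 \
    -object memory-backend-ram,size=1G,id=m0 \
    -object memory-backend-ram,size=1G,id=m1 \
    -numa node,nodeid=0,memdev=m0 \
    -numa node,nodeid=1,memdev=m1,initiator=0 \
    -numa cpu,node-id=0,socket-id=0 \
    -numa cpu,node-id=0,socket-id=1 \
    -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=5 \
    -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=200M \
    -numa hmat-cache,node-id=0,size=10K,level=1,associativity=none,policy=none,line=8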

Comment 1 Peter Krempa 2023-05-16 08:22:08 UTC
There's a mistake in the formatter code: it skips the 'none' value when formatting, while the parser requires the attribute. The bug triggers at start time because the definition is copied via a format + parse cycle.
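
A rough shell-level illustration of that format/parse asymmetry (an assumption inferred from the errors above, not re-tested in this bug): dumping the inactive XML of the vm1 guest from the description and feeding it back to virsh define should be rejected with the same "Missing 'associativity' attribute" parse error seen in step 3, because the formatter has already dropped the value the parser insists on:

# virsh dumpxml --inactive vm1 > copy.xml
# virsh define copy.xml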

Comment 2 Peter Krempa 2023-05-18 10:51:03 UTC
Fixed upstream:

commit af621caa6bd479ca7666bcc6254e0043466b7b00
Author: Peter Krempa <pkrempa>
Date:   Tue May 16 10:22:39 2023 +0200

    conf: numa: Allow formatting 'none' values for 'associativity' and 'policy' of cache
    
    The parser makes the values mandatory and also the qemu code implements
    actions for those values. The formatter skips them though. Since
    format+parse is used to copy the XML at startup a definition with those
    values can't be started.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2203709
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Michal Privoznik <mprivozn>

commit 0d5fc7219ae605959e14d877865793f48c729f5e
Author: Peter Krempa <pkrempa>
Date:   Tue May 16 10:19:42 2023 +0200

    virDomainNumaDefNodeCacheParseXML: Refactor parsing of cache XML
    
    Use virXMLProp* helpers to simplify the code.
    
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Michal Privoznik <mprivozn>

v9.3.0-78-gaf621caa6b

Comment 3 liang cong 2023-05-23 09:03:58 UTC
Preverified on upstream libvirt v9.3.0-110-g3b6d69237f

Test steps:
1 Define a guest VM with the NUMA settings below:
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity='none' policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

2 Start the guest
# virsh start vm1
Domain 'vm1' started

3 Check the config XML with virsh dumpxml:
# virsh dumpxml vm1 --xpath '//numa'
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity="none" policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

4 Check the HMAT info with dmesg (a decode of the Attrs value follows the output):
# dmesg | grep hmat
[    0.783293] acpi/hmat: Memory Flags:0001 Processor Domain:0 Memory Domain:0
[    0.784178] acpi/hmat: Memory Flags:0001 Processor Domain:1 Memory Domain:1
[    0.785175] acpi/hmat: Locality: Flags:01 Type:Read Latency Initiator Domains:2 Target Domains:2 Base:1000
[    0.786309] acpi/hmat:   Initiator-Target[0-0]:5 nsec
[    0.787137] acpi/hmat:   Initiator-Target[0-1]:0 nsec
[    0.787933] acpi/hmat:   Initiator-Target[1-0]:0 nsec
[    0.788722] acpi/hmat:   Initiator-Target[1-1]:0 nsec
[    0.790134] acpi/hmat: Locality: Flags:01 Type:Access Bandwidth Initiator Domains:2 Target Domains:2 Base:8
[    0.791268] acpi/hmat:   Initiator-Target[0-0]:200 MB/s
[    0.792131] acpi/hmat:   Initiator-Target[0-1]:0 MB/s
[    0.792933] acpi/hmat:   Initiator-Target[1-0]:0 MB/s
[    0.793705] acpi/hmat:   Initiator-Target[1-1]:0 MB/s
[    0.794129] acpi/hmat: Cache: Domain:0 Size:10240 Attrs:00080011 SMBIOS Handles:0
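
As a cross-check (based on the ACPI HMAT memory-side-cache "Cache Attributes" bit layout, which is not spelled out in this bug): the Attrs:00080011 value above decodes to cache line size 8, write policy 0 (none), associativity 0 (none), cache level 1, total levels 1, matching the XML from step 1. A quick decode:

# printf 'levels=%d level=%d assoc=%d policy=%d line=%d\n' \
    $(( 0x00080011 & 0xf )) $(( (0x00080011 >> 4) & 0xf )) \
    $(( (0x00080011 >> 8) & 0xf )) $(( (0x00080011 >> 12) & 0xf )) \
    $(( (0x00080011 >> 16) & 0xffff ))
levels=1 level=1 assoc=0 policy=0 line=8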

Comment 4 liang cong 2023-07-25 06:53:41 UTC
Verified on build:
# rpm -q libvirt qemu-kvm
libvirt-9.5.0-3.el9.x86_64
qemu-kvm-8.0.0-9.el9.x86_64

Test steps:
1 Define a guest VM with the NUMA settings below:
<numa>
  <cell id="0" cpus="0" memory="1048576" unit="KiB">
    <cache level="1" associativity='none' policy="none">
      <size value="10" unit="KiB"/>
      <line value="8" unit="B"/>
    </cache>
  </cell>
  <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
  <interconnects>
    <latency initiator="0" target="0" cache="1" type="read" value="5"/>
    <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
  </interconnects>
</numa>

2 Start the guest
# virsh start vm1
Domain 'vm1' started

3 Check the config XML with virsh dumpxml:
# virsh dumpxml vm1 | xmllint -xpath '//numa' -
<numa>
      <cell id="0" cpus="0" memory="1048576" unit="KiB">
        <cache level="1" associativity="none" policy="none">
          <size value="10" unit="KiB"/>
          <line value="8" unit="B"/>
        </cache>
      </cell>
      <cell id="1" cpus="1" memory="1048576" unit="KiB"/>
      <interconnects>
        <latency initiator="0" target="0" cache="1" type="read" value="5"/>
        <bandwidth initiator="0" target="0" cache="1" type="access" value="204800" unit="KiB"/>
      </interconnects>
    </numa>

4 Check the HMAT info with dmesg in the guest:
# dmesg | grep hmat
[    0.873137] acpi/hmat: Memory Flags:0001 Processor Domain:0 Memory Domain:0
[    0.873139] acpi/hmat: Memory Flags:0001 Processor Domain:1 Memory Domain:1
[    0.873140] acpi/hmat: Locality: Flags:01 Type:Read Latency Initiator Domains:2 Target Domains:2 Base:1000
[    0.873141] acpi/hmat:   Initiator-Target[0-0]:5 nsec
[    0.873142] acpi/hmat:   Initiator-Target[0-1]:0 nsec
[    0.873142] acpi/hmat:   Initiator-Target[1-0]:0 nsec
[    0.873143] acpi/hmat:   Initiator-Target[1-1]:0 nsec
[    0.873143] acpi/hmat: Locality: Flags:01 Type:Access Bandwidth Initiator Domains:2 Target Domains:2 Base:8
[    0.873145] acpi/hmat:   Initiator-Target[0-0]:200 MB/s
[    0.873145] acpi/hmat:   Initiator-Target[0-1]:0 MB/s
[    0.873146] acpi/hmat:   Initiator-Target[1-0]:0 MB/s
[    0.873146] acpi/hmat:   Initiator-Target[1-1]:0 MB/s
[    0.873147] acpi/hmat: Cache: Domain:0 Size:10240 Attrs:00080011 SMBIOS Handles:0


5 In the guest, check the NUMA HMAT related info with virsh capabilities:
# virsh capabilities | xmllint --xpath '//cells' -
...
     <cache level="1" associativity="none" policy="none">
            <size value="10" unit="KiB"/>
            <line value="8" unit="B"/>
          </cache>
...

Comment 8 liang cong 2023-08-07 03:25:21 UTC
Marking it verified per comment 4.

Comment 10 errata-xmlrpc 2023-11-07 08:31:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: libvirt security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6409

