Bug 2155622 - L1 Cache info not correct
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.10.9
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.14.0
Assignee: Barak
QA Contact: Kedar Bidarkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-12-21 16:24 UTC by Nils Koenig
Modified: 2023-12-23 04:25 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-24 10:02:30 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
libvirt domain xml (19.34 KB, text/plain)
2022-12-21 16:24 UTC, Nils Koenig
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CNV-23525 0 None None None 2022-12-21 16:33:22 UTC

Description Nils Koenig 2022-12-21 16:24:34 UTC
Created attachment 1933987 [details]
libvirt domain xml

Description of problem:

The information which CPUs and their hyperthread share which cache seems to be wrong in the guest.


Version-Release number of selected component (if applicable):

oc version
Client Version: 4.10.45
Server Version: 4.10.45
Kubernetes Version: v1.23.12+8a6bfe4


How reproducible:

Guest

# lscpu --all --extended

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      0    1:1:0:0       yes
2   0    0      1    2:2:1:0       yes
3   0    0      1    3:3:1:0       yes
4   0    0      2    4:4:2:0       yes
5   0    0      2    5:5:2:0       yes

Dom XML

    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='113'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='114'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='115'/>


Bare metal host

lscpu --all --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    4300.0000 1000.0000
1   0    0      1    1:1:1:0       yes    4300.0000 1000.0000
2   0    0      2    2:2:2:0       yes    4300.0000 1000.0000
3   0    0      3    3:3:3:0       yes    4300.0000 1000.0000
...
112 0    0      0    0:0:0:0       yes    4300.0000 1000.0000
113 0    0      1    1:1:1:0       yes    4300.0000 1000.0000
114 0    0      2    2:2:2:0       yes    4300.0000 1000.0000
115 0    0      3    3:3:3:0       yes    4300.0000 1000.0000


Note that on the hardware, a CPU and its hyperthread are indicated to share all caches (e.g. cpu1 and cpu113 both show 1:1:1:0).

In the guest, the corresponding cpu0 and cpu1 report different L1 caches: 0:0:0:0 vs 1:1:0:0.

I can't tell whether that is a real issue or just cosmetic, but it is definitely different and should be corrected.
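To make the mismatch concrete, here is a minimal Python sketch (not part of the original report) that derives which guest vCPUs should share caches, based on the vcpupin mapping from the domain XML and the host `lscpu --all --extended` output above:

```python
# Sketch: derive which guest vCPUs *should* share caches, given the
# vcpupin mapping and the host cache IDs reported in this bug.

# vcpu -> host CPU, from the domain XML vcpupin entries
pinning = {0: 1, 1: 113, 2: 2, 3: 114, 4: 3, 5: 115}

# host CPU -> (L1d, L1i, L2, L3) cache IDs, from host lscpu
host_cache = {
    1: (1, 1, 1, 0), 113: (1, 1, 1, 0),
    2: (2, 2, 2, 0), 114: (2, 2, 2, 0),
    3: (3, 3, 3, 0), 115: (3, 3, 3, 0),
}

def expected_sharing(pinning, host_cache):
    """Group guest vCPUs by the host cache set they are pinned onto."""
    groups = {}
    for vcpu, hcpu in pinning.items():
        groups.setdefault(host_cache[hcpu], []).append(vcpu)
    return sorted(groups.values())

print(expected_sharing(pinning, host_cache))
# The pinned siblings (0,1), (2,3), (4,5) land on the same host
# L1d/L1i/L2, yet the guest lscpu shows each vCPU with its own L1.
```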

Comment 2 Barak 2023-08-21 15:19:43 UTC
I'm not really sure whether the cache info within the domain has any meaning. The guest VMs are provided with virtualized CPU resources by the host, and these virtualized resources do not necessarily match the physical resources of the host machine.

For instance, we can have 1 CPU on the host that is mapped to 10 virtual CPUs.

Jiri, do you know if it is possible to configure how the domain sees the caches via the domain XML?

Comment 3 Barak 2023-08-24 10:02:30 UTC
Upon further investigation, I've found additional information regarding the matter. In the past, it was revealed that allocating 16 MB of L3 cache to the guest CPU (achieved through the -cpu ... l3-cache=on parameter on the QEMU command line) led to a 15% performance enhancement (https://gitlab.com/qemu-project/qemu/-/commit/14c985c).

Subsequent to conversations with the libvirt team, it has come to my attention that there exists no method to configure the guest view of the CPU's L1 and L2 caches via the guest XML.

It's crucial to understand that in this scenario, the physical CPU effectively shares its L1 and L2 caches. This behavior, however, is not present through the `lscpu --all --extended` command within the guest environment.

Closing this as not a bug.
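For anyone verifying this behavior, the cache-sharing topology the kernel reports (the same data lscpu prints) can be read directly from sysfs, inside the guest or on the host. A minimal sketch, assuming a Linux system with the standard sysfs cacheinfo paths; it returns an empty mapping where those directories are absent:

```python
# Sketch: read per-CPU cache sharing from Linux sysfs cacheinfo.
import glob
import os

def cache_sharing(cpu=0):
    """Map 'L<level> <type>' -> CPUs sharing that cache, for one CPU."""
    base = f"/sys/devices/system/cpu/cpu{cpu}/cache"
    result = {}
    for index in sorted(glob.glob(os.path.join(base, "index*"))):
        try:
            with open(os.path.join(index, "level")) as f:
                level = f.read().strip()
            with open(os.path.join(index, "type")) as f:
                ctype = f.read().strip()
            with open(os.path.join(index, "shared_cpu_list")) as f:
                shared = f.read().strip()
        except OSError:
            continue
        result[f"L{level} {ctype}"] = shared
    return result

# e.g. on the host above, cpu1's L1d entry would list "1,113";
# in the affected guest, vcpu0's L1d lists only "0".
print(cache_sharing(0))
```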

Comment 4 Daniel Berrangé 2023-08-24 14:36:01 UTC
(In reply to Barak from comment #3)
> Upon further investigation, I've found additional information regarding the
> matter. In the past, it was revealed that allocating 16MB of L3 cache to the
> guest CPU (achieved through the use of the -cpu ... l3-cache=on parameter in
> the QEMU command-line) led to a 15% performance enhancement(
> https://gitlab.com/qemu-project/qemu/-/commit/14c985c )

NB, this does not actually allocate 16 MB of L3 cache. It just tells the
guest OS that its CPU model has 16 MB of L3 cache. Whether that is
accurate or not depends on how much L3 the current host CPUs have :-)

> Subsequent to conversations with the libvirt team, it has come to my
> attention that there exists no method to configure the guest view of the
> CPU's L1 and L2 caches via the guest XML.

Correct, there is no direct knob for configuring the exact size of
the L1/L2/L3 caches.

> It's crucial to understand that in this scenario, the physical CPU
> effectively shares its L1 and L2 caches. This behavior, however, is not
> present through the `lscpu --all --extended` command within the guest
> environment.

The challenge with attempting to make the guest cache sizes match
the host cache sizes is that the guest can migrate to other hosts
with different cache sizes. So as a default, it makes sense for
QEMU to lie and expose an arbitrary cache size, to keep it portable
across migration.

If the VM is using the host-passthrough CPU model, then it starts to
make more sense to try to match the L1/L2 cache info between
host and guest, except that guest CPUs float freely across
host CPUs. So you can say two guest CPUs share a cache, but if one
of those guest CPUs has been moved to another host socket, that
guest cache info becomes inaccurate once more.

If the VM does strict 1:1 CPU pinning for host:guest AND is using
host-passthrough CPU model, then it makes total sense to expose
the host cache info to the guest OS.

This is possible using

<cpu mode='host-passthrough' migratable='off'>
  <cache mode='passthrough'/>
</cpu>


See also:

  https://libvirt.org/formatdomain.html#cpu-model-and-topology

Comment 5 Red Hat Bugzilla 2023-12-23 04:25:09 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

