Bug 2188878 - CPU affinity info from vcpuinfo and vcpupin differs after host CPU comes back online
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libvirt
Version: 8.8
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Martin Kletzander
QA Contact: Luyao Huang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-23 03:10 UTC by liang cong
Modified: 2023-07-27 16:08 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHELPLAN-155448 (last updated 2023-04-23 03:10:55 UTC)

Description liang cong 2023-04-23 03:10:07 UTC
Description of problem:
Take a host CPU offline and check the CPU affinity with virsh vcpuinfo and virsh vcpupin. Then bring the CPU back online and check again: the affinity reported by vcpuinfo stays as it was while the CPU was offline, and it remains unchanged even after the guest is destroyed and started again. The affinity reported by virsh vcpupin, however, follows the CPU's online status.
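For a quick side-by-side check (guest name 'vm1' as in the steps below):

# virsh vcpuinfo vm1 | grep 'CPU Affinity'
# virsh vcpupin vm1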


Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-19.module+el8.8.0+18453+e0bf0d1d.x86_64
qemu-kvm-6.2.0-32.module+el8.8.0+18361+9f407f6e.x86_64

How reproducible:
100%

Steps to Reproduce:
1 Start a guest VM with the vcpu setting below:
<vcpu placement='static'>2</vcpu>

2 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.7s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.6s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

3 Take one host cpu offline
# echo 0 > /sys/devices/system/cpu/cpu2/online
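To confirm the CPU really went offline, the kernel's summary mask can also be checked; on this 48-CPU host it should now read 0-1,3-47:

# cat /sys/devices/system/cpu/online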

4 Check the cpu affinity with vcpuinfo and vcpupin again
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.7s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-1,3-47
 1      0-1,3-47

5 Check the cgroup cpuset.cpus
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/emulator/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus
0-1,3-47
0-1,3-47
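Instead of cat-ing each level by hand, the whole cpuset hierarchy under machine.slice can be swept in one command (equivalent to the checks above):

# find /sys/fs/cgroup/cpuset/machine.slice -name cpuset.cpus -exec grep -H . {} +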

6 Bring the cpu back online
# echo 1 > /sys/devices/system/cpu/cpu2/online

7 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

8 Destroy and start the guest
# virsh destroy vm1
Domain 'vm1' destroyed

# virsh start vm1
Domain 'vm1' started

9 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            35
State:          running
CPU time:       1.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            9
State:          running
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

10 Check cgroup cpuset.cpus
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-47
# cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus
0-1,3-47
0-1,3-47

Actual results:
CPU affinity info from vcpuinfo and vcpupin differs after the CPU comes back online

Expected results:
CPU affinity info from vcpuinfo and vcpupin should be the same

Additional info:
1 This issue cannot be reproduced on rhel9.2 or rhel9.3 (cgroup v2)
2 virsh emulatorpin shows the same behavior as vcpupin (a quick check is sketched below)
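For point 2, the emulator thread's two views can be compared like this (the PID file path is the one libvirt's qemu driver normally uses, assumed here):

# virsh emulatorpin vm1
# taskset -cp $(cat /run/libvirt/qemu/vm1.pid)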

Comment 1 Michal Privoznik 2023-06-01 14:09:18 UTC
(In reply to liang cong from comment #0)

> 3 Make one host cpu offline
> # echo 0 > /sys/devices/system/cpu/cpu2/online
> 

> 5 Check the cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47

> 6 Make the cpu back online
> # echo 1 > /sys/devices/system/cpu/cpu2/online

> 9 Check cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47

1: this ^^^

> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47

Now, [1] is the source of the problem. Off-lining a CPU removes it from the cpuset controller (expected); bringing it back online does not add it back (okay). But since all machines are started under machine.slice (which is managed by systemd), and its cpuset list is still missing the formerly offlined CPU, all child cpusets are missing it too (because of the hierarchical design of cgroups). Now, there's a difference between affinity and pinning: the former sets the CPUs a process WANTS to run on, the latter the CPUs it CAN run on. 'virsh vcpuinfo' displays the affinity and 'virsh vcpupin' displays the pinning. So maybe the real bug here is the misleading "CPU Affinity" label in the output of the 'vcpupin' command?
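If that analysis is right, a manual workaround on cgroup v1 would be to re-widen cpuset.cpus top-down after the CPU comes back, parents before children (an untested sketch using the paths from comment 0):

# echo 0-47 > /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
# for f in /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/{,libvirt/,libvirt/emulator/,libvirt/vcpu0/,libvirt/vcpu1/}cpuset.cpus; do echo 0-47 > "$f"; done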

Comment 2 Martin Kletzander 2023-06-02 10:23:30 UTC
I think the issue is that the two commands get their information from different sources.  One probably uses sched_getaffinity() and the other browses cgroups.  The question is what we should report.  I would even be fine if, after onlining the CPU, the affinity got updated and the pinning did not, i.e. the other way around.  But I guess we'll have to either pick one or report a union.
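If that is right, the two sources can be compared directly: the scheduler-side mask shows up as Cpus_allowed_list in /proc for every thread of the QEMU process, and the cgroup side is the cpuset files quoted in comment 0. A sketch (the PID file path is an assumption):

# grep Cpus_allowed_list /proc/$(cat /run/libvirt/qemu/vm1.pid)/task/*/status
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus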

Comment 3 liang cong 2023-06-09 07:42:45 UTC
There is another scenario:
libvirt version:
# rpm -q libvirt
libvirt-9.3.0-2.el9.x86_64

Test steps:
1. Start numad service
# systemctl start numad

2. Start a guest with the below config xml:
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
...
<vcpu placement='static'>4</vcpu>

3. Check the cpu affinity with vcpuinfo and vcpupin right after the guest is started: both report the same affinity, covering all available physical CPUs.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            1
State:          running
CPU time:       5.0s
CPU Affinity:   0-5 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           2
CPU:            3
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           3
CPU:            0
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5


4. Wait around 3 minutes, then check the cpu affinity with vcpuinfo and vcpupin again: the two now report different affinities.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            2
State:          running
CPU time:       24.6s
CPU Affinity:   0-2 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       9.9s
CPU Affinity:   0-2 (out of 6)

VCPU:           2
CPU:            0
State:          running
CPU time:       9.7s
CPU Affinity:   0-2 (out of 6)

VCPU:           3
CPU:            1
State:          running
CPU time:       9.4s
CPU Affinity:   0-2 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5

5. Check the cgroup info:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/emulator/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu0/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu1/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu2/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu3/cpuset.cpus


Hi Martin, could you clarify whether this scenario has the same root cause as this bug? If not, I will create a separate bug for it. Thanks.
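One side note on the step 5 output above: on cgroup v2 an empty cpuset.cpus means no explicit restriction at that level, so the mask actually in effect is better read from the read-only cpuset.cpus.effective files, e.g.:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus.effective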

Comment 4 Martin Kletzander 2023-07-27 15:36:43 UTC
(In reply to liang cong from comment #3)
> There is another scenario:

[...]

> Hi Martin, could you clarify whether this scenario has the same root cause
> as this bug? If not, I will create a separate bug for it. Thanks.

Oh, that's right, that's another way to change the pinning without libvirt knowing.  The fix will combine both anyway, so I think keeping it here is fine.

Comment 5 Martin Kletzander 2023-07-27 16:08:28 UTC
So actually, I found that on a newer kernel (particularly 6.4.5) this is no longer an issue.  I wonder, since this is a long-standing issue, whether we could just wait for the fix in the kernel.  I'll try to figure this out, just FYI.

