
Bug 2188878

Summary: Cpu affinity info from vcpuinfo and vcpupin are different after host cpu back online
Product: Red Hat Enterprise Linux 8
Component: libvirt
Version: 8.8
Hardware: x86_64
OS: Unspecified
Status: CLOSED MIGRATED
Severity: medium
Priority: unspecified
Target Milestone: rc
Reporter: liang cong <lcong>
Assignee: Martin Kletzander <mkletzan>
QA Contact: Luyao Huang <lhuang>
CC: jsuchane, lmen, mkletzan, mprivozn, virt-maint
Keywords: MigratedToJIRA, Triaged
Flags: pm-rhel: mirror+
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2023-09-22 17:54:27 UTC

Description liang cong 2023-04-23 03:10:07 UTC
Description of problem:
Take a host CPU offline and check the vCPU affinity with virsh vcpuinfo and virsh vcpupin. Then bring the host CPU back online and check again: the affinity reported by vcpuinfo still reflects the offlined CPU, and it stays that way even after the guest is destroyed and started again. The affinity reported by virsh vcpupin, by contrast, follows the CPU's online status.


Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-19.module+el8.8.0+18453+e0bf0d1d.x86_64
qemu-kvm-6.2.0-32.module+el8.8.0+18361+9f407f6e.x86_64

How reproducible:
100%

Steps to Reproduce:
1 Start a guest vm with the following vcpu setting:
<vcpu placement='static'>2</vcpu>

2 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.7s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.6s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

3 Make one host cpu offline
# echo 0 > /sys/devices/system/cpu/cpu2/online
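
To confirm the kernel's view after this step, the standard sysfs files can be read directly (a hedged aside; the expected values assume the 48-CPU host used in this session):

# cat /sys/devices/system/cpu/cpu2/online   # expected: 0
# cat /sys/devices/system/cpu/online        # expected: 0-1,3-47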

4 Check the cpu affinity with vcpuinfo and vcpupin again
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.7s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-1,3-47
 1      0-1,3-47

5 Check the cgroup cpuset.cpus
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/emulator/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus
0-1,3-47
0-1,3-47

6 Make the cpu back online
# echo 1 > /sys/devices/system/cpu/cpu2/online
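
At this point the kernel sees cpu2 again, but as step 10 below shows, the machine.slice cpuset does not get it back. A quick check of the divergence (a hedged aside; same paths as in the session above):

# cat /sys/devices/system/cpu/online                    # kernel view, expected: 0-47
# cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus   # cgroup view, expected: 0-1,3-47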

7 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            12
State:          running
CPU time:       19.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            44
State:          running
CPU time:       13.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

8 Destroy and start the guest
# virsh destroy vm1
Domain 'vm1' destroyed

# virsh start vm1
Domain 'vm1' started

9 Check the cpu affinity with vcpuinfo and vcpupin
# virsh vcpuinfo vm1
VCPU:           0
CPU:            35
State:          running
CPU time:       1.8s
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            9
State:          running
CPU Affinity:   yy-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-47
 1      0-47

10 Check the cgroup cpuset.cpus
# cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-47
# cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.cpus
0-1,3-47
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/vcpu*/cpuset.cpus
0-1,3-47
0-1,3-47

Actual results:
The CPU affinity reported by vcpuinfo differs from the one reported by vcpupin after the CPU comes back online.

Expected results:
vcpuinfo and vcpupin should report the same CPU affinity.

Additional info:
1 This issue cannot be reproduced on RHEL 9.2 or RHEL 9.3 (cgroup v2)
2 virsh emulatorpin shows the same behavior as vcpupin; see the sketch below
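
For reference, the matching emulator-thread checks would be the commands below (a sketch only; their output was not captured in this report, but per item 2 it is expected to mirror the vcpupin results):

# virsh emulatorpin vm1
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/emulator/cpuset.cpus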

Comment 1 Michal Privoznik 2023-06-01 14:09:18 UTC
(In reply to liang cong from comment #0)

> 3 Make one host cpu offline
> # echo 0 > /sys/devices/system/cpu/cpu2/online
> 

> 5 Check the cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47

> 6 Make the cpu back online
> # echo 1 > /sys/devices/system/cpu/cpu2/online

> 10 Check the cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47

[1]: this ^^^

> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47

Now, [1] is the source of the problem. Off-lining a CPU removes it from the cpuset controller (expected); bringing it back online does not add it back (okay). But since all machines are started under machine.slice (which is managed by systemd), and that slice's cpuset is already missing the formerly offlined CPU, all cpusets below it are missing it too (because of the hierarchical design of cgroups).

Now, there is a difference between affinity and pinning: the former sets the CPUs a process WANTS to run on, the latter sets the CPUs a process CAN run on. 'virsh vcpuinfo' displays the affinity and 'virsh vcpupin' displays the pinning. So maybe the real bug here is the misleading "CPU Affinity" string in the output of the 'vcpupin' command?
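
If this diagnosis is right, one manual workaround (untested here, so treat it as a hedged sketch) would be to widen the cpusets again after re-onlining, top-down, since a parent's cpuset.cpus must cover its children's:

# echo 0-47 > /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
# echo 0-47 > /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
# echo 0-47 > /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/cpuset.cpus

Whether systemd would later revert these values was not verified.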

Comment 2 Martin Kletzander 2023-06-02 10:23:30 UTC
I think the issue is that the two commands take their information from different places: one probably uses sched_getaffinity() and the other browses cgroups. The question is what we should report. I would even be fine if, after on-lining the CPU, the affinity got updated and the pinning did not, i.e. the opposite of the current behavior. But I guess we'll have to either pick one source or report the union of the two.
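
A hedged sketch of inspecting both sources for one vCPU thread (the scope path is taken from the session above; <TID> is a placeholder for a thread ID listed in the cgroup's tasks file):

# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu0/tasks
# grep Cpus_allowed_list /proc/<TID>/status   # scheduler affinity, the sched_getaffinity() view
# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu0/cpuset.cpus   # cgroup view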

Comment 3 liang cong 2023-06-09 07:42:45 UTC
There is another scenario:
libvirt version:
# rpm -q libvirt
libvirt-9.3.0-2.el9.x86_64

Test steps:
1. Start numad service
# systemctl start numad

2. Start a guest with the below config xml:
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
...
<vcpu placement='static'>4</vcpu>

3. Check the cpu affinity with vcpuinfo and vcpupin right after the guest is started. The two commands report the same affinity: all available physical CPUs.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            1
State:          running
CPU time:       5.0s
CPU Affinity:   0-5 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           2
CPU:            3
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           3
CPU:            0
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5


4. Wait around 3 minutes, then check the cpu affinity with vcpuinfo and vcpupin again. This time the two commands report different affinities.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            2
State:          running
CPU time:       24.6s
CPU Affinity:   0-2 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       9.9s
CPU Affinity:   0-2 (out of 6)

VCPU:           2
CPU:            0
State:          running
CPU time:       9.7s
CPU Affinity:   0-2 (out of 6)

VCPU:           3
CPU:            1
State:          running
CPU time:       9.4s
CPU Affinity:   0-2 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5

5. Check the cgroup info:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/emulator/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu0/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu1/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu2/cpuset.cpus

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu3/cpuset.cpus

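Note: the empty output above is expected on cgroup v2, where an empty cpuset.cpus means "no local restriction". The mask actually in effect is exposed in cpuset.cpus.effective, so that is the file worth comparing here (a hedged aside):

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu0/cpuset.cpus.effective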

Hi Martin, could you help clarify whether this scenario has the same root cause as this bug? If not, I will create another bug for it. Thanks.

Comment 4 Martin Kletzander 2023-07-27 15:36:43 UTC
(In reply to liang cong from comment #3)
> There is another scenario:

[...]

> Hi Martin, could you help clarify whether this scenario has the same root
> cause as this bug? If not, I will create another bug for it. Thanks.

Oh, that's right, that's another way to change the pinning without libvirt knowing. The fix will cover both cases anyway, so I think keeping it here is fine.

Comment 5 Martin Kletzander 2023-07-27 16:08:28 UTC
So actually, I found that on a newer kernel (specifically 6.4.5) this is no longer an issue. Since this has been an issue for a long time, I wonder whether we could just wait for the fix to land in the kernel. I'll try to figure this out; just FYI.

Comment 6 RHEL Program Management 2023-09-22 17:52:06 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 7 RHEL Program Management 2023-09-22 17:54:27 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.