Bug 2188878
| Summary: | Cpu affinity info from vcpuinfo and vcpupin are different after host cpu back online | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | liang cong <lcong> |
| Component: | libvirt | Assignee: | Martin Kletzander <mkletzan> |
| Status: | ASSIGNED --- | QA Contact: | Luyao Huang <lhuang> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.8 | CC: | jsuchane, lmen, mkletzan, mprivozn, virt-maint |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
liang cong
2023-04-23 03:10:07 UTC
(In reply to liang cong from comment #0)
> 3. Make one host cpu offline
> # echo 0 > /sys/devices/system/cpu/cpu2/online
>
> 5. Check the cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47
>
> 6. Make the cpu back online
> # echo 1 > /sys/devices/system/cpu/cpu2/online
>
> 9. Check cgroup cpuset.cpus
> # cat /sys/fs/cgroup/cpuset/cpuset.cpus
> 0-47
> # cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
> 0-1,3-47

[1]: this ^^^

> # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.cpus
> 0-1,3-47

Now, [1] is the source of the problem. Off-lining a CPU removes it from the cpuset controller (expected); bringing it back online does not add it back (okay). But since all machines are started under machine.slice (which is managed by systemd), and that slice's cpuset is already missing the formerly offlined CPU, all the cpusets below it are missing it too (because of the hierarchical design of cgroups).

Now, there is a difference between affinity and pinning. The former sets on which CPUs a process WANTS to run, the latter sets on which CPUs a process CAN run. 'virsh vcpuinfo' displays the affinity and 'virsh vcpupin' displays the pinning. So maybe the real bug here is the misleading "CPU Affinity" string in the output of the 'vcpupin' command?

I think the issue is that the two commands take their information from different places. One probably uses sched_getaffinity() and the other browses cgroups. The question is what we should report. I would even be fine with the opposite outcome, where after onlining the CPU the affinity got updated and the pinning did not. But I guess we'll have to either pick one or do a union.
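A minimal sketch of the two information sources discussed above: what sched_getaffinity() reports for a thread versus what a cpuset cgroup file contains, plus their union as one way to reconcile the two reports. The PID and the cgroup v1 path are hypothetical examples; the real paths depend on the machine scope name and on which cgroup version the host uses.

```python
#!/usr/bin/env python3
# Sketch: compare scheduler affinity with the cpuset cgroup for one thread.
import os

def parse_cpulist(text):
    """Parse a kernel CPU list such as '0-1,3-47' into a set of ints."""
    cpus = set()
    for part in text.strip().split(','):
        if not part:
            continue
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

pid = 12345  # hypothetical QEMU vCPU thread ID

# Source 1: scheduler affinity via sched_getaffinity()
affinity = os.sched_getaffinity(pid)

# Source 2: the cpuset cgroup file (hypothetical cgroup v1 path, adjust to the real scope)
cgroup_file = (r"/sys/fs/cgroup/cpuset/machine.slice"
               r"/machine-qemu\x2d1\x2dvm1.scope/cpuset.cpus")
with open(cgroup_file) as f:
    cgroup_cpus = parse_cpulist(f.read())

print("sched_getaffinity: ", sorted(affinity))
print("cgroup cpuset.cpus:", sorted(cgroup_cpus))
# One way to reconcile the two, as discussed above: report their union
print("union:             ", sorted(affinity | cgroup_cpus))
```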
There is another scenario:

libvirt version:
# rpm -q libvirt
libvirt-9.3.0-2.el9.x86_64

Test steps:
1. Start numad service
# systemctl start numad

2. Start a guest with below config xml:
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
...
<vcpu placement='static'>4</vcpu>

3. Check the cpu affinity by vcpuinfo and vcpupin right after the guest is started. The cpu affinity is the same in both, and it is all the available physical CPUs.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            1
State:          running
CPU time:       5.0s
CPU Affinity:   0-5 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           2
CPU:            3
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

VCPU:           3
CPU:            0
State:          running
CPU time:       0.1s
CPU Affinity:   0-5 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5

4. Then wait for around 3 minutes and check the cpu affinity by vcpuinfo and vcpupin again. Now the cpu affinity reported by vcpuinfo differs from vcpupin.
# virsh vcpuinfo vm1 --pretty
VCPU:           0
CPU:            2
State:          running
CPU time:       24.6s
CPU Affinity:   0-2 (out of 6)

VCPU:           1
CPU:            1
State:          running
CPU time:       9.9s
CPU Affinity:   0-2 (out of 6)

VCPU:           2
CPU:            0
State:          running
CPU time:       9.7s
CPU Affinity:   0-2 (out of 6)

VCPU:           3
CPU:            1
State:          running
CPU time:       9.4s
CPU Affinity:   0-2 (out of 6)

# virsh vcpupin vm1
 VCPU   CPU Affinity
----------------------
 0      0-5
 1      0-5
 2      0-5
 3      0-5

5. Check the cgroup info:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/cpuset.cpus
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/emulator/cpuset.cpus
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu0/cpuset.cpus
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu1/cpuset.cpus
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu2/cpuset.cpus
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d12\\x2dvm1.scope/libvirt/vcpu3/cpuset.cpus

Hi Martin, could you help to clarify whether this scenario has the same root cause as this bug? If not, I would create another bug for this scenario, thx.

(In reply to liang cong from comment #3)
> There is another scenario:
[...]
> Hi Martin, could you help to clarify whether this scenario has the same root
> cause as this bug? If not, I would create another bug for this scenario, thx.

Oh, that's right, that's another way to change the pinning without libvirt knowing. The fix will combine both anyway, so I think keeping it here is fine.

So actually, I found that on a newer kernel (particularly 6.4.5) this is no longer an issue. I wonder, since this is a long-standing issue, whether we could just wait for the fix in the kernel. I'll try to figure this out, just FYI.
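A minimal sketch of the kind of check done in step 5 above: walk the per-vCPU cgroup directories under the machine scope and print both the configured and the effective cpuset, which is where a placement change made outside of libvirt (for example by numad) becomes visible. The scope path is a hypothetical example assuming the cgroup v2 layout shown in the comment; an empty cpuset.cpus simply means the vCPU inherits the parent's set, while cpuset.cpus.effective shows what the kernel actually allows.

```python
#!/usr/bin/env python3
# Sketch: dump configured vs. effective cpusets for each vCPU cgroup of a guest.
import os

# Hypothetical machine scope path, following the cgroup v2 layout from the comment above
scope = r"/sys/fs/cgroup/machine.slice/machine-qemu\x2d12\x2dvm1.scope/libvirt"

def read(path):
    """Return the stripped contents of a single cgroup file."""
    with open(path) as f:
        return f.read().strip()

for name in sorted(os.listdir(scope)):
    if not name.startswith("vcpu"):
        continue
    vcpu_dir = os.path.join(scope, name)
    configured = read(os.path.join(vcpu_dir, "cpuset.cpus")) or "(empty: inherits parent)"
    effective = read(os.path.join(vcpu_dir, "cpuset.cpus.effective"))
    print(f"{name}: cpuset.cpus={configured} effective={effective}")
```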