+++ This bug was initially created as a clone of Bug #1789824 +++

Description of problem:
The cgroups cpu.shares limits applied to existing scopes (e.g. machine-qemu*, systemd-nspawn*) under /sys/fs/cgroup/cpu/machine.slice/ are reset to the default value (of 1024) when next creating a new scope via machinectl or libvirt after performing a systemctl daemon-reload. However, if you manually create a new scope under machine.slice and change the cpu.shares to a different value, but don't allocate a process to it that is registered with systemd-machined, then the value of cpu.shares is preserved.

This affects both (QEMU/KVM) VMs created via libvirt and nspawn containers created via machinectl.

Version-Release number of selected component (if applicable):
systemd-219-67.el7_7.2
libvirt-4.5.0-23.el7_7.3

How reproducible:
Consistently reproducible.

Steps to Reproduce:
1. virsh create test1.xml
2. systemctl daemon-reload
3. virsh create test2.xml
4. systemctl daemon-reload
5. virsh create test3.xml

(See attached libvirt XML definition)

Actual results:

After (1):
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d148\x2dtest1.scope/cpu.shares:2048

After (3):
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d148\x2dtest1.scope/cpu.shares:1024
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d150\x2dtest2.scope/cpu.shares:2048

After (5):
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d148\x2dtest1.scope/cpu.shares:1024
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d150\x2dtest2.scope/cpu.shares:1024
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d151\x2dtest3.scope/cpu.shares:2048

Expected results:
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d148\x2dtest1.scope/cpu.shares:2048
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d150\x2dtest2.scope/cpu.shares:2048
/sys/fs/cgroup/cpu/machine.slice/machine-qemu\x2d151\x2dtest3.scope/cpu.shares:2048

Additional info:
This seems similar in scope to the runc bug detailed in #1455071 and the systemd bug detailed in #1139223.

I've confirmed that the Delegate= directive is correctly set and applied to the existing machine scopes under machine.slice:

# cat /run/systemd/system/machine-qemu\\x2d148\\x2dtest1.scope.d/50-Delegate.conf
[Scope]
Delegate=yes

# systemctl show "machine-qemu\\x2d148\\x2dtest1.scope" | grep Delegate
Delegate=yes
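The attached domain XML is not reproduced here. As a rough equivalent (a sketch, not the actual attachment), the non-default shares can also be applied at runtime with virsh schedinfo, which makes the reset easy to observe without any special XML:

# give the first guest's scope a non-default value (cgroup v1)
virsh schedinfo test1 --set cpu_shares=2048
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares    # test1 scope shows 2048

# reload systemd, then register a second machine
systemctl daemon-reload
virsh create test2.xml

# the pre-existing test1 scope has fallen back to the default
grep . /sys/fs/cgroup/cpu/machine.slice/*/cpu.shares    # test1 scope now shows 1024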
*** Bug 1906323 has been marked as a duplicate of this bug. ***
Fixed in upstream:

6a1f5e8a4f vircgroup: correctly free nested virCgroupPtr
85099c3393 tests: add cgroup nested tests
184245f53b vircgroup: introduce nested cgroup to properly work with systemd
badc2bcc73 vircgroup: introduce virCgroupV1Exists and virCgroupV2Exists
382fa15cde vircgroupv2: move task into cgroup before enabling controllers
5f56dd7c83 vircgroupv1: refactor virCgroupV1DetectPlacement
9c1693eff4 vircgroup: use DBus call to systemd for some APIs
d3fb774b1e virsystemd: introduce virSystemdGetMachineUnitByPID
385704d5a4 virsystemd: introduce virSystemdGetMachineByPID
a51147d906 virsystemd: export virSystemdHasMachined
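Judging by the commit subjects, the series keeps libvirt's settings in a nested cgroup under the scope and routes some of them through systemd's DBus API, so they are no longer silently overwritten. A quick manual spot-check (a sketch; the scope name is the one from the original report) is to compare systemd's recorded property with the sysfs value across a reload:

# systemd's view of the scope's CPU shares (this is what a daemon-reload re-applies)
systemctl show "machine-qemu\\x2d148\\x2dtest1.scope" -p CPUShares

# the value currently programmed in the cgroup filesystem
cat /sys/fs/cgroup/cpu/machine.slice/machine-qemu\\x2d148\\x2dtest1.scope/cpu.shares

# with the fix, the value written by libvirt should survive the reload
systemctl daemon-reload
cat /sys/fs/cgroup/cpu/machine.slice/machine-qemu\\x2d148\\x2dtest1.scope/cpu.shares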
Reproduced on libvirt-7.0.0-3.module+el8.4.0+9709+a99efd61.x86_64, qa_ack+

[root@dell-per730-59 libvirt-ci]# virsh schedinfo avocado-vt-vm1 --set cpu_shares=2048
Scheduler      : posix
cpu_shares     : 2048
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1
global_period  : 100000
global_quota   : -1
iothread_period: 100000
iothread_quota : -1

[root@dell-per730-59 libvirt-ci]# virsh schedinfo avocado-vt-vm1
Scheduler      : posix
cpu_shares     : 2048
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1
global_period  : 100000
global_quota   : -1
iothread_period: 100000
iothread_quota : -1

[root@dell-per730-59 libvirt-ci]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2davocado\\x2dvt\\x2dvm1.scope/cpu.shares
2048

[root@dell-per730-59 libvirt-ci]# systemctl daemon-reload

[root@dell-per730-59 libvirt-ci]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2davocado\\x2dvt\\x2dvm1.scope/cpu.shares
1024

[root@dell-per730-59 libvirt-ci]# virsh schedinfo avocado-vt-vm1
Scheduler      : posix
cpu_shares     : 1024
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1
global_period  : 100000
global_quota   : -1
iothread_period: 100000
iothread_quota : -1
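The manual check above can be wrapped in a small before/after script so the regression becomes a one-command check (a sketch; the scope path is copied from the transcript above and assumes the cgroup v1 layout of this host):

#!/bin/sh
# Snapshot the scope's cpu.shares, reload systemd, and report whether it changed.
# Adjust the machine id/name in the path for your host.
path='/sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu\x2d3\x2davocado\x2dvt\x2dvm1.scope/cpu.shares'

before=$(cat "$path")
systemctl daemon-reload
after=$(cat "$path")

echo "cpu.shares: before=$before after=$after"
[ "$before" = "$after" ] && echo "preserved across daemon-reload" || echo "reset by daemon-reload"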
Tested with scratch build: libvirt-6.0.0-35.el8_rc.ff27127dba.x86_64

Bug Test:

cgroup v1 test:

[root@dell-per730-62 files]# virsh blkiotune vm1 --weight 999
[root@dell-per730-62 files]# virsh blkiotune vm1
weight         : 999
device_weight  :
device_read_iops_sec:
device_write_iops_sec:
device_read_bytes_sec:
device_write_bytes_sec:

[root@dell-per730-62 files]# cat /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/blkio.bfq.weight
999

[root@dell-per730-62 files]# virsh memtune vm1 --hard-limit 8888888
[root@dell-per730-62 files]# virsh memtune vm1
hard_limit     : 8888888
soft_limit     : unlimited
swap_hard_limit: unlimited

[root@dell-per730-62 files]# cat /sys/fs/cgroup/memory/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/memory.limit_in_bytes
9102221312   (= 1024 * 8888888)

[root@dell-per730-62 files]# virsh schedinfo vm1 --set cpu_shares=2222 global_period=3333 global_quota=1111
Scheduler      : posix
cpu_shares     : 2222
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1
global_period  : 3333
global_quota   : 1111
iothread_period: 3333
iothread_quota : 1111

[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/cpu.shares
2222
[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/cpu.cfs_period_us
3333
[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/cpu.cfs_quota_us
1111

[root@dell-per730-62 files]# systemctl daemon-reload

[root@dell-per730-62 files]# cat /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/blkio.bfq.weight
999
[root@dell-per730-62 files]# cat /sys/fs/cgroup/memory/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/memory.limit_in_bytes
9102221312
[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/cpu.shares
2222
[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/cpu.cfs_period_us
3333
[root@dell-per730-62 files]# cat /sys/fs/cgroup/cpu\,cpuacct/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/cpu.cfs_quota_us
1111

[root@dell-per730-62 files]# virsh blkiotune vm1
weight         : 999
device_weight  :
device_read_iops_sec:
device_write_iops_sec:
device_read_bytes_sec:
device_write_bytes_sec:

[root@dell-per730-62 files]# virsh memtune vm1
hard_limit     : 8888888
soft_limit     : unlimited
swap_hard_limit: unlimited

[root@dell-per730-62 files]# virsh schedinfo vm1
Scheduler      : posix
cpu_shares     : 2222
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1
global_period  : 3333
global_quota   : 1111
iothread_period: 3333
iothread_quota : 1111

Regression Automation Test:

(.libvirt-ci-venv-ci-runtest-Wfwcan) [root@dell-per730-62 files]# avocado run --vt-type libvirt guest_resource_control
JOB ID     : 26058a853468b9054fb345e58d9ca145268700df
JOB LOG    : /root/avocado/job-results/job-2021-02-25T04.26-26058a8/job.log
 (001/168) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.blkiotune.weight.positive: PASS (5.09 s)
 (002/168) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.blkiotune.weight.negative.value_over_limit: PASS (11.17 s)
...
 (166/168) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_shutdown.config.schedinfo.iothread_period.negative.value_not_number: PASS (4.02 s)
 (167/168) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_shutdown.config.schedinfo.iothread_quota.positive: PASS (26.73 s)
 (168/168) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_shutdown.config.schedinfo.iothread_quota.negative.value_not_number: PASS (4.11 s)
RESULTS    : PASS 168 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 1979.43 s

CGroup V2 test:

[root@dell-per730-62 ~]# virsh blkiotune vm1 --weight 999
[root@dell-per730-62 ~]# virsh blkiotune vm1
weight         : 100
device_weight  :
device_read_iops_sec:
device_write_iops_sec:
device_read_bytes_sec:
device_write_bytes_sec:
<== hit bz1927290

[root@dell-per730-62 ~]# virsh memtune vm1 --hard-limit 8888888
[root@dell-per730-62 ~]# virsh memtune vm1
hard_limit     : 8888888
soft_limit     : unlimited
swap_hard_limit: unlimited

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/memory.max
9102221312

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# virsh schedinfo vm1 --set cpu_shares=2222 global_period=3333 global_quota=1111
Scheduler      : posix
cpu_shares     : 2222
vcpu_period    : 100000
vcpu_quota     : 17592186044415
emulator_period: 100000
emulator_quota : 17592186044415
global_period  : 3333
global_quota   : 1111
iothread_period: 3333
iothread_quota : 1111

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpu.weight
2222
[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/cpu.max
1111 3333

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# systemctl daemon-reload

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# virsh memtune vm1
hard_limit     : 8888888
soft_limit     : unlimited
swap_hard_limit: unlimited

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/memory.max
9102221312

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# virsh schedinfo vm1
Scheduler      : posix
cpu_shares     : 2222
vcpu_period    : 100000
vcpu_quota     : 17592186044415
emulator_period: 100000
emulator_quota : 17592186044415
global_period  : 3333
global_quota   : 1111
iothread_period: 3333
iothread_quota : 1111

[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/cpu.weight
2222
[root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/cpu.max
1111 3333

Automation regression test:

(.libvirt-ci-venv-ci-runtest-Wfwcan) [root@dell-per730-62 machine-qemu\x2d1\x2dvm1.scope]# avocado run --vt-type libvirt guest_resource_control..vm_running..hot
JOB ID     : 2a3d0ae037730e6957f4a4c60c073b1c6ac5f169
JOB LOG    : /root/avocado/job-results/job-2021-02-25T05.47-2a3d0ae/job.log
 (01/42) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.blkiotune.weight.positive: FAIL: blkiotune checking failed. (5.73 s)
...
 (17/42) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.blkiotune.all_in_one.positive: FAIL: blkiotune checking failed. (11.96 s)
 (41/42) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.schedinfo.iothread_quota.positive: PASS (11.05 s)
 (42/42) type_specific.io-github-autotest-libvirt.guest_resource_control.control_cgroup.vm_running.hot.schedinfo.iothread_quota.negative.value_not_number: PASS (11.14 s)
RESULTS    : PASS 40 | ERROR 0 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 464.35 s

<== Failed cases hit bz1927290
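For the cgroup v2 run, the same kind of before/after spot-check can be done against the unified hierarchy (a sketch; the scope path is taken from the transcript above, and on v2 the relevant knobs are cpu.weight, cpu.max and memory.max rather than cpu.shares, cpu.cfs_* and memory.limit_in_bytes):

scope='/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2dvm1.scope'

# values set through virsh, as shown above
cat "$scope/cpu.weight" "$scope/libvirt/cpu.max" "$scope/libvirt/memory.max"

systemctl daemon-reload

# with the fix these should be unchanged after the reload
cat "$scope/cpu.weight" "$scope/libvirt/cpu.max" "$scope/libvirt/memory.max"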
I also have the same questions as https://bugzilla.redhat.com/show_bug.cgi?id=1798464#c10, but those are only minor issues, so I am setting the "verified" field to "tested". For this function we can say that validation was successful and no regressions were observed.
Verified on libvirt-6.0.0-35.module+el8.4.0+10230+7a9b21e4.x86_64 with the same test steps and automated regression tests as in comment 9.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:1762