Bug 1758679 - OCP workloads result in kubepods-besteffort-podf***.slice units being left in a dead state
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Version: 7.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: systemd-maint
QA Contact: Frantisek Sumsal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-04 20:03 UTC by Kyle Walker
Modified: 2020-11-11 21:50 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-11 21:50:29 UTC
Target Upstream Version:
Embargoed:



Description Kyle Walker 2019-10-04 20:03:41 UTC
Description of problem:
 Similar to rhbz#1718953, another source of "dead" slices is errors encountered in the following code path:

src/core/cgroup.c
<snip>
void unit_destroy_cgroup_if_empty(Unit *u) {
        int r;

        assert(u);

        if (!u->cgroup_path)
                return;

        r = cg_trim_everywhere(u->manager->cgroup_supported, u->cgroup_path, !unit_has_name(u, SPECIAL_ROOT_SLICE));
        if (r < 0) {
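                /* Bug context: this early return skips the hashmap_remove()
                 * and state resets below, so a slice whose cgroup is already
                 * gone (or cannot be trimmed) stays loaded as "inactive dead". */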
                log_debug_errno(r, "Failed to destroy cgroup %s: %m", u->cgroup_path);
                return;
        }

        hashmap_remove(u->manager->cgroup_unit, u->cgroup_path);

        free(u->cgroup_path);
        u->cgroup_path = NULL;
        u->cgroup_realized = false;
        u->cgroup_realized_mask = 0;
}
<snip>


# journalctl  -b | grep "pod115339f0_e6de_11e9_b8a3_fa163e6b8b06"
Oct 04 15:35:39 <hostname> systemd[1]: Failed to destroy cgroup /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/docker-d90daa71b39c3f4e26e7550bcdaa6e3eef932c42714a284c5817225976d1fc0e.scope, ignoring: No such file or directory
Oct 04 15:35:39 <hostname> systemd[1]: Failed to destroy cgroup /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice, ignoring: No such file or directory


Version-Release number of selected component (if applicable):
 systemd-219-67.el7_7.1.x86_64


How reproducible:
 Difficult - though there may be a simpler way to reproduce it that has not yet been uncovered

Steps to Reproduce:
1. Stand up a functioning OCP 3.11 cluster
2. Execute the following job:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: test
    spec:
      schedule: "*/1 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: hello
                  image: bash
                  command: ["echo",  "Hello world"]
              restartPolicy: OnFailure

3. On the node on which the above job executes, monitor for "dead" slices:
    # systemctl list-units --type slice --all kubepod*

Actual results:
    # systemctl list-units --all kubepods* | grep dead
      kubepods-burstable-pod2f3868dd_b853_11e9_9946_001a4aa86608.slice                                                                                                                                                                      loaded    inactive dead      kubepods-burstable-pod2f3868dd_b853_11e9_9946_001a4aa86608.slice
      kubepods-burstable-pod376775d4_b2d8_11e9_9946_001a4aa86608.slice                                                                                                                                                                      loaded    inactive dead      libcontainer container kubepods-burstable-pod376775d4_b2d8_11e9_9946_001a4aa86608.slice
      kubepods-burstable-pod649a894a_b793_11e9_9946_001a4aa86608.slice                                                                                                                                                                      loaded    inactive dead      kubepods-burstable-pod649a894a_b793_11e9_9946_001a4aa86608.slice
      kubepods-burstable-podcb0c5f7a_b2f6_11e9_9946_001a4aa86608.slice                                                                                                                                                                      loaded    inactive dead      kubepods-burstable-podcb0c5f7a_b2f6_11e9_9946_001a4aa86608.slice
      kubepods-burstable-pode90a450e_b33c_11e9_9946_001a4aa86608.slice                                                                                                                                                                      loaded    inactive dead      libcontainer container kubepods-burstable-pode90a450e_b33c_11e9_9946_001a4aa86608.slice

Expected results:
    # systemctl list-units --all kubepods* | grep dead

Additional info:
 The problem is no longer observed after including the following upstream commit:

https://github.com/systemd/systemd/commit/0219b3524f414e23589e63c6de6a759811ef8474
<snip>
cgroup: Continue unit reset if cgroup is busy

When part of the cgroup hierarchy cannot be deleted (e.g. because there
are still processes in it), do not exit unit_prune_cgroup early, but
continue so that u->cgroup_realized is reset.

Log the known case of non-empty cgroups at debug level and other errors
at warning level.

Fixes #12386
<snip>
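
For reference, the shape of that change, applied to the v219 function quoted above, is roughly the following (a sketch based on the commit message, not the literal upstream or backported diff):

<snip>
void unit_destroy_cgroup_if_empty(Unit *u) {
        int r;

        assert(u);

        if (!u->cgroup_path)
                return;

        r = cg_trim_everywhere(u->manager->cgroup_supported, u->cgroup_path, !unit_has_name(u, SPECIAL_ROOT_SLICE));
        if (r < 0)
                /* Do not return early: log the failure (an expected non-empty
                 * cgroup, -EBUSY, at debug level, anything else at warning
                 * level) and continue, so the bookkeeping below is always
                 * cleared and the unit does not linger as a dead slice. */
                log_full_errno(r == -EBUSY ? LOG_DEBUG : LOG_WARNING, r,
                               "Failed to destroy cgroup %s, ignoring: %m", u->cgroup_path);

        hashmap_remove(u->manager->cgroup_unit, u->cgroup_path);

        free(u->cgroup_path);
        u->cgroup_path = NULL;
        u->cgroup_realized = false;
        u->cgroup_realized_mask = 0;
}
<snip>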

Comment 2 Kyle Walker 2019-10-04 20:13:48 UTC
Using an eBPF script that I put together for this investigation to monitor rmdir syscalls, we can see where the ENOENT errors are coming from:

# journalctl  -b | grep "pod115339f0_e6de_11e9_b8a3_fa163e6b8b06"
Oct 04 15:35:39 <snip> systemd[1]: Failed to destroy cgroup /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/docker-d90daa71b39c3f4e26e7550bcdaa6e3eef932c42714a284c5817225976d1fc0e.scope, ignoring: No such file or directory
Oct 04 15:35:39 <snip> systemd[1]: Failed to destroy cgroup /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice, ignoring: No such file or directory


And the associated trace events:

PID    COMM               FD ERR PATH
<snip>
2010   hyperkube          -1   2 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice
2010   hyperkube          -1  16 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice
2010   hyperkube           0   0 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/docker-d90daa71b39c3f4e26e7550bcdaa6e3eef932c42714a284c5817225976d1fc0e.scope
2010   hyperkube          -1  20 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/cgroup.clone_children
2010   hyperkube          -1  20 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/cgroup.event_control
2010   hyperkube          -1  20 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/notify_on_release
2010   hyperkube          -1  20 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/cgroup.procs
2010   hyperkube          -1  20 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/tasks
2010   hyperkube           0   0 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice
1      systemd            -1   2 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice/docker-d90daa71b39c3f4e26e7550bcdaa6e3eef932c42714a284c5817225976d1fc0e.scope
1      systemd            -1   2 /sys/fs/cgroup/systemd/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115339f0_e6de_11e9_b8a3_fa163e6b8b06.slice
<snip>

Both the hyperkube process and systemd attempt to tear down these cgroups. When hyperkube gets to the removal first, systemd receives ENOENT when it attempts the same. I believe this is incorrect behaviour at face value, since it violates the single-writer rule. However, testing on internal reproducer systems no longer shows this failure after including the patch indicated in the description.

Comment 4 Kyle Walker 2020-02-15 20:42:15 UTC
In my own testing, I wasn't able to encounter the same buildup of dead slices after backporting the indicated patch, but I have not yet received confirmation that the same is true in the originating environment.

In the event that anyone else is encountering the same behaviour and can deploy a test build with that patch, please update this bug report with that information. This behaviour is rather difficult to test for, and the buildup of slices is a fairly severe problem in OCP workloads, so further reports of the problem, and of whether this patch resolves it, would be very helpful.

Comment 6 Chris Williams 2020-11-11 21:50:29 UTC
Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7.
From initial triage it does not appear that the remaining Bugzillas meet the inclusion criteria for the Maintenance Support 2 Phase, so they will now be closed.

From the RHEL life cycle page:
https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase
"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7,Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."

If this BZ was closed in error and meets the above criteria, please re-open it, flag it for 7.9.z, provide suitable business and technical justifications, and follow the process for Accelerated Fixes:
https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook  

Feature Requests can be re-opened and moved to RHEL 8 if the desired functionality is not already present in the product.

Please reach out to the applicable Product Experience Engineer[0] if you have any questions or concerns.  

[0] https://bugzilla.redhat.com/page.cgi?id=agile_component_mapping.html&product=Red+Hat+Enterprise+Linux+7

