Bug 1991528
| Summary: | podman pod rm --force fails with "device or resource busy" when --cgroup-manager is set to cgroupfs | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Joy Pu <ypu> |
| Component: | podman | Assignee: | Matthew Heon <mheon> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 9.0 | CC: | bbaude, dwalsh, jligon, jnovy, lsm5, mheon, pthomas, tsweeney, umohnani |
| Target Milestone: | beta | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-07-10 19:28:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Description
Joy Pu
2021-08-09 11:10:26 UTC
I'll take this one. I was just digging around in the code around this.

We have code that prevents this on cgroups v1 systems. The issue we encountered before, and which I'm almost certain is happening here, is that the cleanup process launches into and occupies the conmon cgroup, preventing its deletion; the solution was to set a PID limit on that cgroup before stopping the pod's containers, so the cleanup process could not be launched. I presume cgroupfs has changed sufficiently from v1 to v2 that this code does not work on RHEL 9 (and likely will not work on RHEL 8 with cgroups v2 and cgroupfs either).

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

@mheon any progress on this one?

reping @mheon

The code in question appears to have been entirely removed while I was not working on this bug (it was replaced as part of the effort to add resource limits to pods), so I think we can call this done. I wish I could say this was intentional and I was waiting for the code to be refactored out of existence, but this bug simply fell lower in priority than others long enough that the code changed around it. Going to CLOSED CURRENTRELEASE given the complete removal of the affected codepaths.
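The v1-era mitigation described in the first comment above is straightforward to illustrate. Below is a minimal Go sketch, assuming a cgroups v1 layout with the pids controller mounted at /sys/fs/cgroup/pids under the cgroupfs manager; the cgroup path, helper name, and limit value are illustrative assumptions, not podman's actual code.

```go
// Minimal sketch (not podman's real implementation): cap pids.max on the
// pod's conmon cgroup before stopping its containers, so conmon cannot fork
// a cleanup process into the cgroup and keep it busy. A busy cgroup makes
// the later rmdir fail with EBUSY ("device or resource busy").
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// limitConmonPids is a hypothetical helper. It assumes the conmon cgroup
// lives at a v1-style path such as "libpod_parent/<pod-id>/conmon" under
// the pids controller mount.
func limitConmonPids(conmonCgroup string, max int) error {
	pidsMax := filepath.Join("/sys/fs/cgroup/pids", conmonCgroup, "pids.max")
	// Once the cgroup holds pids.max tasks, further fork/clone calls inside
	// it fail, so no new cleanup process can be launched there.
	return os.WriteFile(pidsMax, []byte(fmt.Sprintf("%d\n", max)), 0o644)
}

func main() {
	// Example: allow only the process already in the cgroup (assumed to be
	// a single conmon), then proceed to stop the pod's containers.
	if err := limitConmonPids("libpod_parent/example-pod/conmon", 1); err != nil {
		fmt.Fprintln(os.Stderr, "setting pids.max:", err)
	}
}
```

On cgroups v2 the controllers live in a single unified hierarchy (pids.max sits directly under the cgroup's own directory rather than under a separate pids mount), which is one plausible reason path handling written for v1 stopped working on RHEL 9.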