Bug 2048962 - OS shutdown fails when multiple users run rootless podman
Summary: OS shutdown fails when multiple users run rootless podman
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: systemd
Version: CentOS Stream
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Michal Sekletar
QA Contact: Frantisek Sumsal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-01 09:17 UTC by Frank Büttner
Modified: 2023-08-01 07:28 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-01 07:28:07 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Screenshot of the hang. (30.38 KB, image/png), 2022-02-01 09:17 UTC, Frank Büttner


Links
Red Hat Issue Tracker RHELPLAN-110499 (Last Updated: 2022-02-01 09:21:04 UTC)

Description Frank Büttner 2022-02-01 09:17:12 UTC
Created attachment 1858278
Screenshot of the hang.

Description of problem:
When multiple rootless podman instances run under different users, shutdown fails.

Version-Release number of selected component (if applicable):
systemd-239-56.el8.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Boot the server.
2. Wait until the podman instances are up (a setup sketch follows these steps).
3. Call reboot.
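
A minimal setup sketch for step 2. This is an assumption: the report does not state how the instances are started; the sketch assumes each rootless user runs a container as a systemd user service with lingering enabled, so the instances come up at boot. "mycontainer" is a hypothetical container name.

    # As root: let the user managers for UIDs 991/992 start at boot
    # (otherwise their user services only run while they are logged in).
    loginctl enable-linger 991
    loginctl enable-linger 992

    # As each rootless user: generate and enable a user service
    # for an existing container named "mycontainer" (hypothetical).
    mkdir -p ~/.config/systemd/user
    podman generate systemd --new --name mycontainer \
      > ~/.config/systemd/user/mycontainer.service
    systemctl --user daemon-reload
    systemctl --user enable --now mycontainer.service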

Actual results:
The server hangs at shutdown.


Expected results:
Shutdown/reboot completes successfully.

Additional info:
With only one rootless podman instance, shutdown works.
In this case, UIDs 991 and 992 are the rootless podman users.
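
A hedged diagnostic sketch, not taken from the report: on systemd a hang like this typically shows up as stop jobs waiting on the user@<UID>.service units. The following stock commands can confirm which jobs block the shutdown, provided a console is still responsive while it hangs:

    # List queued jobs to see which units the shutdown is waiting on.
    systemctl list-jobs

    # Inspect the user managers for the two rootless users from this report.
    loginctl user-status 991
    loginctl user-status 992

    # Show the stop timeout in effect for one of the user managers.
    systemctl show user@991.service -p TimeoutStopUSec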

Comment 2 RHEL Program Management 2023-08-01 07:28:07 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

