This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2185720 - Podman system service doesn't work in container
Summary: Podman system service doesn't work in container
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: ubi9-container
Version: 9.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-11 05:26 UTC by raniz
Modified: 2024-02-06 23:20 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-11 20:09:55 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github containers podman issues 15498 0 None closed Error: mkdir /sys/fs/cgroup/init: permission denied 2023-04-11 05:26:17 UTC
Github containers podman pull 15503/commits/e448122feff952f574acdd0f9dea3c1d2edcaccb 0 None None None 2023-04-11 05:26:17 UTC
Red Hat Issue Tracker   RHEL-3132 0 None Migrated None 2024-02-06 21:17:59 UTC
Red Hat Issue Tracker RHELPLAN-154324 0 None None None 2023-04-11 05:27:25 UTC

Description raniz 2023-04-11 05:26:17 UTC
Description of problem:

When running `podman system service`, Podman fails with the following error:


Error: mkdir /sys/fs/cgroup/init: read-only file system



Version-Release number of selected component (if applicable):

ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365
podman-4.2.0-7.el9_1.x86_64


How reproducible:
Always

Steps to Reproduce:

1. podman run --rm registry.access.redhat.com/ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365 podman system service


Actual results:


time="2023-04-11T05:17:01Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
time="2023-04-11T05:17:01Z" level=warning msg="Using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids if not using a network user"
Error: mkdir /sys/fs/cgroup/init: read-only file system



Expected results:

Podman should start as a service and listen for incoming connections


Additional info:
This has been fixed upstream and released in Podman 4.3.0.
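
A quick way to check whether the Podman packaged in a given ubi9/podman tag already carries the fix (4.3.0 or later) is to query the RPM version before starting the service; a minimal check, using the same tag as above:

podman run --rm registry.access.redhat.com/ubi9/podman:9.1.0-5 rpm -q podman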

Comment 1 Alex Jia 2023-04-11 10:49:47 UTC
It also doesn't work for podman-4.4.1-8.el9 on RHEL 9.3.

[root@kvm-01-guest11 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 9.3 Beta (Plow)

[root@kvm-01-guest11 ~]# rpm -q podman crun kernel
podman-4.4.1-8.el9.x86_64
crun-1.8.3-2.el9.x86_64
kernel-5.14.0-295.el9.x86_64

[root@kvm-01-guest11 ~]# podman run --rm registry.access.redhat.com/ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365 podman system service
Trying to pull registry.access.redhat.com/ubi9/podman@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365...
Getting image source signatures
Checking if image destination supports signatures
Copying blob ef82b56a7d83 done  
Copying blob d74e20a2726b done  
Copying config d6fb9bd0b4 done  
Writing manifest to image destination
Storing signatures
time="2023-04-11T10:43:10Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: mkdir /sys/fs/cgroup/init: read-only file system

A workaround is to append the --privileged option to podman run, as shown below:

[root@kvm-01-guest11 ~]# podman run --privileged registry.access.redhat.com/ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365 podman system service -t 0 &
[1] 88852
[root@kvm-01-guest11 ~]# podman ps --no-trunc
CONTAINER ID                                                      IMAGE                                                                                                           COMMAND                     CREATED        STATUS        PORTS       NAMES
75a2bb3189620e9b5c0dd5d061ec0d68a1b364be846ca756960b7e3bcbb6d7f2  registry.access.redhat.com/ubi9/podman@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365  podman system service -t 0  3 minutes ago  Up 3 minutes              magical_ellis
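
Once the privileged container is up, the API socket can be probed to confirm the service is actually answering; a minimal sketch, assuming curl is available in the image and the default root socket path /run/podman/podman.sock:

podman exec magical_ellis curl -s --unix-socket /run/podman/podman.sock http://localhost/_ping

This should print "OK" if the Docker-compatible endpoint is responding.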

Comment 2 raniz 2023-04-11 11:10:36 UTC
That's rather strange, as I have it working on UBI 9.1 by installing podman-4.4.0-1.el9.x86_64.rpm from a 9.2 beta system.

[build@aea5c6c3df67 ~]$ rpm -q podman
podman-4.4.0-1.el9.x86_64
[build@aea5c6c3df67 ~]$ podman --log-level info system service
INFO[0000] podman filtering at log level info
INFO[0000] Setting parallel job count to 49
INFO[0000] podman filtering at log level info
INFO[0000] Setting parallel job count to 49
INFO[0000] API service listening on "/tmp/podman-run-1000/podman/podman.sock". URI: "unix:/tmp/podman-run-1000/podman/podman.sock"
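
For anyone who wants this interim workaround baked into the image build itself, a minimal Containerfile sketch along these lines should work (assuming podman-4.4.0-1.el9.x86_64.rpm has been obtained separately and dnf can resolve any remaining dependencies from the configured repos):

FROM registry.access.redhat.com/ubi9/podman:9.1.0-5
COPY podman-4.4.0-1.el9.x86_64.rpm /tmp/
RUN dnf -y upgrade /tmp/podman-4.4.0-1.el9.x86_64.rpm && \
    dnf clean all && \
    rm -f /tmp/podman-4.4.0-1.el9.x86_64.rpm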

Comment 3 raniz 2023-04-11 15:11:55 UTC
@ajia I read your comment more thoroughly now. The problem is Podman in the UBI image, not Podman on the host - in the production system our image is running under CRI-O, not Podman.

Comment 4 Alex Jia 2023-04-12 13:46:28 UTC
(In reply to raniz from comment #3)
> @ajia I read your comment more thoroughly now. The problem is
> Podman in the UBI image, not Podman on the host - in the production system
> our image is running under CRI-O, not Podman.

It works in root mode when running tests in a nested podman container with the --privileged option.

1. root

a. w/ --privileged option
[root@kvm-01-guest21 ~]# podman run --privileged -it registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13
Trying to pull registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13...
Getting image source signatures
Copying blob 1ac0cc55d713 done  
Copying blob fdf83fdf5c32 done  
Copying config 8336e53d79 done  
Writing manifest to image destination
Storing signatures
[root@81e629389117 /]# rpm -q podman crun
podman-4.2.0-10.el9_1.x86_64
crun-1.5-1.el9.x86_64
[root@723e69b03590 /]# podman --log-level info system service -t 0 &
[1] 11
[root@723e69b03590 /]# INFO[0000] podman filtering at log level info           
INFO[0000] Setting parallel job count to 7              
INFO[0000] API service listening on "/run/podman/podman.sock". URI: "unix:///run/podman/podman.sock" 
INFO[0000] API service listening on "/run/podman/podman.sock" 

b. w/o --privileged option
[root@kvm-01-guest21 ~]# podman run -it registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13
[root@81e629389117 /]# podman --log-level info system service
INFO[0000] podman filtering at log level info           
WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers 
INFO[0000] podman filtering at log level info           
INFO[0000] Setting parallel job count to 7              
Error: mkdir /sys/fs/cgroup/init: read-only file system

2. rootless

[root@kvm-01-guest21 ~]# podman run --privileged -it registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13
[root@9f2454dac717 /]# su test
[test@9f2454dac717 /]$ id
uid=1001(test) gid=1001(test) groups=1001(test)
[test@9f2454dac717 /]$ podman unshare cat /proc/self/uid_map 
         0       1001          1
         1     100000      65536
[test@9f2454dac717 /]$ podman --log-level info system service
INFO[0000] podman filtering at log level info           
INFO[0000] Setting parallel job count to 7              
INFO[0000] podman filtering at log level info           
INFO[0000] Setting parallel job count to 7              
Error: mkdir /sys/fs/cgroup/init: permission denied
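
The "read-only file system" and "permission denied" failures above both come down to how /sys/fs/cgroup is presented to the outer container; a quick diagnostic sketch, run against the same image, is to inspect the cgroup mount:

podman run --rm registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13 sh -c 'grep cgroup /proc/self/mountinfo; ls -ld /sys/fs/cgroup'

Without --privileged the cgroup mount is typically listed with the "ro" option, which matches the mkdir error; with --privileged it is mounted read-write.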

Comment 5 raniz 2023-04-12 15:25:18 UTC
It does, but using privileged containers is not an option for us.

Comment 6 Tom Sweeney 2023-04-12 19:43:58 UTC
@mheon thoughts on this one?

Comment 7 Matthew Heon 2023-04-12 20:06:17 UTC
The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with Podman 4.4. At some point, that content will be added to UBI9 (I do not know the lifecycle of UBI, so I cannot say when, that is a question for someone else). I don't really have any thoughts other than this.

Is the request here for a 9.1 backport of the specific patch, and then subsequently getting a Podman build with that backport into UBI9? If so, the first questions would have to go to the maintainers of UBI, because their policy on accepting updates outside of major RHEL releases will control that.

Comment 8 raniz 2023-04-13 05:06:13 UTC
@mheon For me this is a request for a backport, since it seems to be fixed in 9.2 but I have no idea when that will land - and until it does we need to maintain a workaround in our image (which is to keep the RPM from the 9.2 Beta up to date and installed in the image).

Comment 9 Josh Boyer 2023-04-13 11:43:30 UTC
(In reply to Matthew Heon from comment #7)
> The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with
> Podman 4.4. At some point, that content will be added to UBI9 (I do not know
> the lifecycle of UBI, so I cannot say when, that is a question for someone
> else). I don't really have any thoughts other than this.

UBI9 inherits RHEL rpms from the latest minor release.

(In reply to raniz from comment #8)
> @mheon For me this is a request of a backport since it seems to
> be fixed with 9.2 but I have no idea when that will hit - and until it does
> we need to maintain a workaround in our image (which is to keep the RPM from
> 9.2 Beta up to date and installed in the image).

Do you have a Customer Portal support case opened for this?

Comment 11 raniz 2023-04-14 06:26:17 UTC
(In reply to Josh Boyer from comment #9)
> (In reply to Matthew Heon from comment #7)
> > The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with
> > Podman 4.4. At some point, that content will be added to UBI9 (I do not know
> > the lifecycle of UBI, so I cannot say when, that is a question for someone
> > else). I don't really have any thoughts other than this.
> 
> UBI9 inherits RHEL rpms from the latest minor release.

So I guess that means this will resolve itself when RHEL 9.2 hits. Any estimate on when that is?
 
> (In reply to raniz from comment #8)
> > @mheon For me this is a request of a backport since it seems to
> > be fixed with 9.2 but I have no idea when that will hit - and until it does
> > we need to maintain a workaround in our image (which is to keep the RPM from
> > 9.2 Beta up to date and installed in the image).
> 
> Do you have a Customer Portal support case opened for this?

I do not. I'm a consultant and don't have access to my customer's subscription. Perhaps I should ask my customer to open one?

Comment 12 Josh Boyer 2023-04-14 10:37:44 UTC
(In reply to raniz from comment #11)
> @(In reply to Josh Boyer from comment #9)
> > (In reply to Matthew Heon from comment #7)
> > > The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with
> > > Podman 4.4. At some point, that content will be added to UBI9 (I do not know
> > > the lifecycle of UBI, so I cannot say when, that is a question for someone
> > > else). I don't really have any thoughts other than this.
> > 
> > UBI9 inherits RHEL rpms from the latest minor release.
> 
> So I guess that means this will resolve itself when RHEL 9.2 hits. Any
> estimate on when that is?

RHEL has a 6 month minor release cadence.  9.1 shipped at the beginning of November, so 9.2 should be available sometime in May.

> > (In reply to raniz from comment #8)
> > > @mheon For me this is a request of a backport since it seems to
> > > be fixed with 9.2 but I have no idea when that will hit - and until it does
> > > we need to maintain a workaround in our image (which is to keep the RPM from
> > > 9.2 Beta up to date and installed in the image).
> > 
> > Do you have a Customer Portal support case opened for this?
> 
> I do not. I'm a consultant and don't have access to my customer's
> subscription. Perhaps I should ask my customer to open one?

Customer cases are always appreciated.  In this specific instance, if the customer can wait for the 9.2 release then it likely isn't necessary because the fix is already planned.

Comment 13 raniz 2023-04-14 10:57:49 UTC
(In reply to Josh Boyer from comment #12)
> 
> RHEL has a 6 month minor release cadence.  9.1 shipped at the beginning of
> November, so 9.2 should be available sometime in May.

In that case I'd say we'll just wait

> 
> Customer cases are always appreciated.  In this specific instance, if the
> customer can wait for the 9.2 release then it likely isn't necessary because
> the fix is already planned.

Can and want are two different things :) But since 9.2 looks to be on the horizon I think it's OK. I'll have to keep an eye out to see if Podman in the 9.2 beta gets any patches and keep our workaround updated in the interim.

I consider this issue closed from our point of view then.

Thanks!

Comment 14 Angelo Brantley 2023-07-31 09:24:24 UTC Comment hidden (spam)
Comment 15 RHEL Program Management 2023-09-11 20:08:23 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 16 RHEL Program Management 2023-09-11 20:09:55 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

Comment 17 marrysa444 2023-11-02 10:51:15 UTC Comment hidden (spam)
Comment 18 misterhulk11 2023-11-19 10:34:11 UTC Comment hidden (spam)
Comment 19 lum387131 2023-12-10 12:43:49 UTC Comment hidden (spam)
Comment 20 Colby Emmitt 2024-01-22 07:36:40 UTC Comment hidden (spam)
Comment 21 DheemFeroz 2024-01-22 08:40:50 UTC
The Podman service runs on the host machine. You can check the service status using a command like systemctl status podman.service; if it's not running, you may need to start or restart the service.

Comment 22 onezcommerce08 2024-01-23 11:48:24 UTC
The error you're encountering with Podman, specifically the "mkdir /sys/fs/cgroup/init: read-only file system" message, suggests that the file system is mounted read-only. To address this, first verify the mount status using mount | grep '/sys/fs/cgroup' and check whether it's mounted read-only. If so, attempt to remount it read-write using mount -o remount,rw /sys/fs/cgroup.


Comment 23 chrisownbey 2024-01-23 13:42:07 UTC
If you haven't already, consider upgrading your Podman installation to the latest version. You can check the official Podman documentation or repository for instructions on how to upgrade.

Comment 24 chrisownbey 2024-01-23 13:43:31 UTC
If you haven't already, consider upgrading your Podman installation to the latest version. You can check the official Podman documentation or repository for instructions on how to upgrade.

Comment 25 freefirebattles00 2024-01-31 17:43:34 UTC
It does, but using privileged containers is not an option for us.

Comment 26 GabrielaJayla 2024-02-01 10:38:15 UTC
Verify that any volumes or paths required for Podman to interact with the host system are correctly mounted into the container. This includes paths like /var/run/podman and /var/lib/containers.

Comment 27 ericbejlic 2024-02-03 10:11:46 UTC
The system service often requires elevated privileges to interact with the host system and manage containers. Containers, by default, operate with restricted privileges for security reasons. Attempting to run the system service within a container may encounter permission issues.

Comment 28 henery 2024-02-06 10:33:03 UTC
Resolving the problem involves troubleshooting the root cause, which may require examining the configuration settings, checking for compatibility with the container environment, or identifying and fixing any bugs in the software. Once the issue is pinpointed and addressed, the service should resume functioning as expected.

Comment 29 henery 2024-02-06 10:33:58 UTC
With Podman, which is a container management tool, a failure of its system service to run within a container could be due to misconfiguration, compatibility issues with the container environment, or even a bug in Podman itself.

