Bug 2185720
| Summary: | Podman system service doesn't work in container | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | raniz |
| Component: | ubi9-container | Assignee: | Jindrich Novy <jnovy> |
| Status: | CLOSED MIGRATED | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 9.1 | CC: | ajia, aswadb397, chrisownbey0, ericbejlic, freefirebattles00, howuae0, juneau, jwboyer, knoxxander454, mheon, onezcommerce08, tsweeney, zesismark |
| Target Milestone: | rc | Keywords: | MigratedToJIRA |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-09-11 20:09:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
raniz
2023-04-11 05:26:17 UTC
It also doesn't work for podman-4.4.1-8.el9 on RHEL 9.3.

```
[root@kvm-01-guest11 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.3 Beta (Plow)
[root@kvm-01-guest11 ~]# rpm -q podman crun kernel
podman-4.4.1-8.el9.x86_64
crun-1.8.3-2.el9.x86_64
kernel-5.14.0-295.el9.x86_64
[root@kvm-01-guest11 ~]# podman run --rm registry.access.redhat.com/ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365 podman system service
Trying to pull registry.access.redhat.com/ubi9/podman@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365...
Getting image source signatures
Checking if image destination supports signatures
Copying blob ef82b56a7d83 done
Copying blob d74e20a2726b done
Copying config d6fb9bd0b4 done
Writing manifest to image destination
Storing signatures
time="2023-04-11T10:43:10Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: mkdir /sys/fs/cgroup/init: read-only file system
```

A workaround is to append the --privileged option to podman run, like below:

```
[root@kvm-01-guest11 ~]# podman run --privileged registry.access.redhat.com/ubi9/podman:9.1.0-5@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365 podman system service -t 0 &
[1] 88852
[root@kvm-01-guest11 ~]# podman ps --no-trunc
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
75a2bb3189620e9b5c0dd5d061ec0d68a1b364be846ca756960b7e3bcbb6d7f2   registry.access.redhat.com/ubi9/podman@sha256:3a9d42016fdd273fb37d053644792aba1268638dd2a066d186c91fa8a357e365   podman system service -t 0   3 minutes ago   Up 3 minutes       magical_ellis
```

---

That's rather strange, as I have it working on UBI 9.1 by installing podman-4.4.0-1.el9.x86_64.rpm from a 9.2 beta system.

```
[build@aea5c6c3df67 ~]$ rpm -q podman
podman-4.4.0-1.el9.x86_64
[build@aea5c6c3df67 ~]$ podman --log-level info system service
INFO[0000] podman filtering at log level info
INFO[0000] Setting parallel job count to 49
INFO[0000] podman filtering at log level info
INFO[0000] Setting parallel job count to 49
INFO[0000] API service listening on "/tmp/podman-run-1000/podman/podman.sock". URI: "unix:/tmp/podman-run-1000/podman/podman.sock"
```
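For anyone needing the same interim fix, a minimal Containerfile sketch of the workaround described above (baking the newer podman RPM into a UBI 9.1 based image) could look like the following. The base image tag is the one used earlier in this bug and may differ from the actual image in question, the RPM version is the one quoted above, and the `rpms/` build-context directory is hypothetical; depending on the build, newer crun/conmon packages may be needed as well.

```
# Sketch only - not an official image recipe.
FROM registry.access.redhat.com/ubi9/podman:9.1.0-5

# Hypothetical build-context directory holding the podman RPM taken from a
# RHEL 9.2 Beta system, as described in the comment above.
COPY rpms/podman-4.4.0-1.el9.x86_64.rpm /tmp/

# Upgrade the podman build shipped in the image (assumes dnf is available
# in the base image and that any additional dependencies can be resolved
# from the configured repositories).
RUN dnf -y install /tmp/podman-4.4.0-1.el9.x86_64.rpm && \
    dnf clean all && \
    rm -f /tmp/podman-4.4.0-1.el9.x86_64.rpm
```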
URI: "unix:///run/podman/podman.sock" INFO[0000] API service listening on "/run/podman/podman.sock" b. w/o --privileged option [root@kvm-01-guest21 ~]# podman run -it registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13 [root@81e629389117 /]# podman --log-level info system service INFO[0000] podman filtering at log level info WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers INFO[0000] podman filtering at log level info INFO[0000] Setting parallel job count to 7 Error: mkdir /sys/fs/cgroup/init: read-only file system 2. rootless [root@kvm-01-guest21 ~]# podman run --privileged -it registry-proxy.engineering.redhat.com/rh-osbs/rhel9-podman:9.1.0-13 [root@9f2454dac717 /]# su test [test@9f2454dac717 /]$ id uid=1001(test) gid=1001(test) groups=1001(test) [test@9f2454dac717 /]$ podman unshare cat /proc/self/uid_map 0 1001 1 1 100000 65536 [test@9f2454dac717 /]$ podman --log-level info system service INFO[0000] podman filtering at log level info INFO[0000] Setting parallel job count to 7 INFO[0000] podman filtering at log level info INFO[0000] Setting parallel job count to 7 Error: mkdir /sys/fs/cgroup/init: permission denied It does, but using privileged containers is not an option for us. @mheon thoughts on this one? The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with Podman 4.4. At some point, that content will be added to UBI9 (I do not know the lifecycle of UBI, so I cannot say when, that is a question for someone else). I don't really have any thoughts other than this. Is the request here is for a 9.1 backport of the specific patch and then subsequently getting a Podman build with that backport into UBI9? If so, the first questions would have to be to the maintainers of UBI, because their policy on accepting updates outside of major RHEL releases will control that. @mheon For me this is a request of a backport since it seems to be fixed with 9.2 but I have no idea when that will hit - and until it does we need to maintain a workaround in our image (which is to keep the RPM from 9.2 Beta up to date and installed in the image). (In reply to Matthew Heon from comment #7) > The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with > Podman 4.4. At some point, that content will be added to UBI9 (I do not know > the lifecycle of UBI, so I cannot say when, that is a question for someone > else). I don't really have any thoughts other than this. UBI9 inherits RHEL rpms from the latest minor release. (In reply to raniz from comment #8) > @mheon For me this is a request of a backport since it seems to > be fixed with 9.2 but I have no idea when that will hit - and until it does > we need to maintain a workaround in our image (which is to keep the RPM from > 9.2 Beta up to date and installed in the image). Do you have a Customer Portal support case opened for this? @(In reply to Josh Boyer from comment #9) > (In reply to Matthew Heon from comment #7) > > The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with > > Podman 4.4. At some point, that content will be added to UBI9 (I do not know > > the lifecycle of UBI, so I cannot say when, that is a question for someone > > else). I don't really have any thoughts other than this. > > UBI9 inherits RHEL rpms from the latest minor release. So I guess that means this will resolve itself when RHEL 9.2 hits. Any estimate on when that is? 
---

(In reply to Matthew Heon from comment #7)
> The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with
> Podman 4.4. At some point, that content will be added to UBI9 (I do not know
> the lifecycle of UBI, so I cannot say when, that is a question for someone
> else). I don't really have any thoughts other than this.

UBI9 inherits RHEL rpms from the latest minor release.

---

(In reply to raniz from comment #8)
> @mheon For me this is a request for a backport since it seems to
> be fixed with 9.2, but I have no idea when that will hit - and until it does
> we need to maintain a workaround in our image (which is to keep the RPM from
> 9.2 Beta up to date and installed in the image).

Do you have a Customer Portal support case opened for this?

---

(In reply to Josh Boyer from comment #9)
> (In reply to Matthew Heon from comment #7)
> > The issue has been fixed upstream in Podman, and will land in RHEL 9.2 with
> > Podman 4.4. At some point, that content will be added to UBI9 (I do not know
> > the lifecycle of UBI, so I cannot say when, that is a question for someone
> > else). I don't really have any thoughts other than this.
>
> UBI9 inherits RHEL rpms from the latest minor release.

So I guess that means this will resolve itself when RHEL 9.2 hits. Any estimate on when that is?

> (In reply to raniz from comment #8)
> > @mheon For me this is a request for a backport since it seems to
> > be fixed with 9.2, but I have no idea when that will hit - and until it does
> > we need to maintain a workaround in our image (which is to keep the RPM from
> > 9.2 Beta up to date and installed in the image).
>
> Do you have a Customer Portal support case opened for this?

I do not. I'm a consultant and don't have access to my customer's subscription. Perhaps I should ask my customer to open one?

---

(In reply to raniz from comment #11)
> So I guess that means this will resolve itself when RHEL 9.2 hits. Any
> estimate on when that is?

RHEL has a 6 month minor release cadence. 9.1 shipped at the beginning of November, so 9.2 should be available sometime in May.

> I do not. I'm a consultant and don't have access to my customer's
> subscription. Perhaps I should ask my customer to open one?

Customer cases are always appreciated. In this specific instance, if the customer can wait for the 9.2 release then it likely isn't necessary, because the fix is already planned.

---

(In reply to Josh Boyer from comment #12)
> RHEL has a 6 month minor release cadence. 9.1 shipped at the beginning of
> November, so 9.2 should be available sometime in May.

In that case I'd say we'll just wait.

> Customer cases are always appreciated. In this specific instance, if the
> customer can wait for the 9.2 release then it likely isn't necessary, because
> the fix is already planned.

Can and want are two different things :) But since 9.2 looks to be on the horizon I think it's ok. I'll have to keep an eye out and see if Podman in 9.2 beta gets any patches and keep our workaround updated in the interim.

I consider this issue closed from our point of view then. Thanks!
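To "keep an eye out" for patched podman builds as described above, one option on a host with the RHEL 9.2 Beta repositories enabled (repository setup is assumed and not shown here) is to ask dnf for the available builds:

```
# Sketch: list every podman build visible to dnf, so a newer 9.2 Beta
# build can be spotted and rolled into the image workaround.
dnf --showduplicates list podman

# Or simply check whether an update to the installed podman is available.
dnf check-update podman
```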
---

Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.