Bug 1732957
| Summary: | rootless containers don't appear to work as expected | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Adam Miller <admiller> |
| Component: | podman | Assignee: | Giuseppe Scrivano <gscrivan> |
| Status: | CLOSED DUPLICATE | QA Contact: | Yuhui Jiang <yujiang> |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 8.4 | CC: | ajia, bbreard, dwalsh, gscrivan, imcleod, jligon, jnovy, lfriedma, lsm5, mheon, pthomas, smccarty, tsweeney, weshen, ypu, yujiang |
| Target Milestone: | rc | Keywords: | Reopened, Triaged |
| Target Release: | 8.3 | Flags: | yujiang: needinfo- |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | crun-0.13-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-03-04 07:59:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1844322 | | |
| Bug Blocks: | | | |
Description (Adam Miller, 2019-07-24 20:19:57 UTC)
I think this is https://github.com/containers/libpod/issues/3024

Is there any chance at all that you've somehow clobbered /usr/libexec/podman/conmon? Or that you've built a custom newer podman instead of using the one provided in the rpm?

I get a similar error with a completely default RHEL 8 setup (aka no packages have been replaced):

```
podman run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys/fs/cgroup\\\" to rootfs \\\"/home/fatherlinux/.local/share/containers/storage/overlay/67fe8189d37696358439dd4a4ca509e0a065ca3691c6e49242737ad54fd6b9b3/merged\\\" at \\\"/home/fatherlinux/.local/share/containers/storage/overlay/67fe8189d37696358439dd4a4ca509e0a065ca3691c6e49242737ad54fd6b9b3/merged/sys/fs/cgroup\\\" caused \\\"operation not permitted\\\"\"" : internal libpod error
```

It's probably relevant here that the image is running systemd as pid 1. I haven't tried this before with rootless, but it seems like we should be testing this. From podman inspect:

```
{
  "created": "2019-06-20T18:45:50.564489368Z",
  "created_by": "/bin/sh -c #(nop) CMD [\"/usr/sbin/init\"]",
  "empty_layer": true
}
```

The /sys/fs/cgroup error is coming out of runc - investigating.

The first one, from the original reproducer - from Podman's perspective, there are no errors in there. It seems like the app in the container starts, but immediately exits with an error. Either we're not configuring things properly for systemd, or something like SELinux is blocking something the container wants.
```
Jul 24 17:15:22 Agincourt.redhat.com audit[25503]: AVC avc: denied { write } for pid=25503 comm="systemd" name="gnome-terminal-server.service" dev="cgroup2" ino=2755 scontext=system_u:sy>
```

Suspicion: systemd is attempting to configure CGroups but has no permissions to do so.

I seem to remember that something like `sudo setsebool container_manage_cgroup=true` is needed for nesting systemd. ...but that still didn't fix it. Perhaps there's another one that's also needed?

Matt, you're on the right track, because the container runs fine w/ setenforce=0.

On F30, things work fine after that `setsebool` command. Going to spin up an 8.1 VM tomorrow morning and see what the differences are... Potentially container-selinux version differences?

I swear I already posted this information but apparently bugzilla "lost" it now that I've refreshed... This is a fresh RHEL 8.1 Beta install, subscribed to subscription-manager, and podman installed via the CDN.

```
[admiller@rhel81beta ~]$ rpm -qf /usr/libexec/podman/conmon
podman-1.4.2-1.module+el8.1.0+3423+f0eda5e0.x86_64
[admiller@rhel81beta ~]$ rpm -Vv podman-1.4.2-1.module+el8.1.0+3423+f0eda5e0.x86_64
......... c /etc/cni/net.d/87-podman-bridge.conflist
......... /usr/bin/podman
......... a /usr/lib/.build-id
......... a /usr/lib/.build-id/81
......... a /usr/lib/.build-id/81/4387cefcdc0d513f40545b0065e70fd68b056a
......... a /usr/lib/.build-id/8b
......... a /usr/lib/.build-id/8b/0628b3f6c8101f947948d9b7571bc3c7d0faed
......... /usr/lib/systemd/system/io.podman.service
......... /usr/lib/systemd/system/io.podman.socket
......... /usr/lib/tmpfiles.d/podman.conf
......... /usr/libexec/podman
......... /usr/libexec/podman/conmon
......... /usr/share/bash-completion/completions/podman
......... /usr/share/containers/libpod.conf
......... /usr/share/doc/podman
......... d /usr/share/doc/podman/CONTRIBUTING.md
......... d /usr/share/doc/podman/README-hooks.md
......... d /usr/share/doc/podman/README.md
......... d /usr/share/doc/podman/code-of-conduct.md
......... d /usr/share/doc/podman/install.md
......... d /usr/share/doc/podman/transfer.md
......... /usr/share/licenses/podman
......... l /usr/share/licenses/podman/LICENSE
......... /usr/share/zsh/site-functions
......... /usr/share/zsh/site-functions/_podman
```

Reproduced on 8.1 beta nightly.

I can still reproduce with SELinux disabled, so I think we might be looking at different issues here? No AVCs even with SELinux enabled. I don't think this is SELinux. Investigating further.

Not Seccomp, either. Going to need to get some debug attached to systemd here to figure out what's going on.

It's specific to F30, and specific to Podman 1.4.2. I tried a battery of Fedora images and UBI8, and got the following:

```
[cloud-user@mheon-rhel8 ~]$ podman ps -a
CONTAINER ID  IMAGE                                       COMMAND  CREATED         STATUS                       PORTS  NAMES
ecaf9d7bc281  registry.access.redhat.com/ubi8/ubi:latest  init     2 seconds ago   Up 1 second ago                     recursing_cohen
d0b24bf921d1  docker.io/library/fedora:28                 init     10 seconds ago  Up 9 seconds ago                    xenodochial_morse
6c9d9e578806  docker.io/library/fedora:29                 init     16 seconds ago  Up 16 seconds ago                   elastic_gagarin
7c6f1f44ff78  docker.io/library/fedora:30                 init     24 seconds ago  Exited (255) 24 seconds ago         admiring_maxwell
```

Here, only F30 based images fail. Something specific to systemd on F30? Native build of 1.4.2 on Fedora does not have this problem.

Did a build of 1.4.4, started working. Investigating further. I scratch-built 1.4.2 and it also works fine. Something's fishy...

Alright, the system podman is working now. Potentially something was set by one of the newer builds that made things start working?

I am surprised that you can run systemd at all inside of a rootless container. It wants to write to the cgroup file system, and is not allowed to as a non-privileged user. You need cgroups v2 for this to work. You can use systemd inside of a rootless container, but you cannot limit resources without cgroups v2.
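The `setsebool` hypothesis above can be checked non-destructively. A minimal sketch, not from the bug itself, assuming a host with SELinux tooling and container-selinux installed; the guard makes it degrade gracefully elsewhere, and the persistent `setsebool -P` line is left commented because it requires root:

```shell
# Check the SELinux boolean the thread suggests is needed for nesting systemd.
if command -v getsebool >/dev/null 2>&1; then
  # Prints e.g. "container_manage_cgroup --> off"
  result=$(getsebool container_manage_cgroup 2>&1)
  # To enable it persistently (root required):
  # setsebool -P container_manage_cgroup on
else
  result="SELinux tools not present; skipping check"
fi
echo "$result"
```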
systemd requires a bunch of other mounts to work correctly, so it is better if you just use the run --systemd feature, which will automatically set everything up. It requires systemd to be the command you are launching. For me it was enough to run something like: `podman run --rm fedora /usr/bin/init`

The --systemd flag is automatically applied, and it works for everything except Fedora 30 based images. I think we have a problem with the systemd version there.

I've gone back and confirmed, and my earlier testing of manual builds was actually done as root (oops). I retested without root, and version does not appear to be an issue - no Podman version on RHEL 8.1 works with F30 images and systemd. I'm going to contact the systemd team to ask for assistance debugging - getting meaningful logs out of systemd would probably solve this.

Btw, it does not work that well on Fedora either. I tried `podman run --rm -it fedora /usr/bin/systemd` and it runs, but the processes from the container are placed under a cgroup that belongs to my dbus.service, which is, well, weird. https://paste.fedoraproject.org/paste/wHfl~lD0sTcJz-lHdFtk2A

Per discussion with the systemd maintainers, it seems like this is likely systemd refusing to delegate from the CGroups v1 hierarchy to an unprivileged user. As such, it seems like this never worked properly, and systemd has begun erroring more loudly on this fact, preventing the container from running.

Per discussion with the systemd folks, it seems that systemd in rootless containers cannot be supported until CGroups v2 lands, which is RHEL9 at earliest.

I actually think cgroups v2 will arrive in RHEL8 earlier than that. It will not be default until RHEL9 at earliest.

This is beyond 8.2, but that is the latest I can set for the bugzilla. Moved to 8.3 release.

Dan Walsh, another crun and 8.3. Can we close this out or set it to Post for Jindrich?

Right, let's say that cgroup v2 support will be in RHEL8.3 along with crun.
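The cgroup v1 vs. v2 distinction discussed above is easy to check on any host. A hedged sketch (not part of the bug's reproducer): on cgroup v2 `/sys/fs/cgroup` is a `cgroup2fs` mount, while on legacy v1 it is a `tmpfs` holding per-controller mounts, and only the former lets systemd delegate a subtree to an unprivileged user:

```shell
# Identify which cgroup hierarchy this host exposes.
fstype=$(stat -fc %T /sys/fs/cgroup)
case "$fstype" in
  cgroup2fs) echo "cgroup v2 (unified): rootless systemd delegation is possible" ;;
  tmpfs)     echo "cgroup v1 (legacy): systemd refuses to delegate to an unprivileged user" ;;
  *)         echo "unexpected filesystem type: $fstype" ;;
esac
```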
I'm assuming this too will require https://bugzilla.redhat.com/show_bug.cgi?id=1844322 to be completed. Assigning to Jindrich for any packaging needs that might be required once the blocking BZ clears.

crun is now part of container-tools-rhel8-8.3.0

Does --cgroups=disabled or --cgroupns=host fix the problem?

(In reply to Daniel Walsh from comment #47)
> Does --cgroups=disabled or --cgroupns=host fix the problem?

I just appended the above options to podman run without removing the other options; I still got the error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied" like before, although the fedora30-test-container:1.9.2 container is in running state.

1. --cgroups=disabled

[test@kvm-08-guest29 ~]$ podman --cgroup-manager=systemd --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0000] Running with no CGroups DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 -u fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata -p /run/user/1000/containers/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/pidfile -n beautiful_ishizaka --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /run/user/1000/containers/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78]" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied DEBU[0000] Received: 24075 INFO[0000] 
Got Conmon PID as 24072 DEBU[0000] Created container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 in OCI runtime DEBU[0000] Starting container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 with command [/usr/sbin/init] DEBU[0000] Started container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 DEBU[0000] Called run.PersistentPostRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2) [test@kvm-08-guest29 ~]$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5bbe09629ef4 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 3 seconds ago Up 3 seconds ago beautiful_lichterman 2. --cgroupns=host [test@kvm-08-guest29 ~]$ podman --cgroup-manager=systemd --runtime=`which crun` --log-level=debug run --cgroupns=host --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called run.PersistentPreRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... 
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a -u 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata -p /run/user/1000/containers/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/pidfile -n festive_moore --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a]" INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a.scope [conmon:d]: failed to write to /proc/self/oom_score_adj: 
Permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a
DEBU[0000] Tearing down network namespace at /run/user/1000/netns/cni-cf83cf57-f996-5c26-d1b4-15d957d2a7d5 for container 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a
DEBU[0000] unmounted container "08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a"
DEBU[0000] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID  IMAGE                                          COMMAND         CREATED         STATUS             PORTS  NAMES
5bbe09629ef4  quay.io/ansible/fedora30-test-container:1.9.2  /usr/sbin/init  48 seconds ago  Up 48 seconds ago         beautiful_lichterman

I think you can remove all of the other options and it should work. The issue is that RHEL 8 does not have all of the cgroup v2 support enabled. I think we need to get input from Giuseppe.

(In reply to Daniel Walsh from comment #49)
> I think you can remove all of the other options and it should work. The
> issue is that RHEL 8 does not have all of the cgroup v2 support enabled.
> I think we need to get input from Giuseppe.

Thank you Daniel!

ACK, it works for me after removing the other options.

[root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0019] /usr/bin/conmon messages will be logged to syslog DEBU[0019] Running with no CGroups DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 -u 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata -p /var/run/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/pidfile -n eloquent_aryabhata --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -l k8s-file:/var/lib/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /var/run/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57]" DEBU[0019] Received: 49783 INFO[0019] Got Conmon PID as 49780 DEBU[0019] Created container 
4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 in OCI runtime DEBU[0019] Starting container 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 with command [/usr/sbin/init] DEBU[0019] Started container 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2) [root@kvm-08-guest29 ~]# echo $? 0 [root@kvm-08-guest29 ~]# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 4c0d9b5ab96b quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 25 seconds ago Up 25 seconds ago eloquent_aryabhata [root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... 
DEBU[0019] /usr/bin/conmon messages will be logged to syslog DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 -u d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata -p /var/run/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/pidfile -n reverent_wilson --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -s -l k8s-file:/var/lib/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1]" INFO[0019] Running conmon under slice machine.slice and unitName libpod-conmon-d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1.scope DEBU[0019] Received: 50267 INFO[0019] Got Conmon PID as 50264 DEBU[0019] 
Created container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 in OCI runtime
DEBU[0019] Starting container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 with command [/usr/sbin/init]
DEBU[0019] Started container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1
d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1
DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2)
[root@kvm-08-guest29 ~]# echo $?
0
[root@kvm-08-guest29 ~]# podman ps
CONTAINER ID  IMAGE                                          COMMAND         CREATED         STATUS             PORTS  NAMES
d0205dfbba18  quay.io/ansible/fedora30-test-container:1.9.2  /usr/sbin/init  18 seconds ago  Up 18 seconds ago         reverent_wilson

Alex, can we set this to verified? Dan, is that OK with you given the current state, or should we set it back to assigned and change the Target Version to 8.4?

I am still worried about this working out of the box. I.e., if I take a RHEL 8.4 box and reboot it into cgroup v2 mode, does Podman just work? Is cgroups configured in the expected way for Podman to just work?

(In reply to Alex Jia from comment #50)
> (In reply to Daniel Walsh from comment #49)
> > I think you can remove all of the other options and it should work. The
> > issue is RHEL8 does not have all of the cgroupV2 enabled. I think we need
> > to get input from Giuseppe.
>
> Thank you Daniel!
>
> ACK, it works for me after removing other options.
> > I forget to change user to rootless, I still got error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied" like before > [root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run > --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2 [test@kvm-08-guest29 ~]$ podman unshare cat /proc/self/uid_map 0 1000 1 1 100000 65536 [test@kvm-08-guest29 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... DEBU[0019] /usr/bin/conmon messages will be logged to syslog DEBU[0019] Running with no CGroups DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 -u d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata -p /run/user/1000/containers/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/pidfile -n hopeful_ramanujan --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /run/user/1000/containers/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot 
--exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4]" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied DEBU[0019] Received: 57339 INFO[0019] Got Conmon PID as 57336 DEBU[0019] Created container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 in OCI runtime DEBU[0019] Starting container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 with command [/usr/sbin/init] DEBU[0019] Started container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2) [test@kvm-08-guest29 ~]$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d77bd92068cd quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 8 seconds ago Up 7 seconds ago hopeful_ramanujan > > [root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run > --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2 [test@kvm-08-guest29 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called 
run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... DEBU[0019] /usr/bin/conmon messages will be logged to syslog DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259 -u f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata -p /run/user/1000/containers/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/pidfile -n serene_bhabha --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 
f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259]"
INFO[0019] Running conmon under slice user.slice and unitName libpod-conmon-f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0019] Received: -1
DEBU[0019] Cleaning up container f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259
DEBU[0019] Tearing down network namespace at /run/user/1000/netns/cni-98accba7-3c78-4bb5-8866-7e7e7c1e1647 for container f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259
DEBU[0019] unmounted container "f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259"
DEBU[0019] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-08-guest29 ~]$ echo $?
127
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

(In reply to Daniel Walsh from comment #52)
> I am still worried about this working out of the box. I.e., if I take a RHEL 8.4
> box and reboot it into cgroup v2 mode, does Podman just work? Is cgroups
> configured in the expected way for Podman to just work?

Got the same result as Comment 54 on 8.4.
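The recurring `[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied` message above is consistent with how the kernel treats that file: an unprivileged process may raise its OOM score adjustment, but lowering it requires CAP_SYS_RESOURCE, so rootless conmon's attempt fails with EPERM while the container itself keeps running. A small illustrative sketch (not part of the bug's reproducer):

```shell
# Reading oom_score_adj is always allowed; only lowering it is privileged,
# which is why rootless conmon's write attempt gets EPERM.
cur=$(cat /proc/self/oom_score_adj)
echo "oom_score_adj of this shell: $cur"
```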
[test@kvm-05-guest11 ~]$ cat /etc/redhat-release Red Hat Enterprise Linux release 8.4 Beta (Ootpa) [test@kvm-05-guest11 ~]$ rpm -q podman crun runc kernel podman-2.0.5-4.module+el8.3.0+8152+c5c3262e.x86_64 crun-0.14.1-2.module+el8.3.0+8152+c5c3262e.x86_64 runc-1.0.0-68.rc92.module+el8.3.0+8152+c5c3262e.x86_64 kernel-4.18.0-239.el8.x86_64 [test@kvm-05-guest11 ~]$ mount|grep cgroup cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate) [test@kvm-05-guest11 ~]$ podman info | grep -iA2 runtime ociRuntime: name: crun package: crun-0.14.1-2.module+el8.3.0+8152+c5c3262e.x86_64 1. --cgroups=disabled [test@kvm-05-guest11 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied ...ignore... [test@kvm-05-guest11 ~]$ echo $? 0 [test@kvm-05-guest11 ~]$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b8a50b2cb46e quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 18 seconds ago Up 18 seconds ago stoic_tu 2. --cgroupns=host [test@kvm-05-guest11 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2 INFO[0000] podman filtering at log level debug DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2) ...ignore... [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied ...ignore... 
DEBU[0034] Error unmounting /home/test/.local/share/containers/storage/overlay/1850403959b8445219258e18e547484f1fbf757e2f76a350c471a356a79e7e31/merged with fusermount3 - exec: "fusermount3": executable file not found in $PATH
DEBU[0034] Error unmounting /home/test/.local/share/containers/storage/overlay/1850403959b8445219258e18e547484f1fbf757e2f76a350c471a356a79e7e31/merged with fusermount - exec: "fusermount": executable file not found in $PATH
DEBU[0034] unmounted container "d31ea90bede4bfe19c62cc2d205b8316314710c489dd70d30705c0d625186ecb"
DEBU[0034] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-05-guest11 ~]$ echo $?
127
[test@kvm-05-guest11 ~]$ podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[test@kvm-05-guest11 ~]$ podman ps -a
CONTAINER ID  IMAGE                                          COMMAND         CREATED         STATUS   PORTS  NAMES
d31ea90bede4  quay.io/ansible/fedora30-test-container:1.9.2  /usr/sbin/init  11 seconds ago  Created         optimistic_bose

Tested with the latest podman-2.0.5-5.module+el8.3.0+8221+97165c3f.x86_64 and crun-0.14.1-2.module+el8.3.0+8221+97165c3f.x86_64. Although I still get the error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied", the container is in running status. Is that acceptable on 8.3, or do we need to change the Target Release to 8.4?

...ignore...
INFO[0020] Running conmon under slice user.slice and unitName libpod-conmon-99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0020] Received: 71971
INFO[0020] Got Conmon PID as 71968
DEBU[0020] Created container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806 in OCI runtime
DEBU[0020] Starting container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806 with command [/usr/sbin/init]
DEBU[0020] Started container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806
99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806
DEBU[0020] Called run.PersistentPostRunE(podman --log-level=debug run --detach --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID  IMAGE                                          COMMAND         CREATED         STATUS             PORTS  NAMES
99e54eac1f92  quay.io/ansible/fedora30-test-container:1.9.2  /usr/sbin/init  11 seconds ago  Up 11 seconds ago         sharp_curie

...ignore...
INFO[0020] Running conmon under slice user.slice and unitName libpod-conmon-3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0020] Received: 72186
INFO[0020] Got Conmon PID as 72183
DEBU[0020] Created container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505 in OCI runtime
DEBU[0020] Starting container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505 with command [/usr/sbin/init]
DEBU[0020] Started container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505
3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505
DEBU[0020] Called run.PersistentPostRunE(podman --log-level=debug run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID  IMAGE                                          COMMAND         CREATED        STATUS            PORTS  NAMES
3cc0b2762466  quay.io/ansible/fedora30-test-container:1.9.2  /usr/sbin/init  5 seconds ago  Up 5 seconds ago         mystifying_lehmann

Alex, thanks for the testing updates. Dan Walsh, I'm thinking we need to bump this to RHEL 8.4. Dan or Giuseppe, any contrary thoughts?
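For reference, rebooting a RHEL 8 box into cgroup v2 mode (the scenario comment #52 asks about) is typically done by adding a kernel boot argument. A sketch, assuming the stock RHEL 8 `grubby` tool; it prints the command rather than executing it, since the real change edits bootloader entries, needs root, and requires a reboot:

```shell
#!/bin/sh
# Sketch (assumption: RHEL 8 managing boot entries with grubby):
# show how to switch the host to the unified cgroup v2 hierarchy.
kernel_arg="systemd.unified_cgroup_hierarchy=1"
if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
    echo "already on cgroup v2"
else
    echo "to enable cgroup v2, run as root and then reboot:"
    echo "  grubby --update-kernel=ALL --args=\"$kernel_arg\""
fi
```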