Description of problem:
When running certain container image instances as rootless, they fail, but they work when run as root.

Version-Release number of selected component (if applicable):
podman-1.4.2-1.module+el8.1.0+3423+f0eda5e0.x86_64

How reproducible:
Always

Steps to Reproduce:
1. podman --log-level=debug run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
or
1. podman --log-level=debug run --detach --privileged=false quay.io/ansible/fedora30-test-container:1.9.2

Actual results:
The container is not running.

Expected results:
The container should be running (it works if I run the podman command as root).

Additional info:
[admiller@rhel81beta ~]$ podman --log-level=debug run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at /home/admiller/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/admiller/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/admiller/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/admiller/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1000
DEBU[0000] Podman detected system restart - performing state refresh
DEBU[0000] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]quay.io/ansible/fedora30-test-container:1.9.2"
DEBU[0000] reference "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]quay.io/ansible/fedora30-test-container:1.9.2" does not resolve to an image ID
DEBU[0000] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]quay.io/ansible/fedora30-test-container:1.9.2"
Trying to pull quay.io/ansible/fedora30-test-container:1.9.2...
DEBU[0000] reference rewritten from 'quay.io/ansible/fedora30-test-container:1.9.2' to 'quay.io/ansible/fedora30-test-container:1.9.2'
DEBU[0000] Trying to pull "quay.io/ansible/fedora30-test-container:1.9.2"
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0000] Using "default-docker" configuration
DEBU[0000] No signature storage configuration found for quay.io/ansible/fedora30-test-container:1.9.2
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0000] GET https://quay.io/v2/
DEBU[0000] Ping https://quay.io/v2/ status 401
DEBU[0000] GET https://quay.io/v2/auth?scope=repository%3Aansible%2Ffedora30-test-container%3Apull&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/manifests/1.9.2
DEBU[0000] Using blob info cache at /home/admiller/.local/share/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] IsRunningImageAllowed for image docker:quay.io/ansible/fedora30-test-container:1.9.2
DEBU[0000] Using default policy section
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.docker.distribution.manifest.v1+prettyjws, ordered candidate list [application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:f18795182921031e1541c09da32aaa49b219f16e6de4f245844baa7563088d91
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:f18795182921031e1541c09da32aaa49b219f16e6de4f245844baa7563088d91
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:8f6ac7ed4a91c9630083524efcef2f59f27404320bfee44397f544c252ad4bd4
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:8f6ac7ed4a91c9630083524efcef2f59f27404320bfee44397f544c252ad4bd4
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
DEBU[0000] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:a065df4b1b04e440b8c878108f01ee53be488b6ace013fe4af4e20113fbe6726
DEBU[0000] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:a065df4b1b04e440b8c878108f01ee53be488b6ace013fe4af4e20113fbe6726
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
DEBU[0001] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:0f02c63902c8cc5ef9c9cd43ac704a913ccffc5161ad9fb061411d165565c0a3
DEBU[0001] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:0f02c63902c8cc5ef9c9cd43ac704a913ccffc5161ad9fb061411d165565c0a3
DEBU[0001] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:dcd083c58b68e1479ef4329838eb2daa8cff2378de63b819944e9b5be46b566e
DEBU[0001] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:dcd083c58b68e1479ef4329838eb2daa8cff2378de63b819944e9b5be46b566e
DEBU[0001] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:51d311e57670a37eed0250da375783d6076100f57901e519a6b92cbd202c48ee
DEBU[0001] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:51d311e57670a37eed0250da375783d6076100f57901e519a6b92cbd202c48ee
DEBU[0001] Downloading /v2/ansible/fedora30-test-container/blobs/sha256:b110bbd255d9861e74095b73b43548e6d8cafdf2646473b6d464620ef927d58a
DEBU[0001] GET https://quay.io/v2/ansible/fedora30-test-container/blobs/sha256:b110bbd255d9861e74095b73b43548e6d8cafdf2646473b6d464620ef927d58a
Copying blob a3ed95caeb02 [======================================] 1.5GiB / 1.5GiB
Copying blob a3ed95caeb02 [======================================] 1.5GiB / 1.5GiB
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob 8f6ac7ed4a91 [--------------------------------------] 1.9MiB / 1.5GiB
Copying blob a065df4b1b04 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob 8f6ac7ed4a91 [--------------------------------------] 2.5MiB / 1.5GiB
Copying blob a065df4b1b04 done
Copying blob f18795182921 [--------------------------------------] 416.6KiB / 1.5GiB
Copying blob dcd083c58b68 [======================================] 1.5GiB / 1.5GiB
Copying blob a3ed95caeb02 [======================================] 1.5GiB / 1.5GiB
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob 8f6ac7ed4a91 [--------------------------------------] 5.0MiB / 1.5GiB
Copying blob a065df4b1b04 done
Copying blob f18795182921 [--------------------------------------] 1.3MiB / 1.5GiB
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob 8f6ac7ed4a91 done
Copying blob a065df4b1b04 done
Copying blob f18795182921 done
Copying blob dcd083c58b68 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob 51d311e57670 done
Copying blob b110bbd255d9 done
Copying blob 0f02c63902c8 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob a3ed95caeb02 skipped: already exists
Copying blob ed0c45bc6ae0 done
Copying blob 6d5237e0818c done
Copying blob b0e81ef959ed done
Writing manifest to image destination
Storing signatures
DEBU[0040] Applying tar in /home/admiller/.local/share/containers/storage/overlay/9f15ecbdf2f3e1927bc620e14cc6b4a78ca52336896b0efdfb5d853d65852719/diff
DEBU[0046] Applying tar in /home/admiller/.local/share/containers/storage/overlay/afbd9f544e5bc95badcc4138863ed3fffd1b3750a8c71b8a8c860a21aeb75924/diff
DEBU[0046] Applying tar in /home/admiller/.local/share/containers/storage/overlay/9c46503363efa06133e5d0828e8dad23a8d430ef14acba5d73fab4906bdc742f/diff
DEBU[0056] Applying tar in /home/admiller/.local/share/containers/storage/overlay/856f4b9f0ec1280d9c95b09d08dfb2050e367c6742487d56b3488be11ac6bed4/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/8e67f47b469470f273336eabf80e673c46322d2f13c305187aeaf3a66b9e6bb3/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/4f26c5fbcf1448343717afeab5c3086c95c14efde5e79e1ca339c9c279b89aa3/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/648e142880751b4d3b3dc92510836d718d7d66ba2f75d57edb507f301c434fac/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/10d745c67fd89df28dfc998cf36f687865d640eee5654ce7f0bea2734aeb997d/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/b872cb8d1e2adc111bd4e2ae8a0573613959f42dc02f0af293c57400604bad56/diff
DEBU[0057] Applying tar in /home/admiller/.local/share/containers/storage/overlay/7cea2c9d1a41aadce5a2ed1171d8387c474f955271458b6f84f44a98db2906c6/diff
DEBU[0057] setting image creation date to 2019-06-20 18:45:50.564489368 +0000 UTC
DEBU[0057] created new image ID "13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625"
DEBU[0057] set names of image "13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625" to [quay.io/ansible/fedora30-test-container:1.9.2]
DEBU[0057] saved image metadata "{}"
DEBU[0057] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]quay.io/ansible/fedora30-test-container:1.9.2"
DEBU[0057] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625"
DEBU[0057] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625"
DEBU[0057] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625"
DEBU[0057] User mount /sys/fs/cgroup:/sys/fs/cgroup options [ro]
DEBU[0057] Got mounts: [{/sys/fs/cgroup bind /sys/fs/cgroup [ro]}]
DEBU[0057] Got volumes: [0xc00029c280 0xc00029c1c0]
DEBU[0057] Using slirp4netns netmode
DEBU[0057] Adding mount /proc
DEBU[0057] Adding mount /dev
DEBU[0057] Adding mount /dev/pts
DEBU[0057] Adding mount /dev/mqueue
DEBU[0057] Adding mount /sys
DEBU[0057] created OCI spec and options for new container
DEBU[0057] Allocated lock 0 for container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7
DEBU[0057] parsed reference into "[overlay@/home/admiller/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs]@13a3af920530b594d8a0d91e91d768a49c5782bcefeb24db00ae96fb213ad625"
DEBU[0058] created container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7"
DEBU[0058] container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" has work directory "/home/admiller/.local/share/containers/storage/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata"
DEBU[0058] container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" has run directory "/run/user/1000/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata"
DEBU[0058] Creating new volume 0fecdd202c4ee902681fd6687ac8cf0b74b432a2d4ef2711c2c7e4f52aabca9d for container
DEBU[0058] overlay: mount_data=lowerdir=/home/admiller/.local/share/containers/storage/overlay/l/6AMYBTAVKHTNN7XZHBKUPSJMVB:/home/admiller/.local/share/containers/storage/overlay/l/M3S4Y32KKIRB3JNCQ5EZKSJYK6:/home/admiller/.local/share/containers/storage/overlay/l/HH4ULBTFU6OJRF2LTRCFR3H3JH:/home/admiller/.local/share/containers/storage/overlay/l/DQPYOWHFWLDJV76IMZYJPJ3GFJ:/home/admiller/.local/share/containers/storage/overlay/l/3OS23LUVJDLQPOBQMLHJXWYKGY:/home/admiller/.local/share/containers/storage/overlay/l/76A5JLIGL5KUZWAOAKPABDTGWY:/home/admiller/.local/share/containers/storage/overlay/l/CIVNPOMDJTVL22T3XRH773XQHZ:/home/admiller/.local/share/containers/storage/overlay/l/2CZ7NWIFGE37YLFN3DOM3BF3IX:/home/admiller/.local/share/containers/storage/overlay/l/FAQTXBU24LG7P4SPLGXJX6M6PP:/home/admiller/.local/share/containers/storage/overlay/l/UH4SK44YDLWI5IMH6OW7SDKBSF,upperdir=/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/diff,workdir=/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/work,context="system_u:object_r:container_file_t:s0:c87,c169"
DEBU[0058] mounted container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" at "/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged"
DEBU[0058] Creating dest directory: /home/admiller/.local/share/containers/storage/volumes/0fecdd202c4ee902681fd6687ac8cf0b74b432a2d4ef2711c2c7e4f52aabca9d/_data
DEBU[0058] Calling TarUntar(/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged/tmp, /home/admiller/.local/share/containers/storage/volumes/0fecdd202c4ee902681fd6687ac8cf0b74b432a2d4ef2711c2c7e4f52aabca9d/_data)
DEBU[0058] TarUntar(/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged/tmp /home/admiller/.local/share/containers/storage/volumes/0fecdd202c4ee902681fd6687ac8cf0b74b432a2d4ef2711c2c7e4f52aabca9d/_data)
DEBU[0058] Creating new volume d9b891f9ce0a0b5fedb7a939eea193a0e6f56cad0c677763d3540c6cac1ec9e4 for container
DEBU[0058] mounted container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" at "/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged"
DEBU[0058] Creating dest directory: /home/admiller/.local/share/containers/storage/volumes/d9b891f9ce0a0b5fedb7a939eea193a0e6f56cad0c677763d3540c6cac1ec9e4/_data
DEBU[0058] Calling TarUntar(/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged/run, /home/admiller/.local/share/containers/storage/volumes/d9b891f9ce0a0b5fedb7a939eea193a0e6f56cad0c677763d3540c6cac1ec9e4/_data)
DEBU[0058] TarUntar(/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged/run /home/admiller/.local/share/containers/storage/volumes/d9b891f9ce0a0b5fedb7a939eea193a0e6f56cad0c677763d3540c6cac1ec9e4/_data)
DEBU[0058] New container created "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7"
DEBU[0058] container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" has CgroupParent "/libpod_parent/libpod-62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7"
DEBU[0058] mounted container "62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7" at "/home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged"
DEBU[0058] Created root filesystem for container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 at /home/admiller/.local/share/containers/storage/overlay/f9044e2d9c91af77db328fed0d8a007af676c128fa984829ead355424899fecb/merged
WARN[0058] error mounting secrets, skipping: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets: permission denied
DEBU[0058] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0058] Created OCI spec for container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 at /home/admiller/.local/share/containers/storage/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata/config.json
DEBU[0058] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0058] running conmon: /usr/libexec/podman/conmon args=[-c 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 -u 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 -n awesome_matsumoto -r /usr/bin/runc -b /home/admiller/.local/share/containers/storage/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata -p /run/user/1000/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata/pidfile --exit-dir /run/user/1000/libpod/tmp/exits --conmon-pidfile /run/user/1000/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/admiller/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/admiller/.local/share/containers/storage/overlay-containers/62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7/userdata/ctr.log --log-level debug --syslog]
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
WARN[0058] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0058] Received container pid: 14800
DEBU[0058] Created container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 in OCI runtime
DEBU[0058] Starting container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7 with command [/usr/sbin/init]
DEBU[0058] Started container 62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7
62f0f533382af4154bdf2afa65f65158a8154d958af14dcdaa412be79494b6d7
[admiller@rhel81beta ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[admiller@rhel81beta ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62f0f533382a quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 2 minutes ago Exited (255) 2 minutes ago awesome_matsumoto
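For what it's worth, two commands that should surface why init exited (neither was run as part of the report above; the short ID comes from the podman ps -a output):

# capture the container's own output
podman logs 62f0f533382a
# and the exit code podman recorded for it
podman inspect --format '{{.State.ExitCode}}' 62f0f533382a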
I think this is https://github.com/containers/libpod/issues/3024

Is there any chance at all that you've somehow clobbered /usr/libexec/podman/conmon? Or that you've built a custom, newer podman instead of using the one provided in the RPM?
I get a similar error with a completely default RHEL 8 setup (i.e., no packages have been replaced):

podman run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys/fs/cgroup\\\" to rootfs \\\"/home/fatherlinux/.local/share/containers/storage/overlay/67fe8189d37696358439dd4a4ca509e0a065ca3691c6e49242737ad54fd6b9b3/merged\\\" at \\\"/home/fatherlinux/.local/share/containers/storage/overlay/67fe8189d37696358439dd4a4ca509e0a065ca3691c6e49242737ad54fd6b9b3/merged/sys/fs/cgroup\\\" caused \\\"operation not permitted\\\"\"" : internal libpod error
It's probably relevant here that the image is running systemd as PID 1. I haven't tried this before with rootless, but it seems like we should be testing this.

From podman inspect:
{
  "created": "2019-06-20T18:45:50.564489368Z",
  "created_by": "/bin/sh -c #(nop) CMD [\"/usr/sbin/init\"]",
  "empty_layer": true
}
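If anyone wants to confirm what an image launches as PID 1, something like this should do it (the field name is assumed from the usual podman image inspect JSON):

podman image inspect --format '{{.Config.Cmd}}' quay.io/ansible/fedora30-test-container:1.9.2
# expected to print [/usr/sbin/init]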
The /sys/fs/cgroup error is coming out of runc - investigating. The first one, from the original reproducer - from Podman's perspective, there are no errors in there. It seems like the app in the container starts, but immediately exits with an error. Either we're not configuring things properly for systemd, or something like SELinux is blocking something the container wants.
Jul 24 17:15:22 Agincourt.redhat.com audit[25503]: AVC avc: denied { write } for pid=25503 comm="systemd" name="gnome-terminal-server.service" dev="cgroup2" ino=2755 scontext=system_u:sy>

Suspicion: systemd is attempting to configure cgroups but has no permission to do so.
I seem to remember that something like sudo setsebool container_manage_cgroup=true is needed for nesting systemd.

...but that still didn't fix it. Perhaps there's another boolean that's also needed?

Matt, you're on the right track, because the container runs fine with setenforce 0.
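For anyone retracing this, a rough SELinux triage looks something like the following (the boolean is the one mentioned above; ausearch comes from the audit package):

sudo ausearch -m avc -ts recent                   # list recent AVC denials
sudo setsebool -P container_manage_cgroup true    # persistently allow containers to manage the cgroup fs
sudo setenforce 0                                 # bisection only, not a fix: permissive mode, then retry the container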
On F30, things work fine after that `setsebool` command. Going to spin up an 8.1 VM tomorrow morning and see what the differences are... Potentially container-selinux version differences?
I swear I already posted this information but apparently Bugzilla "lost" it now that I've refreshed...

This is a fresh RHEL 8.1 Beta install, subscribed to subscription-manager, and podman installed via the CDN.

[admiller@rhel81beta ~]$ rpm -qf /usr/libexec/podman/conmon
podman-1.4.2-1.module+el8.1.0+3423+f0eda5e0.x86_64
[admiller@rhel81beta ~]$ rpm -Vv podman-1.4.2-1.module+el8.1.0+3423+f0eda5e0.x86_64
......... c /etc/cni/net.d/87-podman-bridge.conflist
......... /usr/bin/podman
......... a /usr/lib/.build-id
......... a /usr/lib/.build-id/81
......... a /usr/lib/.build-id/81/4387cefcdc0d513f40545b0065e70fd68b056a
......... a /usr/lib/.build-id/8b
......... a /usr/lib/.build-id/8b/0628b3f6c8101f947948d9b7571bc3c7d0faed
......... /usr/lib/systemd/system/io.podman.service
......... /usr/lib/systemd/system/io.podman.socket
......... /usr/lib/tmpfiles.d/podman.conf
......... /usr/libexec/podman
......... /usr/libexec/podman/conmon
......... /usr/share/bash-completion/completions/podman
......... /usr/share/containers/libpod.conf
......... /usr/share/doc/podman
......... d /usr/share/doc/podman/CONTRIBUTING.md
......... d /usr/share/doc/podman/README-hooks.md
......... d /usr/share/doc/podman/README.md
......... d /usr/share/doc/podman/code-of-conduct.md
......... d /usr/share/doc/podman/install.md
......... d /usr/share/doc/podman/transfer.md
......... /usr/share/licenses/podman
......... l /usr/share/licenses/podman/LICENSE
......... /usr/share/zsh/site-functions
......... /usr/share/zsh/site-functions/_podman
Reproduced on 8.1 beta nightly. I can still reproduce with SELinux disabled, so I think we might be looking at different issues here?
No AVCs even with SELinux enabled. I don't think this is SELinux. Investigating further.
Not Seccomp, either. Going to need to get some debug attached to systemd here to figure out what's going on.
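A sketch of how seccomp can be ruled out for a single test run (this just disables the default profile; not a recommendation):

podman run --detach --security-opt seccomp=unconfined quay.io/ansible/fedora30-test-container:1.9.2
podman ps -a   # if the container still exits, seccomp is not the culprit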
It's specific to F30, and specific to Podman 1.4.2. I tried a battery of Fedora images and UBI8, and got the following:

[cloud-user@mheon-rhel8 ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ecaf9d7bc281 registry.access.redhat.com/ubi8/ubi:latest init 2 seconds ago Up 1 second ago recursing_cohen
d0b24bf921d1 docker.io/library/fedora:28 init 10 seconds ago Up 9 seconds ago xenodochial_morse
6c9d9e578806 docker.io/library/fedora:29 init 16 seconds ago Up 16 seconds ago elastic_gagarin
7c6f1f44ff78 docker.io/library/fedora:30 init 24 seconds ago Exited (255) 24 seconds ago admiring_maxwell

Here, only F30-based images fail. Something specific to systemd on F30?
Native build of 1.4.2 on Fedora does not have this problem.
Did a build of 1.4.4, started working. Investigating further.
I scratch-built 1.4.2 and it also works fine. Something's fishy...
Alright, the system podman is working now. Potentially something was set by one of the newer builds that made things start working?
I am surprised that you can run systemd at all inside of a rootless container. It wants to write to the cgroup file system, and is not allowed to as a non-privileged user. You need cgroups v2 for this to work.
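A quick way to check which cgroup version a host is actually running (expected outputs noted in the comment):

stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on a cgroups v2 (unified) host, "tmpfs" on v1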
You can use systemd inside of a rootless container, but you cannot limit resources without cgroups v2. systemd requires a bunch of other mounts to work correctly, so it is better if you just use the run --systemd feature, which automatically sets everything up. It requires systemd to be the command you are launching. For me it was enough to run something like:

podman run --rm fedora /usr/bin/init
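As a rough illustration of what --systemd sets up (the container name and grep pattern here are just illustrative):

podman run --detach --name sysd-test fedora /usr/bin/init            # --systemd defaults to true and auto-detects init
podman exec sysd-test mount | grep -E '/run|/sys/fs/cgroup|/tmp'     # tmpfs on /run and /tmp, plus a cgroup mount for systemd
podman rm -f sysd-test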
The --systemd flag is automatically applied, and it works for everything except Fedora 30-based images. I think we have a problem with the systemd version there. I've gone back and confirmed that my earlier testing of manual builds was actually done as root (oops). I retested without root, and the Podman version does not appear to be an issue - no Podman version on RHEL 8.1 works with F30 images and systemd. I'm going to contact the systemd team to ask for assistance debugging - getting meaningful logs out of systemd would probably solve this.
Btw, it does not work that well on Fedora either. I tried podman run --rm -it fedora /usr/bin/systemd and it runs, but the processes from the container are placed under a cgroup that belongs to my dbus.service, which is, well, weird. https://paste.fedoraproject.org/paste/wHfl~lD0sTcJz-lHdFtk2A
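One way to see exactly where a container's processes land in the cgroup hierarchy (the container name is a placeholder):

pid=$(podman inspect --format '{{.State.Pid}}' <container>)
cat /proc/$pid/cgroup   # shows the cgroup path the runtime actually placed the container in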
Per discussion with the systemd maintainers, it seems like this is likely systemd refusing to delegate from the cgroups v1 hierarchy to an unprivileged user. As such, it seems like this never worked properly, and systemd has begun erroring more loudly on this fact, preventing the container from running.
Per discussion with the systemd folks, it seems that systemd in rootless containers cannot be supported until cgroups v2 lands, which is RHEL 9 at the earliest.
I actually think cgroups v2 will arrive in RHEL 8 earlier than that. It will not be the default until RHEL 9 at the earliest.
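For the record, flipping a RHEL 8 (or Fedora) box to cgroups v2 is a kernel command-line change along these lines (reboot required):

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
sudo reboot
mount | grep cgroup2   # after reboot, expect: cgroup2 on /sys/fs/cgroup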
This is beyond 8.2, but that is the latest I can set for the bugzilla.
Moved to 8.3 release.
Dan Walsh, another one for crun and 8.3. Can we close this out, or set it to Post for Jindrich?
Right, let's say that cgroup v2 support will be in RHEL 8.3 along with crun.
I'm assuming this too will require https://bugzilla.redhat.com/show_bug.cgi?id=1844322 to be completed.
Assigning to Jindrich for any packaging needs that might be required once the blocking BZ clears.
crun is now part of container-tools-rhel8-8.3.0
Does --cgroups=disabled or --cgroupns=host fix the problem?
(In reply to Daniel Walsh from comment #47)
> Does --cgroups=disabled or --cgroupns=host fix the problem?

I just appended the above options to podman run without removing the other options. I still got the error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied" like before, although the fedora30-test-container:1.9.2 container is in running state.

1. --cgroups=disabled

[test@kvm-08-guest29 ~]$ podman --cgroup-manager=systemd --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0000] Running with no CGroups
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 -u fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata -p /run/user/1000/containers/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/pidfile -n beautiful_ishizaka --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /run/user/1000/containers/overlay-containers/fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: 24075
INFO[0000] Got Conmon PID as 24072
DEBU[0000] Created container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 in OCI runtime
DEBU[0000] Starting container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78 with command [/usr/sbin/init]
DEBU[0000] Started container fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78
fd1d00fdef1186b42c91a4bf60407d97c7f4bb759a3373894a27c2f08dd1ad78
DEBU[0000] Called run.PersistentPostRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5bbe09629ef4 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 3 seconds ago Up 3 seconds ago beautiful_lichterman

2. --cgroupns=host

[test@kvm-08-guest29 ~]$ podman --cgroup-manager=systemd --runtime=`which crun` --log-level=debug run --cgroupns=host --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --cgroup-manager=systemd --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a -u 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata -p /run/user/1000/containers/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/pidfile -n festive_moore --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a
DEBU[0000] Tearing down network namespace at /run/user/1000/netns/cni-cf83cf57-f996-5c26-d1b4-15d957d2a7d5 for container 08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a
DEBU[0000] unmounted container "08d17b6db0f5ca3b038cc9bc54c75653c137799a6f92f95049b47aad8d18086a"
DEBU[0000] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5bbe09629ef4 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 48 seconds ago Up 48 seconds ago beautiful_lichterman
I think you can remove all of the other options and it should work. The issue is that RHEL 8 does not yet have cgroup v2 fully enabled. I think we need to get input from Giuseppe.
(In reply to Daniel Walsh from comment #49)
> I think you can remove all of the other options and it should work. The
> issue is RHEL8 does not have all of the cgroupV2 enabled. I think we need
> to get input from Giuseppe.

Thank you, Daniel!

ACK, it works for me after removing the other options.

[root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0019] /usr/bin/conmon messages will be logged to syslog
DEBU[0019] Running with no CGroups
DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 -u 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata -p /var/run/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/pidfile -n eloquent_aryabhata --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -l k8s-file:/var/lib/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /var/run/containers/storage/overlay-containers/4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57]"
DEBU[0019] Received: 49783
INFO[0019] Got Conmon PID as 49780
DEBU[0019] Created container 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 in OCI runtime
DEBU[0019] Starting container 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57 with command [/usr/sbin/init]
DEBU[0019] Started container 4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57
4c0d9b5ab96b38e145a36cc88813763a9ecd13b104c0f5ae6e932b814bb8dc57
DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
[root@kvm-08-guest29 ~]# echo $?
0
[root@kvm-08-guest29 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c0d9b5ab96b quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 25 seconds ago Up 25 seconds ago eloquent_aryabhata
[root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0019] /usr/bin/conmon messages will be logged to syslog
DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 -u d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata -p /var/run/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/pidfile -n reverent_wilson --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -s -l k8s-file:/var/lib/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1]"
INFO[0019] Running conmon under slice machine.slice and unitName libpod-conmon-d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1.scope
DEBU[0019] Received: 50267
INFO[0019] Got Conmon PID as 50264
DEBU[0019] Created container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 in OCI runtime
DEBU[0019] Starting container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1 with command [/usr/sbin/init]
DEBU[0019] Started container d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1
d0205dfbba18803452156933b556b61e78906d833e39a125a6b78ceea84201f1
DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2)
[root@kvm-08-guest29 ~]# echo $?
0
[root@kvm-08-guest29 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d0205dfbba18 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 18 seconds ago Up 18 seconds ago reverent_wilson
Alex, can we set this to verified? Dan, is that OK with you given the current state, or should we set it back to assigned and change the Target Version to 8.4?
I am still worried about this working out of the box. I.e., if I take a RHEL 8.4 box and reboot it into cgroup v2 mode, does Podman just work? Is cgroups configured in the expected way for Podman to just work?
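A minimal out-of-the-box check along the lines Dan describes might look like this, run as an ordinary user on a freshly rebooted cgroup v2 box:

mount | grep cgroup2                       # confirm the unified hierarchy is mounted
podman info | grep -iA2 runtime            # confirm crun is the configured runtime
podman run --detach quay.io/ansible/fedora30-test-container:1.9.2   # no extra flags at all
podman ps                                  # the container should show as Up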
(In reply to Alex Jia from comment #50)
> (In reply to Daniel Walsh from comment #49)
> > I think you can remove all of the other options and it should work. The
> > issue is RHEL8 does not have all of the cgroupV2 enabled. I think we need
> > to get input from Giuseppe.
>
> Thank you Daniel!
>
> ACK, it works for me after removing other options.

I forgot to change to the rootless user. As the rootless user, I still get the error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied" like before.

> [root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run
> --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2

[test@kvm-08-guest29 ~]$ podman unshare cat /proc/self/uid_map
0 1000 1
1 100000 65536
[test@kvm-08-guest29 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0019] /usr/bin/conmon messages will be logged to syslog
DEBU[0019] Running with no CGroups
DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 -u d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata -p /run/user/1000/containers/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/pidfile -n hopeful_ramanujan --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/ctr.log --log-level debug --syslog --runtime-arg --cgroup-manager --runtime-arg disabled --conmon-pidfile /run/user/1000/containers/overlay-containers/d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0019] Received: 57339
INFO[0019] Got Conmon PID as 57336
DEBU[0019] Created container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 in OCI runtime
DEBU[0019] Starting container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4 with command [/usr/sbin/init]
DEBU[0019] Started container d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4
d77bd92068cd70bfe693e59a20b5517122ab197f5a5b5a60d3ecbb759080dbf4
DEBU[0019] Called run.PersistentPostRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d77bd92068cd quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 8 seconds ago Up 7 seconds ago hopeful_ramanujan

> [root@kvm-08-guest29 ~]# podman --runtime=`which crun` --log-level=debug run
> --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2

[test@kvm-08-guest29 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
DEBU[0019] /usr/bin/conmon messages will be logged to syslog
DEBU[0019] running conmon: /usr/bin/conmon args="[--api-version 1 -c f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259 -u f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata -p /run/user/1000/containers/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/pidfile -n serene_bhabha --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259]"
INFO[0019] Running conmon under slice user.slice and unitName libpod-conmon-f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0019] Received: -1
DEBU[0019] Cleaning up container f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259
DEBU[0019] Tearing down network namespace at /run/user/1000/netns/cni-98accba7-3c78-4bb5-8866-7e7e7c1e1647 for container f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259
DEBU[0019] unmounted container "f6d942e136930e7b6114c60a35bb6ef7afe9de11549aea042bb0d82916d9c259"
DEBU[0019] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-08-guest29 ~]$ echo $?
127
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
(In reply to Daniel Walsh from comment #52)
> I am still worked about this working out of the box. IE If I take a rhel8.4
> box and reboot it to cgroupv2 mode, does Podman just work. Is cgroups
> configured in the expected way for Podman to just work.

Got the same result as Comment 54 on 8.4.

[test@kvm-05-guest11 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 Beta (Ootpa)
[test@kvm-05-guest11 ~]$ rpm -q podman crun runc kernel
podman-2.0.5-4.module+el8.3.0+8152+c5c3262e.x86_64
crun-0.14.1-2.module+el8.3.0+8152+c5c3262e.x86_64
runc-1.0.0-68.rc92.module+el8.3.0+8152+c5c3262e.x86_64
kernel-4.18.0-239.el8.x86_64
[test@kvm-05-guest11 ~]$ mount|grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)
[test@kvm-05-guest11 ~]$ podman info | grep -iA2 runtime
  ociRuntime:
    name: crun
    package: crun-0.14.1-2.module+el8.3.0+8152+c5c3262e.x86_64

1. --cgroups=disabled

[test@kvm-05-guest11 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroups=disabled --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
...ignore...
[test@kvm-05-guest11 ~]$ echo $?
0
[test@kvm-05-guest11 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b8a50b2cb46e quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 18 seconds ago Up 18 seconds ago stoic_tu

2. --cgroupns=host

[test@kvm-05-guest11 ~]$ podman --runtime=`which crun` --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --runtime=/usr/bin/crun --log-level=debug run --cgroupns=host --detach quay.io/ansible/fedora30-test-container:1.9.2)
...ignore...
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
...ignore...
DEBU[0034] Error unmounting /home/test/.local/share/containers/storage/overlay/1850403959b8445219258e18e547484f1fbf757e2f76a350c471a356a79e7e31/merged with fusermount3 - exec: "fusermount3": executable file not found in $PATH
DEBU[0034] Error unmounting /home/test/.local/share/containers/storage/overlay/1850403959b8445219258e18e547484f1fbf757e2f76a350c471a356a79e7e31/merged with fusermount - exec: "fusermount": executable file not found in $PATH
DEBU[0034] unmounted container "d31ea90bede4bfe19c62cc2d205b8316314710c489dd70d30705c0d625186ecb"
DEBU[0034] ExitCode msg: "writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: no such file or directory: oci runtime command not found error"
Error: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
[test@kvm-05-guest11 ~]$ echo $?
127
[test@kvm-05-guest11 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[test@kvm-05-guest11 ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d31ea90bede4 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 11 seconds ago Created optimistic_bose
Tested on the latest podman-2.0.5-5.module+el8.3.0+8221+97165c3f.x86_64 with crun-0.14.1-2.module+el8.3.0+8221+97165c3f.x86_64. Although I still got the error "[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied", the container is in running status. Is that acceptable on 8.3, or do we need to change the Target Version to 8.4?

...ignore...
INFO[0020] Running conmon under slice user.slice and unitName libpod-conmon-99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0020] Received: 71971
INFO[0020] Got Conmon PID as 71968
DEBU[0020] Created container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806 in OCI runtime
DEBU[0020] Starting container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806 with command [/usr/sbin/init]
DEBU[0020] Started container 99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806
99e54eac1f92b97c99300a125f068ca2f31bfaf3715c418e9e8d1389a4fa6806
DEBU[0020] Called run.PersistentPostRunE(podman --log-level=debug run --detach --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99e54eac1f92 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 11 seconds ago Up 11 seconds ago sharp_curie

...ignore...
INFO[0020] Running conmon under slice user.slice and unitName libpod-conmon-3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0020] Received: 72186
INFO[0020] Got Conmon PID as 72183
DEBU[0020] Created container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505 in OCI runtime
DEBU[0020] Starting container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505 with command [/usr/sbin/init]
DEBU[0020] Started container 3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505
3cc0b276246640d23c8d1537078c95324de07c9c56db26067c8e07556afd8505
DEBU[0020] Called run.PersistentPostRunE(podman --log-level=debug run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false quay.io/ansible/fedora30-test-container:1.9.2)
[test@kvm-08-guest29 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cc0b2762466 quay.io/ansible/fedora30-test-container:1.9.2 /usr/sbin/init 5 seconds ago Up 5 seconds ago mystifying_lehmann
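To double-check that systemd really came up inside the container despite the conmon warning, something like this should work (the container ID comes from the ps output above):

podman exec 99e54eac1f92 systemctl is-system-running   # "running" (or "degraded") means init survived the warning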
Alex, thanks for the testing updates. Dan Walsh, I'm thinking we need to bump this to RHEL 8.4. Dan or Giuseppe, any contrary thoughts?