Bug 1989481
| Summary: | Error: OCI runtime error: the requested cgroup controller `pids` is not available | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Alex Jia <ajia> |
| Component: | podman | Assignee: | Tom Sweeney <tsweeney> |
| Status: | CLOSED DUPLICATE | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 8.5 | CC: | bbaude, dwalsh, gscrivan, jligon, jnovy, lsm5, mheon, pehunt, pthomas, tsweeney, umohnani |
| Target Milestone: | beta | Keywords: | Regression |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | All | | |
| OS: | Linux | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-06 15:41:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Blocks: | 1947432, 1960948, 2035227 | | |
It's okay in rootful mode.

@ajia I'm not seeing the error listed in the report within your problem description, and this looks more like a Podman issue than a conmon issue to me. Was there a bad cut/paste along the way? Giuseppe, can you take a look please? It seems like a rootless error to me.

(In reply to Tom Sweeney from comment #2)
> @ajia I'm not seeing the error listed in the report within your
> problem description and this looks more like a Podman rather than a conmon
> issue to me. Was there a bad cut/paste along the way? Giuseppe, can you
> take a look please? It seem like a rootless error to me.

Hi Tom, the above error is one of the errors I hit earlier, but I can't reproduce it now. Interestingly, though, I'm still hitting other, different errors. I've added an attachment; please have a look when you have time, thanks!

(In reply to Alex Jia from comment #0)
> Description of problem:
> Failed to run podman run command in rootless mode, it's okay for
> podman-3.3.0-0.11.module+el8.5.0+11598+600219b6 w/
> kernel-4.18.0-316.el8.x86_64.
>
> Version-Release number of selected component (if applicable):
>
> [test@kvm-02-guest15 ~]$ cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
>
> [test@kvm-02-guest15 ~]$ rpm -q conmon podman runc crun kernel
> conmon-2.0.29-1.module+el8.5.0+12014+438a5746.x86_64
> podman-3.3.0-0.17.module+el8.5.0+12014+438a5746.x86_64
> runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64
> crun-0.21-1.module+el8.5.0+12014+438a5746.x86_64
> kernel-4.18.0-325.el8.x86_64
>
> How reproducible:
> always
>
> Steps to Reproduce:
> 1. to configure rootless user
> 2. podman run -td quay.io/libpod/alpine ls
>
> Actual results:
>
> [test@kvm-02-guest15 ~]$ podman unshare cat /proc/self/uid_map
> 0 1000 1
> 1 100000 65536
>
> [test@kvm-02-guest15 ~]$ podman info --format json|jq .host.ociRuntime
> {
>   "name": "runc",
>   "package": "runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64",
>   "path": "/usr/bin/runc",
>   "version": "runc version unknown\nspec: 1.0.2-dev\ngo:
> go1.16.6\nlibseccomp: 2.5.1"
> }

```
[test@kvm-07-guest24 ~]$ podman run --log-level=debug -d quay.io/libpod/alpine ls
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug -d quay.io/libpod/alpine ls)
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/test/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Pulling image quay.io/libpod/alpine (policy: missing)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 12 for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] created container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a"
DEBU[0000] container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" has work directory "/home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata"
DEBU[0000] container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" has run directory "/run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata"
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-69102c4c-c898-ac54-2fe2-7fb9f0817876 for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] overlay: mount_data=,lowerdir=/home/test/.local/share/containers/storage/overlay/l/NFKZDVVDSRGSLNPHPCIJGAGGKE,upperdir=/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/diff,workdir=/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/work,userxattr,context="system_u:object_r:container_file_t:s0:c180,c923"
DEBU[0000] mounted container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" at "/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged"
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-69102c4c-c898-ac54-2fe2-7fb9f0817876 tap0
DEBU[0000] Created root filesystem for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a at /home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged
DEBU[0000] Workdir "/" resolved to host path "/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged"
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a at /home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a -u 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a -r /usr/bin/runc -b /home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata -p /run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/pidfile -n sad_grothendieck --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a]"
INFO[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset/conmon: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: 5960
INFO[0000] Got Conmon PID as 5949
DEBU[0000] Created container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a in OCI runtime
DEBU[0000] Starting container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a with command [ls]
DEBU[0000] Started container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] Called run.PersistentPostRunE(podman run --log-level=debug -d quay.io/libpod/alpine ls)
```

For details, please see attachment 1810652 [details].
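Worth noting: this successful run passes `--cgroup-manager cgroupfs` to the cleanup command, while the failing run in the description below uses the systemd cgroup manager. As a quick sanity check (a sketch mirroring the `jq` usage earlier in this report; the `cgroupManager`/`cgroupVersion` field names are assumed from Podman 3.x `podman info` JSON output), one can confirm which manager and cgroups version a rootless session is using:

```
podman info --format json | jq '.host.cgroupManager, .host.cgroupVersion'
```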
The actual error is:

```
Error: OCI runtime error: the requested cgroup controller `pids` is not available
```

The conmon messages are expected for rootless: conmon always tries to alter its OOM score, but as a rootless process it lacks the permission, fails, and logs that message. This has no effect on the functionality of Podman or conmon. I seem to recall us seeing issues with this controller before, but I can't find the details. Tagging Giuseppe in hopes he remembers.

I think the "the requested cgroup controller `pids` is not available" error is caused by https://bugzilla.redhat.com/show_bug.cgi?id=1897579. Should we close this issue as a duplicate of bug 1897579? The error message in the other bug is different ("Error: writing file /sys/fs/cgroup/user.slice/user-992.slice/user/cgroup.subtree_control: No such file or directory: OCI runtime command not found error"), but it still comes from crun. Newer versions of crun produce the clearer message "Error: OCI runtime error: the requested cgroup controller `pids` is not available".

Giuseppe, thanks for digging. I'll go with your thinking and close this as a dupe of 1897579, thanks!

*** This bug has been marked as a duplicate of bug 1897579 ***
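For readers hitting the same symptom: since the root cause here is the `pids` controller not being enabled for the rootless user's cgroup subtree, the usual verification and workaround looks roughly like the sketch below. The paths and the drop-in file name are illustrative assumptions for a typical cgroup v2 systemd host, not the fix shipped for bug 1897579:

```
# Check which controllers systemd delegates to the rootless user session
# (the exact path can differ between setups):
cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"

# If `pids` is not listed, a user@.service drop-in can request delegation
# (hypothetical file name; needs root, then a re-login to take effect):
sudo mkdir -p /etc/systemd/system/user@.service.d
printf '[Service]\nDelegate=memory pids\n' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
sudo systemctl daemon-reload
```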
Description of problem:
The podman run command fails in rootless mode; it works with podman-3.3.0-0.11.module+el8.5.0+11598+600219b6 w/ kernel-4.18.0-316.el8.x86_64.

Version-Release number of selected component (if applicable):

```
[test@kvm-02-guest15 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)

[test@kvm-02-guest15 ~]$ rpm -q conmon podman runc crun kernel
conmon-2.0.29-1.module+el8.5.0+12014+438a5746.x86_64
podman-3.3.0-0.17.module+el8.5.0+12014+438a5746.x86_64
runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64
crun-0.21-1.module+el8.5.0+12014+438a5746.x86_64
kernel-4.18.0-325.el8.x86_64
```

How reproducible:
always

Steps to Reproduce:
1. Configure a rootless user.
2. podman run -td quay.io/libpod/alpine ls

Actual results:

```
[test@kvm-02-guest15 ~]$ podman unshare cat /proc/self/uid_map
0 1000 1
1 100000 65536

[test@kvm-02-guest15 ~]$ podman info --format json|jq .host.ociRuntime
{
  "name": "runc",
  "package": "runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64",
  "path": "/usr/bin/runc",
  "version": "runc version unknown\nspec: 1.0.2-dev\ngo: go1.16.6\nlibseccomp: 2.5.1"
}

[test@kvm-02-guest15 ~]$ podman run -td quay.io/libpod/alpine ls
Trying to pull quay.io/libpod/alpine:latest...
Getting image source signatures
Copying blob 9d16cba9fb96 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
Error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-d90ea224ee9da3c0e17f7d5b6e82d198fdc0f27220e943c445dae41fdcd3d176.scope/pids.max: no such file or directory: OCI runtime attempted to invoke a command that was not found

[test@kvm-02-guest15 ~]$ podman run --runtime=crun -td quay.io/libpod/alpine ls
Error: OCI runtime error: the requested cgroup controller `pids` is not available
```

Expected results:
The podman run command succeeds in rootless mode.

Additional info:

```
[test@kvm-02-guest15 ~]$ podman --log-level=debug run -td quay.io/libpod/alpine ls
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run -td quay.io/libpod/alpine ls)
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/test/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 4
DEBU[0000] Pulling image quay.io/libpod/alpine (policy: missing)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 11 for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] created container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1"
DEBU[0000] container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" has work directory "/home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata"
DEBU[0000] container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" has run directory "/run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata"
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] overlay: mount_data=,lowerdir=/home/test/.local/share/containers/storage/overlay/l/UQDMUWSRCUFSQTDU2QTMMTZ5KZ,upperdir=/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/diff,workdir=/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/work,userxattr,context="system_u:object_r:container_file_t:s0:c835,c915"
DEBU[0000] mounted container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" at "/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged"
DEBU[0000] Created root filesystem for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 at /home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd tap0
DEBU[0000] Workdir "/" resolved to host path "/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged"
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting CGroups for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 to user.slice:libpod:89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 at /home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 -u 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 -r /usr/bin/runc -b /home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata -p /run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/pidfile -n silly_wilson --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1
DEBU[0000] Tearing down network namespace at /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1
DEBU[0000] unmounted container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1"
DEBU[0000] ExitCode msg: "time=\"2021-08-03t06:18:09-04:00\" level=error msg=\"container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for prochooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope/pids.max: no such file or directory\": oci runtime attempted to invoke a command that was not found"
Error: time="2021-08-03T06:18:09-04:00" level=error msg="container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope/pids.max: no such file or directory": OCI runtime attempted to invoke a command that was not found

[test@kvm-02-guest15 ~]$ ls /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/
cgroup.controllers  cgroup.events  cgroup.freeze  cgroup.max.depth  cgroup.max.descendants  cgroup.procs  cgroup.stat  cgroup.subtree_control  cgroup.threads  cgroup.type  cpu.pressure  cpu.stat  io.pressure  memory.pressure
```
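The `ls` output above shows no `pids.*` files in the parent cgroup, which matches both failures: runc cannot open `pids.max` and crun reports the controller as unavailable. To see where the controller drops out of the hierarchy, one can walk it level by level (a sketch using the exact paths from this report; each level's `cgroup.subtree_control` must list `pids` for the level below it to get the controller):

```
for d in /sys/fs/cgroup \
         /sys/fs/cgroup/user.slice \
         /sys/fs/cgroup/user.slice/user-1000.slice \
         /sys/fs/cgroup/user.slice/user-1000.slice/user \
         /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice; do
  echo "== $d"
  cat "$d/cgroup.controllers" "$d/cgroup.subtree_control"
done
```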