Bug 1989481 - Error: OCI runtime error: the requested cgroup controller `pids` is not available
Summary: Error: OCI runtime error: the requested cgroup controller `pids` is not available
Keywords:
Status: CLOSED DUPLICATE of bug 1897579
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.5
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: beta
Target Release: ---
Assignee: Tom Sweeney
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1947432 1960948 2035227
 
Reported: 2021-08-03 10:25 UTC by Alex Jia
Modified: 2021-12-23 10:38 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-06 15:41:23 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments
test result (100.09 KB, text/plain), 2021-08-04 04:12 UTC, Alex Jia


Links
Red Hat Issue Tracker RHELPLAN-92039 (Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2021-08-03 10:28:42 UTC)

Description Alex Jia 2021-08-03 10:25:45 UTC
Description of problem:
The podman run command fails in rootless mode; it works with podman-3.3.0-0.11.module+el8.5.0+11598+600219b6 and kernel-4.18.0-316.el8.x86_64.

Version-Release number of selected component (if applicable):

[test@kvm-02-guest15 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)

[test@kvm-02-guest15 ~]$ rpm -q conmon podman runc crun kernel
conmon-2.0.29-1.module+el8.5.0+12014+438a5746.x86_64
podman-3.3.0-0.17.module+el8.5.0+12014+438a5746.x86_64
runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64
crun-0.21-1.module+el8.5.0+12014+438a5746.x86_64
kernel-4.18.0-325.el8.x86_64

How reproducible:
always

Steps to Reproduce:
1. Configure a rootless user (a minimal setup sketch follows these steps)
2. podman run -td quay.io/libpod/alpine ls
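
(For reference, a minimal rootless setup sketch, assuming the example user "test" and the subordinate ID range implied by the uid_map output below; the exact values on the reproducer host may differ.)

# Create the user and assign subordinate UID/GID ranges for its user namespace
sudo useradd test
sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 test
# Keep the user's systemd session (and its cgroup scope) alive across logouts
sudo loginctl enable-linger test
# Log in directly as the user (not via su/sudo) so a systemd user session exists, then:
podman system migrate    # re-reads the new subuid/subgid mappings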

Actual results:

[test@kvm-02-guest15 ~]$ podman unshare cat /proc/self/uid_map 
         0       1000          1
         1     100000      65536

[test@kvm-02-guest15 ~]$ podman info --format json|jq .host.ociRuntime
{
  "name": "runc",
  "package": "runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64",
  "path": "/usr/bin/runc",
  "version": "runc version unknown\nspec: 1.0.2-dev\ngo: go1.16.6\nlibseccomp: 2.5.1"
}

[test@kvm-02-guest15 ~]$ podman run -td quay.io/libpod/alpine ls
Trying to pull quay.io/libpod/alpine:latest...
Getting image source signatures
Copying blob 9d16cba9fb96 done  
Copying config 9617696764 done  
Writing manifest to image destination
Storing signatures
Error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-d90ea224ee9da3c0e17f7d5b6e82d198fdc0f27220e943c445dae41fdcd3d176.scope/pids.max: no such file or directory: OCI runtime attempted to invoke a command that was not found

[test@kvm-02-guest15 ~]$ podman run --runtime=crun -td quay.io/libpod/alpine ls
Error: OCI runtime error: the requested cgroup controller `pids` is not available

Expected results:
The podman run command succeeds in rootless mode.

Additional info:

[test@kvm-02-guest15 ~]$ podman --log-level=debug run -td quay.io/libpod/alpine ls
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run -td quay.io/libpod/alpine ls) 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/test/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is not being used 
DEBU[0000] cached value indicated that native-diff is usable 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend file              
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/runc"            
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 4              
DEBU[0000] Pulling image quay.io/libpod/alpine (policy: missing) 
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage 
DEBU[0000] Trying "quay.io/libpod/alpine" ...           
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...    
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4) 
DEBU[0000] Looking up image "quay.io/libpod/alpine:latest" in local containers storage 
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...    
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4) 
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage 
DEBU[0000] Trying "quay.io/libpod/alpine" ...           
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...    
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4) 
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage 
DEBU[0000] Trying "quay.io/libpod/alpine" ...           
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...    
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4) 
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4 
DEBU[0000] using systemd mode: false                    
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0000] Allocated lock 11 for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] created container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" 
DEBU[0000] container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" has work directory "/home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata" 
DEBU[0000] container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" has run directory "/run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata" 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is not being used 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] overlay: mount_data=,lowerdir=/home/test/.local/share/containers/storage/overlay/l/UQDMUWSRCUFSQTDU2QTMMTZ5KZ,upperdir=/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/diff,workdir=/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/work,userxattr,context="system_u:object_r:container_file_t:s0:c835,c915" 
DEBU[0000] mounted container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" at "/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged" 
DEBU[0000] Created root filesystem for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 at /home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged 
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd tap0 
DEBU[0000] Workdir "/" resolved to host path "/home/test/.local/share/containers/storage/overlay/9bc1ef6fde31f334d3bc2cc0f73586565cae6f14527cd862f8463784aaf75b97/merged" 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting CGroups for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 to user.slice:libpod:89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 at /home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 -u 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 -r /usr/bin/runc -b /home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata -p /run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/pidfile -n silly_wilson --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/user/1000/containers/overlay-containers/89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope 
DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 
DEBU[0000] Tearing down network namespace at /run/user/1000/netns/cni-6e3da4c0-2949-ebfe-29a7-6fdae4572dfd for container 89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1 
DEBU[0000] unmounted container "89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1" 
DEBU[0000] ExitCode msg: "time=\"2021-08-03t06:18:09-04:00\" level=error msg=\"container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for prochooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope/pids.max: no such file or directory\": oci runtime attempted to invoke a command that was not found" 
Error: time="2021-08-03T06:18:09-04:00" level=error msg="container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: open /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/libpod-89cbf941f8c6d0b18ace7c170839541891305dcbf10995155005ad075b9857a1.scope/pids.max: no such file or directory": OCI runtime attempted to invoke a command that was not found

[test@kvm-02-guest15 ~]$ ls /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/
cgroup.controllers  cgroup.events  cgroup.freeze  cgroup.max.depth  cgroup.max.descendants  cgroup.procs  cgroup.stat  cgroup.subtree_control  cgroup.threads  cgroup.type  cpu.pressure  cpu.stat  io.pressure  memory.pressure
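
For reference, on cgroup v2 a child cgroup can only use the controllers listed in its parent's cgroup.subtree_control, and the listing above contains no pids.* files, which is consistent with the `pids` controller not being delegated to this subtree. A quick check (same paths as above; they may differ on other hosts):

# Controllers present in this cgroup vs. controllers enabled for its children
cat /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/cgroup.controllers
cat /sys/fs/cgroup/user.slice/user-1000.slice/user/user.slice/cgroup.subtree_control
# Controllers delegated to the user slice as a whole
cat /sys/fs/cgroup/user.slice/user-1000.slice/cgroup.subtree_control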

Comment 1 Alex Jia 2021-08-03 10:26:31 UTC
It works in rootful mode.

Comment 2 Tom Sweeney 2021-08-03 19:40:39 UTC
@ajia I'm not seeing the error listed in the report within your problem description, and this looks more like a Podman issue than a conmon issue to me. Was there a bad cut/paste along the way? Giuseppe, can you take a look please? It seems like a rootless error to me.

Comment 4 Alex Jia 2021-08-04 04:17:30 UTC
(In reply to Tom Sweeney from comment #2)
> @ajia I'm not seeing the error listed in the report within your
> problem description, and this looks more like a Podman issue than a conmon
> issue to me. Was there a bad cut/paste along the way? Giuseppe, can you
> take a look please? It seems like a rootless error to me.

Hi Tom, the above error is one of the errors I have encountered, but I can't
reproduce it right now. Interestingly, I still hit other, different errors;
I have added an attachment, please take a look when you have time, thanks!

Comment 5 Alex Jia 2021-08-04 05:10:16 UTC
(In reply to Alex Jia from comment #0)
> Description of problem:
> The podman run command fails in rootless mode; it works with
> podman-3.3.0-0.11.module+el8.5.0+11598+600219b6 and
> kernel-4.18.0-316.el8.x86_64.
> 
> Version-Release number of selected component (if applicable):
> 
> [test@kvm-02-guest15 ~]$ cat /etc/redhat-release 
> Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
> 
> [test@kvm-02-guest15 ~]$ rpm -q conmon podman runc crun kernel
> conmon-2.0.29-1.module+el8.5.0+12014+438a5746.x86_64
> podman-3.3.0-0.17.module+el8.5.0+12014+438a5746.x86_64
> runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64
> crun-0.21-1.module+el8.5.0+12014+438a5746.x86_64
> kernel-4.18.0-325.el8.x86_64
> 
> How reproducible:
> always
> 
> Steps to Reproduce:
> 1. Configure a rootless user
> 2. podman run -td quay.io/libpod/alpine ls
> 
> Actual results:
> 
> [test@kvm-02-guest15 ~]$ podman unshare cat /proc/self/uid_map 
>          0       1000          1
>          1     100000      65536
> 
> [test@kvm-02-guest15 ~]$ podman info --format json|jq .host.ociRuntime
> {
>   "name": "runc",
>   "package": "runc-1.0.1-3.module+el8.5.0+12014+438a5746.x86_64",
>   "path": "/usr/bin/runc",
>   "version": "runc version unknown\nspec: 1.0.2-dev\ngo:
> go1.16.6\nlibseccomp: 2.5.1"
> }
> 

[test@kvm-07-guest24 ~]$ podman run --log-level=debug -d quay.io/libpod/alpine ls
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug -d quay.io/libpod/alpine ls)
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/test/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Pulling image quay.io/libpod/alpine (policy: missing)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine:latest" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Looking up image "quay.io/libpod/alpine" in local containers storage
DEBU[0000] Trying "quay.io/libpod/alpine" ...
DEBU[0000] Trying "quay.io/libpod/alpine:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage
DEBU[0000] Found image "quay.io/libpod/alpine" as "quay.io/libpod/alpine:latest" in local containers storage ([overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4)
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] Inspecting image 961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 12 for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] parsed reference into "[overlay@/home/test/.local/share/containers/storage+/run/user/1000/containers]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
DEBU[0000] created container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a"
DEBU[0000] container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" has work directory "/home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata"
DEBU[0000] container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" has run directory "/run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata"
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-69102c4c-c898-ac54-2fe2-7fb9f0817876 for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] overlay: mount_data=,lowerdir=/home/test/.local/share/containers/storage/overlay/l/NFKZDVVDSRGSLNPHPCIJGAGGKE,upperdir=/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/diff,workdir=/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/work,userxattr,context="system_u:object_r:container_file_t:s0:c180,c923"
DEBU[0000] mounted container "16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a" at "/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged"
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-69102c4c-c898-ac54-2fe2-7fb9f0817876 tap0
DEBU[0000] Created root filesystem for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a at /home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged
DEBU[0000] Workdir "/" resolved to host path "/home/test/.local/share/containers/storage/overlay/9aa161972c7935f9c78f2b539526a111e19c96dfc4b2b00634e58c79cd413bcf/merged"
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a at /home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a -u 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a -r /usr/bin/runc -b /home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata -p /run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/pidfile -n sad_grothendieck --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -l k8s-file:/home/test/.local/share/containers/storage/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a]"
INFO[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset/conmon: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 5960
INFO[0000] Got Conmon PID as 5949
DEBU[0000] Created container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a in OCI runtime
DEBU[0000] Starting container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a with command [ls]
DEBU[0000] Started container 16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
16c81255799e0eb73910b9e8517b114ca10cd8cc3554ffdef5b98a2d181e380a
DEBU[0000] Called run.PersistentPostRunE(podman run --log-level=debug -d quay.io/libpod/alpine ls)
For details, please see attachment 1810652.

Comment 6 Matthew Heon 2021-08-04 15:08:20 UTC
The actual error is:
`Error: OCI runtime error: the requested cgroup controller `pids` is not available`

The conmon messages are expected for rootless - it always tries to alter its OOM score, but as rootless it does not have permission and fails, logging that message. This has no effect on the functionality of Podman or Conmon.
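
As a small illustration of the underlying kernel rule (not conmon-specific): an unprivileged process may raise its own oom_score_adj, but lowering it below its current minimum generally requires CAP_SYS_RESOURCE, which a rootless session lacks.

# Allowed for an unprivileged user: make the process more likely to be OOM-killed
echo 500 > /proc/self/oom_score_adj
# Typically denied without CAP_SYS_RESOURCE: making it less likely to be killed
echo -500 > /proc/self/oom_score_adj    # -> Permission denied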

Comment 7 Matthew Heon 2021-08-04 15:09:35 UTC
I seem to recall us seeing issues with this controller before, but I can't find details. Tagging Giuseppe in hopes he remembers.

Comment 8 Giuseppe Scrivano 2021-08-06 07:29:33 UTC
I think the `Error: OCI runtime error: the requested cgroup controller `pids` is not available` error is caused by https://bugzilla.redhat.com/show_bug.cgi?id=1897579

Should we close this issue as a duplicate of #1897579?

The error message in the other bug is different (Error: writing file `/sys/fs/cgroup/user.slice/user-992.slice/user/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error) but it is still coming from crun.  Newer versions of crun have a clearer error message (`Error: OCI runtime error: the requested cgroup controller `pids` is not available`).
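
For completeness (not verified as the fix here; the underlying issue is tracked in bug 1897579 as noted above): both messages point at controller delegation to the user session, which on cgroup-v2 hosts is governed by systemd's Delegate= setting for user@.service. A hypothetical drop-in requesting delegation of the pids controller would look like this:

# Sketch only, assuming systemd with cgroup v2; adjust the controller list as needed
sudo mkdir -p /etc/systemd/system/user@.service.d
sudo tee /etc/systemd/system/user@.service.d/delegate.conf <<'EOF'
[Service]
Delegate=cpu cpuset io memory pids
EOF
sudo systemctl daemon-reload
# Log out and back in so a new user session picks up the change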

Comment 9 Tom Sweeney 2021-08-06 15:41:23 UTC
Giuseppe, thanks for digging.  I'll go with your thinking and will close this as a dupe of 1897579, thanks!

*** This bug has been marked as a duplicate of bug 1897579 ***

