Bug 1768125 - podman run OCI permission error, bpf create
Summary: podman run OCI permission error, bpf create
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: podman
Version: 31
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Lokesh Mandvekar
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-02 18:59 UTC by Brian
Modified: 2019-11-29 16:42 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-04 16:50:24 UTC
Type: Bug
Embargoed:



Description Brian 2019-11-02 18:59:05 UTC
Description of problem:

After upgrading to F31, the systemd service I use to run a coredns container failed to start. I'm also able to reproduce the error from a shell.


Version-Release number of selected component (if applicable):

$ podman version
Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.13.1
OS/Arch:            linux/amd64


How reproducible:

consistently

Steps to Reproduce:
1. $ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile

Actual results:

Error: bpf create : Operation not permitted: OCI runtime permission denied error

Expected results:

No error, container started.

Additional info:

Comment 1 Matthew Heon 2019-11-04 14:33:34 UTC
Since you upgraded to Fedora 31, have you run the `podman system migrate --runtime crun` command to convert legacy containers to the new, cgroups v2-compatible runtime?

Comment 2 Brian 2019-11-09 12:44:19 UTC
Sorry for the delayed response, but I couldn't find time this week to work on this.

Though this bug has already been closed, the suggested fix `podman system migrate --runtime crun` did not resolve my issue:

$ sudo podman system migrate --runtime crun
$ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile
Error: bpf create : Operation not permitted: OCI runtime permission denied error
$ 

How should I proceed? Should I reopen this ticket?

Thx in advance.

Comment 3 Daniel Walsh 2019-11-11 13:13:34 UTC
Are you on an SELinux system? This could be an SELinux issue, since a confined container would not be allowed to do anything in the /root directory unless you relabeled it with a :Z on the volume mount.
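
A minimal sketch of that suggestion, reusing the command from the report (only the added :Z suffix is new; it asks podman to relabel the host directory with a private container label):

$ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root:Z -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile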

Comment 4 Brian 2019-11-12 03:59:08 UTC
Dan,

I'm on Fedora 31 with SELinux set to enforcing. If I disable enforcing, the `podman run` command still fails with the same error:

[bfallik@groot ~]$ sudo getenforce
Permissive
[bfallik@groot ~]$ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile
Error: bpf create : Operation not permitted: OCI runtime permission denied error

Does that imply that this is not an SELinux issue or does enforcing|permissive behave differently inside the container?

brian

Comment 5 Daniel Walsh 2019-11-12 12:45:48 UTC
Not SELinux.

This looks like crun is attempting to set up a BPF rule for the -p 10.100.10.5:53:53/udp line and getting permission denied. I have never seen anyone use that syntax.

Could you open this as an issue on Podman? I will ask Giuseppe to look at it.

BTW, did the syntax work on F30 with runc?
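
One way to narrow down whether the failure is specific to the new runtime (a sketch, assuming runc is still installed and registered in libpod.conf; --rm avoids a name clash with the earlier container):

$ sudo podman run --rm --runtime runc --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile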

Comment 6 Giuseppe Scrivano 2019-11-12 14:39:18 UTC
bpf(2) may fail if you do not have CAP_SYS_ADMIN. Could you please share the output of `sudo grep Cap /proc/self/status`?

Also, do you see anything in journalctl/dmesg that could help us?

Comment 7 Brian 2019-11-13 04:08:14 UTC
Dan - This syntax used to work in F30. I created this ticket because it stopped working after upgrading to F31.

Giuseppe - I don't know specifically what to look for in journalctl/dmesg, but nothing jumped out at me.

$ sudo grep Cap /proc/self/status
CapInh:	0000000000000000
CapPrm:	0000003fffffffff
CapEff:	0000003fffffffff
CapBnd:	0000003fffffffff
CapAmb:	0000000000000000
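
For reference, a hex mask like the CapEff value above can be decoded with capsh from libcap (a quick check, assuming capsh is installed):

$ capsh --decode=0000003fffffffff

The decoded list should include cap_sys_admin (bit 21), so CAP_SYS_ADMIN is present on the host side.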

Comment 8 Giuseppe Scrivano 2019-11-13 13:40:30 UTC
Is there any error generated while you run the container?

Comment 9 Brian 2019-11-16 17:14:21 UTC
Hi Giuseppe,

I'm not sure what you're asking. When I run the container at a terminal, I see the output I shared above:

  $ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root coredns/coredns -conf /root/Corefile
  Error: bpf create : Operation not permitted: OCI runtime permission denied error

I can't seem to access any logs from the dead container:
  $ sudo podman logs $(sudo podman ps -a -q -f 'name=coredns-container')
  $ 

I don't see any other interesting errors in dmesg.

$ sudo dmesg | tail
[1205339.751883] IPv6: ADDRCONF(NETDEV_CHANGE): veth8da68ceb: link becomes ready
[1205339.752015] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[1205339.753925] cni-podman0: port 1(veth8da68ceb) entered blocking state
[1205339.753929] cni-podman0: port 1(veth8da68ceb) entered disabled state
[1205339.754194] device veth8da68ceb entered promiscuous mode
[1205339.754539] cni-podman0: port 1(veth8da68ceb) entered blocking state
[1205339.754542] cni-podman0: port 1(veth8da68ceb) entered forwarding state
[1205340.045639] cni-podman0: port 1(veth8da68ceb) entered disabled state
[1205340.050007] device veth8da68ceb left promiscuous mode
[1205340.050025] cni-podman0: port 1(veth8da68ceb) entered disabled state


Is there somewhere else to look for errors?

Thanks!

Comment 10 Matthew Heon 2019-11-19 17:27:49 UTC
Can you rerun, adding `--log-level=debug` to your Podman command to invoke verbose logging? Including the full logs of the Podman command and anything printed into the journal by `conmon` would be appreciated. I would not expect to see anything in `podman logs` or dmesg.

Comment 11 Brian 2019-11-19 22:56:50 UTC
Matthew,

Sure, here you go:

$ sudo /usr/bin/podman run --name coredns-container --volume=/var/lib/coredns:/root -p 10.100.10.5:53:53/udp --privileged --workdir /root --log-level=debug coredns/coredns -conf /root/Corefile
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is being used 
DEBU[0000] cached value indicated that native-diff is not being used 
WARN[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/coredns/coredns:latest" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] exporting opaque data as blob "sha256:811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] exporting opaque data as blob "sha256:811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] User mount /var/lib/coredns:/root options [] 
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Using bridge netmode                         
DEBU[0000] Adding mount /proc                           
DEBU[0000] Adding mount /dev                            
DEBU[0000] Adding mount /dev/pts                        
DEBU[0000] Adding mount /dev/mqueue                     
DEBU[0000] Adding mount /sys                            
DEBU[0000] Adding mount /sys/fs/cgroup                  
DEBU[0000] setting container name coredns-container     
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 0 for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] exporting opaque data as blob "sha256:811d31b1fde80a5392b8ae9d42b30e0f865f9c76e8004e01b1419c17906cf9f8" 
DEBU[0000] created container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" 
DEBU[0000] container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" has work directory "/var/lib/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata" 
DEBU[0000] container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" has run directory "/var/run/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata" 
DEBU[0000] New container created "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" 
DEBU[0000] container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" has CgroupParent "machine.slice/libpod-9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3.scope" 
DEBU[0000] Not attaching to stdin                       
DEBU[0000] Made network namespace at /var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583 for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 
INFO[0000] Got pod network &{Name:coredns-container Namespace:coredns-container ID:9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 NetNS:/var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[{HostPort:53 ContainerPort:53 Protocol:udp HostIP:10.100.10.5}] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network cni-loopback (type=loopback) 
DEBU[0000] overlay: mount_data=metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/DE5TUMYOIRPVPXQPMP3M5RSISM:/var/lib/containers/storage/overlay/l/4BOJEM45MJXOWTHRL4SCRTL35B,upperdir=/var/lib/containers/storage/overlay/49717a1f443affaebbef806bb1302c1053c3c9cd6d106850574b30aaa55d72e2/diff,workdir=/var/lib/containers/storage/overlay/49717a1f443affaebbef806bb1302c1053c3c9cd6d106850574b30aaa55d72e2/work,context="system_u:object_r:container_file_t:s0:c128,c677" 
DEBU[0000] mounted container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" at "/var/lib/containers/storage/overlay/49717a1f443affaebbef806bb1302c1053c3c9cd6d106850574b30aaa55d72e2/merged" 
DEBU[0000] Created root filesystem for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 at /var/lib/containers/storage/overlay/49717a1f443affaebbef806bb1302c1053c3c9cd6d106850574b30aaa55d72e2/merged 
INFO[0000] Got pod network &{Name:coredns-container Namespace:coredns-container ID:9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 NetNS:/var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[{HostPort:53 ContainerPort:53 Protocol:udp HostIP:10.100.10.5}] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni-podman0 Mac:1e:1f:92:a4:b0:ce Sandbox:} {Name:vethaaaf59c7 Mac:3e:4c:b1:a1:1c:69 Sandbox:} {Name:eth0 Mac:32:90:55:9d:b3:7f Sandbox:/var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583}], IP:[{Version:4 Interface:0xc0006d6098 Address:{IP:10.88.0.70 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]} 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroups for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 to machine.slice:libpod:9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 at /var/lib/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 -u 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata -p /var/run/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3.scope 
DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 
DEBU[0000] Tearing down network namespace at /var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583 for container 9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 
INFO[0000] Got pod network &{Name:coredns-container Namespace:coredns-container ID:9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3 NetNS:/var/run/netns/cni-e4a5c663-c0ed-136c-3c79-8d573c8bd583 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[{HostPort:53 ContainerPort:53 Protocol:udp HostIP:10.100.10.5}] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to del CNI network podman (type=bridge) 
DEBU[0000] unmounted container "9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3" 
DEBU[0000] ExitCode msg: "bpf create : operation not permitted: oci runtime permission denied error" 
ERRO[0000] bpf create : Operation not permitted: OCI runtime permission denied error 



$ journalctl -u conmon
-- Logs begin at Mon 2018-08-13 18:51:04 EDT, end at Tue 2019-11-19 17:54:09 ES>
-- No entries --

Comment 12 Matthew Heon 2019-11-19 23:13:32 UTC
Technically it's not the conmon unit that sends the logs, so you'll have to use an old-fashioned `grep conmon` to get the logs out of journalctl. Sorry for not being specific.

Comment 13 Brian 2019-11-19 23:35:11 UTC
Ah, ok. Here you go:

$ journalctl -n 10000 | grep conmon
Nov 19 17:53:53 XXX systemd[1]: Started libpod-conmon-9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3.scope.
Nov 19 17:53:53 XXX conmon[15165]: conmon 9dd30d12cdf810ec9d53 <ninfo>: attach sock path: /var/run/libpod/socket/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/attach
Nov 19 17:53:53 XXX conmon[15165]: conmon 9dd30d12cdf810ec9d53 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/attach}
Nov 19 17:53:53 XXX conmon[15165]: conmon 9dd30d12cdf810ec9d53 <ninfo>: ctl fifo path: /var/lib/containers/storage/overlay-containers/9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3/userdata/ctl
Nov 19 17:53:53 XXX conmon[15165]: conmon 9dd30d12cdf810ec9d53 <ninfo>: terminal_ctrl_fd: 17
Nov 19 17:53:53 XXX conmon[15165]: conmon 9dd30d12cdf810ec9d53 <error>: Failed to create container: exit status 2
Nov 19 17:53:53 XXX systemd[1]: libpod-conmon-9dd30d12cdf810ec9d539dcc6b6c1d4299e463228674fa5fe4f22692417622e3.scope: Succeeded.

Comment 14 Wolfgang Ocker 2019-11-27 11:10:12 UTC
I had the same problem and disabling kernel-lockdown fixed it.

Comment 15 Giuseppe Scrivano 2019-11-27 12:45:58 UTC
Thanks for figuring it out.

Is there a way from user space to detect whether lockdown is enabled?

Comment 16 Brian 2019-11-28 13:40:51 UTC
Nice, thanks Wolfgang! After disabling Secure Boot and verifying lockdown was disabled, I'm able to launch the container again.

In case it saves anyone future Googling, I found this article with more details: https://gehrcke.de/2019/09/running-an-ebpf-program-may-require-lifting-the-kernel-lockdown/.
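
For anyone hitting this later, a quick way to check lockdown from user space (a sketch; the securityfs file is available on kernels that expose the lockdown LSM state, and the bracketed value in its output is the active mode):

$ cat /sys/kernel/security/lockdown
$ sudo dmesg | grep -i lockdown    # the kernel logs a message when lockdown blocks an operation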

Comment 17 Daniel Walsh 2019-11-29 11:40:26 UTC
Brian, would you be interested in opening a PR on libpod/troubleshoot.md describing what happened and how you fixed it?

Comment 18 Brian 2019-11-29 16:42:21 UTC
Sure, I can look into adding notes to the troubleshooting doc.

