Bug 1907870 - cannot run podman in 8.3
Summary: cannot run podman in 8.3
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: fapolicyd
Version: 8.3
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Radovan Sroka
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-15 12:06 UTC by bixlerjd
Modified: 2022-10-05 16:09 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-10-05 16:08:26 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
The tailoring for ospp profile to skip the "fapolicy enabled" rule (556 bytes, application/xml)
2021-05-06 10:33 UTC, Igor Mironov

Description bixlerjd 2020-12-15 12:06:44 UTC
Description of problem: OCI runtime error whether using runc or crun in RHEL 8.3 with beta STIG profile applied. 


Version-Release number of selected component (if applicable): podman-2.0.5-5.module+el8.3.0+8221+97165c3f.x86_64


How reproducible: Install RHEL 8.3 with DISA STIG 8 beta profile applied. Install container tools, install Podman. 


Steps to Reproduce:
1. podman pull registry.access.redhat.com/ubi8/ubi
2. podman run registry.access.redhat.com/ubi8/ubi cat /etc/os-release
3. podman run --runtime /usr/bin/crun registry.access.redhat.com/ubi8/ubi cat /etc/os-release

Actual results: podman run registry.access.redhat.com/ubi8/ubi cat /etc/os-release returns Error: /usr/bin/runc: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Operation not permitted: OCI runtime permissions denied error. 

Try using crun, another error: podman run --runtime /usr/bin/crun registry.access.redhat.com/ubi8/ubi cat /etc/os-release returns /usr/bin/crun: error while loading shared libraries: libyajl.so.2: cannot open shared object file: Operation not permitted ERRO[0000] Error removing container 6694e19d99e41376dd69c21c7d5de165f1678358decd4d297580c7fef17d7519 from runtime after operation failed Error: /usr/bin/crun: error while loading shared libraries: libyajl.so.2: cannot open shared object file: Operation not permitted: OCI runtime permission denied error 


Expected results: to run the container


Additional info: sestatus currently showing disabled to eliminate that as a possible issue with troubleshooting. running as root user, have tested with separate user that is in wheel group.

Comment 1 Tom Sweeney 2020-12-15 19:24:35 UTC
Giuseppe could you look at this please?  Dan Walsh, could this be a selinux issue?

Comment 2 Daniel Walsh 2020-12-15 20:27:11 UTC
Yes this looks like SELinux.
Could you check if you are getting AVCs in the audit log?

Might need to run 

restorecon -R -v /var/lib/containers

Comment 3 bixlerjd 2020-12-16 01:42:42 UTC
I actually turned off SELinux to further troubleshoot this issue. sestatus returns SELinux status: disabled. Just to be sure, ausearch -m avc has no recent entries.

Comment 4 Giuseppe Scrivano 2020-12-16 11:29:32 UTC
do you get any error if you try to run runc and crun manually?  e.g. runc --version and crun --version.

does "rpm -Va" show any issue with your installed packages?  In particular, what is the mode for the libpthread.so.0 file (ls -lL /usr/lib64/libpthread.so.0)?

Comment 5 bixlerjd 2020-12-16 11:51:05 UTC
runc --version
runc version spec: 1.0.2-dev

crun --version
crun version 0.14.1
commit: 598ea5e192ca12d4f6378217d3ab1415efeddefa
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL 

rpm -Va
S.5....T.  c /etc/bashrc
..5....T.  c /etc/csh.cshrc
S.5....T.  c /etc/profile
.M....G..  g /var/log/lastlog
S.5....T.  c /var/lib/unbound/root.key
S.5....T.  c /etc/audit/auditd.conf
.M.......  g /var/cache/dnf/packages.db
S.5....T.  c /etc/chrony.conf
..5....T.  c /etc/default/useradd
S.5....T.  c /etc/login.defs
S.5....T.  c /etc/security/pwquality.conf
.......T.  c /etc/selinux/targeted/contexts/customizable_types
..5....T.    /var/lib/selinux/targeted/active/commit_num
S.5....T.    /var/lib/selinux/targeted/active/file_contexts
.......T.    /var/lib/selinux/targeted/active/homedir_template
S.5....T.    /var/lib/selinux/targeted/active/policy.kern
.......T.    /var/lib/selinux/targeted/active/seusers
.......T.    /var/lib/selinux/targeted/active/users_extra
S.5....T.  c /etc/issue
.M.......  g /var/lib/plymouth/boot-duration
.M.......  g /var/lib/selinux/targeted/active/modules/200/usbguard
S.5....T.  c /etc/rhsm/rhsm.conf
.M.......  g /etc/yum.repos.d/redhat.repo
S.5....T.  c /etc/audit/plugins.d/syslog.conf
S.5....T.  c /etc/ssh/sshd_config
S.5....T.  c /etc/sysconfig/sshd
.M.......  c /etc/crypto-policies/back-ends/bind.config
.M.......  c /etc/crypto-policies/back-ends/gnutls.config
.M.......  c /etc/crypto-policies/back-ends/java.config
.M.......  c /etc/crypto-policies/back-ends/krb5.config
.M.......  c /etc/crypto-policies/back-ends/libreswan.config
.M.......  c /etc/crypto-policies/back-ends/libssh.config
.M.......  c /etc/crypto-policies/back-ends/nss.config
.M.......  c /etc/crypto-policies/back-ends/openssh.config
.M.......  c /etc/crypto-policies/back-ends/opensshserver.config
.M.......  c /etc/crypto-policies/back-ends/openssl.config
.M.......  c /etc/crypto-policies/back-ends/opensslcnf.config
.M...UG..  c /var/lib/rpm/__db.001
.M...UG..  c /var/lib/rpm/__db.002
.M...UG..  c /var/lib/rpm/__db.003
S.5....T.  c /etc/pam.d/password-auth
S.5....T.  c /etc/pam.d/system-auth
S.5....T.  c /etc/security/limits.conf
.M.......  g /var/lib/selinux/targeted/active/modules/200/fapolicyd
S.5....T.  c /etc/usbguard/rules.conf
S.5....T.  c /etc/usbguard/usbguard-daemon.conf
S.5....T.  c /etc/dnf/automatic.conf
S.5....T.  c /etc/sysctl.conf
S.5....T.  c /etc/systemd/coredump.conf
S.5....T.  c /etc/systemd/system.conf
.M...UG..  g /var/lib/fapolicyd/data.mdb
.M...UG..  g /var/lib/fapolicyd/lock.mdb
.M...UG..  g /var/log/fapolicyd-access.log
.M....G..  g /var/run/fapolicyd/fapolicyd.fifo
S.5....T.  c /etc/dnf/dnf.conf
.M.......  g /var/log/dnf.librepo.log
.M.......  g /var/log/hawkey.log

ls -lL /usr/lib64/libpthread.so.0
-rwxr-xr-x. 1 root root 320504 Jun 10  2020 /usr/lib64/libpthread.so.0

Comment 6 Giuseppe Scrivano 2020-12-16 13:40:38 UTC
thanks for the information.

Could you show me the content of /proc/self/mountinfo?

It might be that the containers storage is on a file system mounted with "noexec"
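That hypothesis is easy to check. A generic sketch (in /proc/self/mountinfo, field 5 is the mount point and field 6 holds the per-mount flags such as noexec); the here-doc below stands in for the real file:

```shell
# Print every mount point whose per-mount options include "noexec".
# On a live system, point the same awk at the real file:
#   awk '$6 ~ /(^|,)noexec(,|$)/ {print $5}' /proc/self/mountinfo
awk '$6 ~ /(^|,)noexec(,|$)/ {print $5}' <<'EOF'
115 96 253:2 / /tmp rw,nosuid,nodev,noexec,relatime shared:67 - xfs /dev/mapper/rhel-tmp rw
96 0 253:0 / / rw,relatime shared:1 - xfs /dev/mapper/rhel-root rw
EOF
# -> /tmp
```

Run against the mountinfo listing pasted in the next comment, this flags /tmp and /var/tmp among others, but not /var/lib/containers, which sits under a /var mount without noexec.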

Comment 7 bixlerjd 2020-12-16 13:50:43 UTC
thanks, that might be a result of the STIG profile requirements? 

cat /proc/self/mountinfo 
21 96 0:21 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
22 96 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:25 - proc proc rw
23 96 0:6 / /dev rw,nosuid shared:21 - devtmpfs devtmpfs rw,size=3944656k,nr_inodes=986164,mode=755
24 21 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - securityfs securityfs rw
25 23 0:22 / /dev/shm rw,nosuid,nodev,noexec,relatime shared:22 - tmpfs tmpfs rw
26 23 0:23 / /dev/pts rw,nosuid,noexec,relatime shared:23 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
27 96 0:24 / /run rw,nosuid,nodev shared:24 - tmpfs tmpfs rw,mode=755
28 21 0:25 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs ro,mode=755
29 28 0:26 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:5 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
30 21 0:27 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:17 - pstore pstore rw
31 21 0:28 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:18 - bpf bpf rw,mode=700
32 28 0:29 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:6 - cgroup cgroup rw,cpuset
33 28 0:30 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:7 - cgroup cgroup rw,rdma
34 28 0:31 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:8 - cgroup cgroup rw,devices
35 28 0:32 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:9 - cgroup cgroup rw,cpu,cpuacct
36 28 0:33 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:10 - cgroup cgroup rw,memory
37 28 0:34 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,net_cls,net_prio
38 28 0:35 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,blkio
39 28 0:36 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,freezer
40 28 0:37 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,perf_event
41 28 0:38 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,pids
42 28 0:39 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,hugetlb
43 21 0:12 / /sys/kernel/tracing rw,relatime shared:19 - tracefs none rw
93 21 0:41 / /sys/kernel/config rw,relatime shared:20 - configfs configfs rw
96 0 253:0 / / rw,relatime shared:1 - xfs /dev/mapper/rhel-root rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
20 22 0:20 / /proc/sys/fs/binfmt_misc rw,relatime shared:26 - autofs systemd-1 rw,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24611
44 21 0:8 / /sys/kernel/debug rw,relatime shared:27 - debugfs debugfs rw
45 23 0:42 / /dev/hugepages rw,relatime shared:28 - hugetlbfs hugetlbfs rw,pagesize=2M
46 23 0:19 / /dev/mqueue rw,relatime shared:29 - mqueue mqueue rw
47 21 0:43 / /sys/fs/fuse/connections rw,relatime shared:30 - fusectl fusectl rw
116 96 253:4 / /var rw,nodev,relatime shared:61 - xfs /dev/mapper/rhel-var rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
113 96 253:8 / /tmpfs rw,relatime shared:63 - xfs /dev/mapper/rhel-tmpfs rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
114 96 253:5 / /home rw,nosuid,nodev,relatime shared:65 - xfs /dev/mapper/rhel-home rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
115 96 253:2 / /tmp rw,nosuid,nodev,noexec,relatime shared:67 - xfs /dev/mapper/rhel-tmp rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
123 116 253:7 / /var/tmp rw,nosuid,nodev,noexec,relatime shared:69 - xfs /dev/mapper/rhel-var_tmp rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
124 116 253:6 / /var/log rw,nosuid,nodev,noexec,relatime shared:71 - xfs /dev/mapper/rhel-var_log rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
131 124 253:3 / /var/log/audit rw,nosuid,nodev,noexec,relatime shared:73 - xfs /dev/mapper/rhel-var_log_audit rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
112 96 259:1 / /boot rw,nosuid,nodev,relatime shared:75 - xfs /dev/nvme0n1p1 rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
461 27 0:45 / /run/user/0 rw,nosuid,nodev,relatime shared:246 - tmpfs tmpfs rw,size=792716k,mode=700
495 27 0:24 /netns /run/netns rw,nosuid,nodev shared:24 - tmpfs tmpfs rw,mode=755
473 96 0:46 / /mnt/hgfs rw,nosuid,nodev,relatime shared:253 - fuse.vmhgfs-fuse vmhgfs-fuse rw,user_id=0,group_id=0,allow_other

Comment 8 Giuseppe Scrivano 2020-12-16 15:17:48 UTC
yes, I thought that could be a result of the STIG profile requirements.

I don't see any noexec that could cause the issue you've seen though.

I'll try to reproduce locally.

Could you show me the output for "podman info" to verify where your storage is located, and to make sure I'll use the same settings.

Comment 9 bixlerjd 2020-12-16 15:26:39 UTC
Thank you. 

host:
  arch: amd64
  buildahVersion: 1.15.1
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.20-2.module+el8.3.0+8221+97165c3f.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.20, commit: 77ce9fd1e61ea89bd6cdc621b07446dd9e80e5b6'
  cpus: 2
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-240.1.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 5250383872
  memTotal: 8117448704
  ociRuntime:
    name: runc
    package: runc-1.0.0-68.rc92.module+el8.3.0+8221+97165c3f.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8455712768
  swapTotal: 8455712768
  uptime: 27h 50m 31.16s (Approximately 1.12 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 20
    paused: 0
    running: 0
    stopped: 20
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 6
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 1
  Built: 1600877882
  BuiltTime: Wed Sep 23 12:18:02 2020
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.0.5

Comment 10 Daniel Walsh 2020-12-16 15:45:08 UTC
/tmp and /var/tmp are mounted noexec.  Not sure if anything is happening there?

Comment 11 Daniel Walsh 2020-12-16 15:46:33 UTC
If you run crun by hand, do you see the errors? e.g.:

crun list

Comment 12 bixlerjd 2020-12-16 15:56:54 UTC
I was redirecting the output of crun list to a text file when I noticed the failure appears to be intermittent. Yes, I do see the error about not being able to open libyajl.so.2. But if I run it again at random it will print the NAME PID STATUS BUNDLE PATH table as if acting normally, then intermittently start erroring again.

With runc list I do not get the error.

Comment 13 Giuseppe Scrivano 2020-12-16 17:52:56 UTC
could you try disabling fapolicyd.service (systemctl stop fapolicyd.service)?  Does it make any difference?

Comment 14 bixlerjd 2020-12-16 17:59:25 UTC
Yes, it does. With fapolicyd stopped I am able to run podman with either runc or crun now.

Comment 15 Daniel Walsh 2020-12-16 18:28:45 UTC
So the whitelisting daemon needs to whitelist these libraries, or just runc and crun?

Comment 16 Steve Grubb 2020-12-16 18:43:16 UTC
What kind of audit events are you getting? (ausearch -m fanotify -i) And which policy are you running with? There is a known-libs policy and a restrictive policy.

Comment 17 Steve Grubb 2020-12-16 18:45:46 UTC
Also, there are bug #1905906 and bug #1905895, which may be applicable. These are supposed to be moving along towards async errata.

Comment 18 Giuseppe Scrivano 2020-12-16 18:59:00 UTC
I've set up a VM with the STIG profile and I am able to reproduce the issue. These are the events I get:

----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:54:52.505:25078) : proctitle=/usr/bin/runc init
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:54:52.505:25078) : item=0 name=/lib64/libpthread.so.0 inode=16806020 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:54:52.505:25078) : cwd=/var/lib/containers/storage/overlay/fb28dd4f44d9205ab1dfebc1c7b21203c5160062c17118cf196d1e2176877011/merged
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:54:52.505:25078) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7f6ac5e29d20 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2213 pid=2221 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=5 comm=7 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0 key=unsuccessful-access
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:54:52.505:25078) : resp=deny
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:54:52.506:25081) : proctitle=/usr/bin/runc init
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:54:52.506:25081) : item=0 name=/lib64/libpthread.so.0 inode=16806020 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:54:52.506:25081) : cwd=/var/lib/containers/storage/overlay/fb28dd4f44d9205ab1dfebc1c7b21203c5160062c17118cf196d1e2176877011/merged
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:54:52.506:25081) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7ffd0a7cb4f0 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2213 pid=2221 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=5 comm=7 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0 key=unsuccessful-access
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:54:52.506:25081) : resp=deny
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:55:17.709:29085) : proctitle=/usr/bin/conmon --api-version 1 -c d9a442263ea12b0e0809e344e084bdaccb4528338cb273fc11fda3accf462399 -u d9a442263ea12b0e0809e344e 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:55:17.709:29085) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:55:17.709:29085) : cwd=/var/lib/containers/storage/overlay-containers/d9a442263ea12b0e0809e344e084bdaccb4528338cb273fc11fda3accf462399/userdata 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:55:17.709:29085) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7ffd71a2e300 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2428 pid=2429 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=5 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:55:17.709:29085) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:55:17.809:29641) : proctitle=/usr/bin/crun delete --force d9a442263ea12b0e0809e344e084bdaccb4528338cb273fc11fda3accf462399 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:55:17.809:29641) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:55:17.809:29641) : cwd=/home/giuseppe 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:55:17.809:29641) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7fccb17ced20 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2338 pid=2431 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=5 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:55:17.809:29641) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:55:17.810:29645) : proctitle=/usr/bin/crun delete --force d9a442263ea12b0e0809e344e084bdaccb4528338cb273fc11fda3accf462399 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:55:17.810:29645) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:55:17.810:29645) : cwd=/home/giuseppe 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:55:17.810:29645) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7ffd317426f0 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2338 pid=2431 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=5 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:55:17.810:29645) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:56:20.097:40570) : proctitle=/usr/bin/conmon --api-version 1 -c 16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150 -u 16f14976fe4757217db5ac370 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:56:20.097:40570) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:56:20.097:40570) : cwd=/var/lib/containers/storage/overlay-containers/16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150/userdata 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:56:20.097:40570) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7f2233449d20 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2733 pid=2734 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=7 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:56:20.097:40570) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:56:20.097:40571) : proctitle=/usr/bin/conmon --api-version 1 -c 16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150 -u 16f14976fe4757217db5ac370 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:56:20.097:40571) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:56:20.097:40571) : cwd=/var/lib/containers/storage/overlay-containers/16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150/userdata 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:56:20.097:40571) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7ffe0e8e2420 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2733 pid=2734 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=7 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:56:20.097:40571) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:56:20.212:40635) : proctitle=/usr/bin/crun delete --force 16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:56:20.212:40635) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:56:20.212:40635) : cwd=/root 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:56:20.212:40635) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7f3b222b1d20 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2644 pid=2736 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=7 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:56:20.212:40635) : resp=deny 
----
node=localhost.localdomain type=PROCTITLE msg=audit(12/16/2020 19:56:20.212:40636) : proctitle=/usr/bin/crun delete --force 16f14976fe4757217db5ac370d1d6e9ff99df5f8040117fbe8d8ebe17b681150 
node=localhost.localdomain type=PATH msg=audit(12/16/2020 19:56:20.212:40636) : item=0 name=/lib64/libyajl.so.2 inode=17410634 dev=fd:00 mode=file,755 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:lib_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
node=localhost.localdomain type=CWD msg=audit(12/16/2020 19:56:20.212:40636) : cwd=/root 
node=localhost.localdomain type=SYSCALL msg=audit(12/16/2020 19:56:20.212:40636) : arch=x86_64 syscall=openat success=no exit=EPERM(Operation not permitted) a0=0xffffff9c a1=0x7fff39a46960 a2=O_RDONLY|O_CLOEXEC a3=0x0 items=1 ppid=2644 pid=2736 auid=giuseppe uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=7 comm=3 exe=/ subj=unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 key=unsuccessful-access 
node=localhost.localdomain type=FANOTIFY msg=audit(12/16/2020 19:56:20.212:40636) : resp=deny 


They might be caused by runc/crun re-exec'ing themselves; that code is the same in both: https://github.com/containers/crun/blob/46ead33b7734cffa8a220a1440ffc35b1ff657d9/src/libcrun/cloned_binary.c#L512-L536

Comment 19 Steve Grubb 2020-12-16 19:14:38 UTC
Looks like the executable is '/' and the command is '3', which is a problem. That would certainly cause a denial, as that is very much wrong. For fapolicyd to work correctly it needs real info to work with. It may not be suitable for a container environment; I have never personally tested it around containers.

Comment 20 Steve Grubb 2020-12-16 19:32:13 UTC
In the meantime, I'd say that fapolicyd and containers don't work together. The fanotify kernel interface provides an fd and a pid without any namespace information associated with either, which is probably necessary to correctly derive the information needed to make an access decision. This investigation is on the team's TODO list, and I think there's a Jira ticket somewhere for this work.
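The limitation described here can be illustrated with a generic sketch (runs on any Linux box; output varies per system): all fanotify really hands the listening daemon is a pid and an fd, which userspace must then resolve through /proc. A binary that re-exec'ed itself from an anonymous in-memory copy, as runc/crun do, has no meaningful on-disk path behind /proc/<pid>/exe, which is consistent with the exe=/ seen in the audit events above.

```shell
# How a monitoring daemon identifies a process it only knows by pid:
# /proc/<pid>/exe is a symlink to the running executable. For a normal
# process it resolves to a real path; for a binary re-exec'ed from an
# anonymous memfd there is nothing sensible for it to point at.
pid=$$
readlink "/proc/$pid/exe"   # path of the current shell, as seen in this namespace
readlink "/proc/$pid/cwd"
```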

Comment 21 bixlerjd 2020-12-17 13:14:25 UTC
Understood. Can proceed with fapolicyd off for now.
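For cases where the denied files are ordinary on-disk binaries, the usual RHEL 8 pattern is to add them to fapolicyd's trust database rather than stopping the daemon. This is only a sketch: the sizes and SHA-256 hashes below are placeholders and must match the actual files, and per the previous comments it may not help here, since the denial hits a re-exec'ed in-memory copy rather than the on-disk binary.

```
# /etc/fapolicyd/fapolicyd.trust -- one entry per line: path size-in-bytes sha256
# (placeholder sizes and hashes, for illustration only)
/usr/bin/runc 9584728 e1d2...placeholder...
/usr/bin/crun 403104 a7b3...placeholder...
/usr/lib64/libyajl.so.2 57312 9c4f...placeholder...
```

Entries are normally generated rather than hand-written, e.g. with fapolicyd-cli --file add /usr/bin/runc followed by fapolicyd-cli --update to make the daemon reload the trust database.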

Comment 22 Igor Mironov 2021-05-06 10:30:21 UTC
I encountered the same issue--first with runc and then with crun, with the same symptoms as above. I still needed to be able to run oscap reports and apply mitigations (on the understanding that fapolicyd had to be temporarily disabled). The attached ospp-podman.xml file contains a tailoring that skips the rule that requires that fapolicyd be enabled. Perhaps this will help others.

Comment 24 Igor Mironov 2021-05-06 10:33:44 UTC
Created attachment 1780185 [details]
The tailoring for ospp profile to skip the "fapolicy enabled" rule
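The attached XML itself is not reproduced in this report. For readers wanting the same workaround, an XCCDF 1.2 tailoring that deselects a single rule generally looks like the sketch below; the profile and rule IDs are assumptions based on the RHEL 8 SCAP Security Guide content and should be verified against the installed data stream before use.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.example_tailoring_ospp-podman">
  <xccdf:version time="2021-05-06T10:33:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.example_profile_ospp_no_fapolicyd"
                 extends="xccdf_org.ssgproject.content_profile_ospp">
    <xccdf:title>OSPP without the fapolicyd-enabled rule</xccdf:title>
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_service_fapolicyd_enabled"
                  selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>
```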

Comment 29 Radovan Sroka 2022-10-05 16:08:26 UTC
(In reply to bixlerjd from comment #0)
> Description of problem: OCI runtime error whether using runc or crun in RHEL
> 8.3 with beta STIG profile applied. 
> 
> 
> Version-Release number of selected component (if applicable):
> podman-2.0.5-5.module+el8.3.0+8221+97165c3f.x86_64
> 
> 
> How reproducible: Install RHEL 8.3 with DISA STIG 8 beta profile applied.
> Install container tools, install Podman. 
> 
> 
> Steps to Reproduce:
> 1. podman pull registry.access.redhat.com/ubi8/ubi
> 2. podman run registry.access.redhat.com/ubi8/ubi cat /etc/os-release
> 3. podman run --runtime /usr/bin/crun registry.access.redhat.com/ubi8/ubi
> cat /etc/os-release
> 
> Actual results: podman run registry.access.redhat.com/ubi8/ubi cat
> /etc/os-release returns Error: /usr/bin/runc: error while loading shared
> libraries: libpthread.so.0: cannot open shared object file: Operation not
> permitted: OCI runtime permissions denied error. 
> 
> Try using crun, another error: podman run --runtime /usr/bin/crun
> registry.access.redhat.com/ubi8/ubi cat /etc/os-release returns
> /usr/bin/crun: error while loading shared libraries: libyajl.so.2: cannot
> open shared object file: Operation not permitted ERRO[0000] Error removing
> container 6694e19d99e41376dd69c21c7d5de165f1678358decd4d297580c7fef17d7519
> from runtime after operation failed Error: /usr/bin/crun: error while
> loading shared libraries: libyajl.so.2: cannot open shared object file:
> Operation not permitted: OCI runtime permission denied error 
> 
> 
> Expected results: to run the container
> 
> 
> Additional info: sestatus currently showing disabled to eliminate that as a
> possible issue with troubleshooting. running as root user, have tested with
> separate user that is in wheel group.

I spent some time on the original issue and it seems to be gone.
I tried all RHEL >= 8.3 systems with the latest z-stream changes and it just works.

I would like to point out that the original issue was about libyajl.so not being trusted, which does not have anything to do with containers, just with podman as a tool.

So I'm going to close the bug. Feel free to reopen if needed.

