Bug 1957904 - Confined selinux users of type staff_u and user_u cannot run rootless podman containers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: container-selinux
Version: 8.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Daniel Walsh
QA Contact: Edward Shen
URL:
Whiteboard:
Depends On:
Blocks: 1186913
 
Reported: 2021-05-06 18:25 UTC by Rose Colombo
Modified: 2021-11-10 10:04 UTC (History)
CC List: 7 users

Fixed In Version: container-selinux-2.163.0-2.el8 or newer
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-09 17:37:47 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pmagotra: needinfo+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:4154 0 None None None 2021-11-09 17:38:28 UTC

Description Rose Colombo 2021-05-06 18:25:06 UTC
Description of problem:
Confined SELinux users of type staff_u and user_u cannot run rootless podman containers.

Version-Release number of selected component (if applicable):
container-selinux-2.155.0-1.module+el8.3.1+9857+68fb1526.noarch

How reproducible:
100%

Steps to Reproduce:
# useradd -Z staff_u john
# useradd -Z user_u john2
# semanage login -l
Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
john                 staff_u              s0-s0:c0.c1023       *
john2                user_u               s0                   *
root                 unconfined_u         s0-s0:c0.c1023       *


 # ssh john@localhost
[john@ate ~]$ podman run -it registry.access.redhat.com/rhel7 sleep 20
Trying to pull registry.access.redhat.com/rhel7:latest...
Getting image source signatures
Copying blob 6e121ccea590 done  
Copying blob 13f131153d86 done  
Copying config 5a286023e7 done  
Writing manifest to image destination
Storing signatures
standard_init_linux.go:219: exec user process caused: permission denied


$ exit

type=AVC msg=audit(1620308668.610:3560): avc:  denied  { transition } for  pid=201461 comm="runc:[2:INIT]" path="/usr/bin/sleep" dev="fuse" ino=33732127 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c96,c752 tclass=process permissive=0
type=SYSCALL msg=audit(1620308668.610:3560): arch=c000003e syscall=59 success=no exit=-13 a0=c00015d860 a1=c000160f20 a2=c000183950 a3=a items=0 ppid=201450 pid=201461 auid=1003 uid=1003 gid=1004 euid=1003 suid=1003 fsuid=1003 egid=1004 sgid=1004 fsgid=1004 tty=pts0 ses=17 comm="runc:[2:INIT]" exe="/" subj=staff_u:staff_r:container_runtime_t:s0 key=(null)ARCH=x86_64 SYSCALL=execve AUID="john" UID="john" GID="john" EUID="john" SUID="john" FSUID="john" EGID="john" SGID="john" FSGID="john"
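
The denial above shows a blocked process transition: the confined user's context (staff_u:staff_r:container_runtime_t) is not allowed to transition into the container domain (system_u:system_r:container_t). As a rough sketch (plain text processing, not an audit tool), the fields of interest can be pulled out of such a line like this:

```shell
# Extract the denied permission and the source/target contexts from an AVC
# denial line. The line below is copied from the audit log above (shortened).
avc='type=AVC msg=audit(1620308668.610:3560): avc:  denied  { transition } for  pid=201461 comm="runc:[2:INIT]" path="/usr/bin/sleep" scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c96,c752 tclass=process permissive=0'

echo "$avc" | grep -o '{ [a-z]* }'       # prints: { transition }
echo "$avc" | grep -o 'scontext=[^ ]*'   # prints: scontext=staff_u:staff_r:container_runtime_t:s0
echo "$avc" | grep -o 'tcontext=[^ ]*'   # prints: tcontext=system_u:system_r:container_t:s0:c96,c752
```

On a live system the usual way to collect these records is `ausearch -m avc`, which is what produced the raw lines quoted in this report.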


 # ssh john2@localhost
[john2@ate ~]$  podman run -it registry.access.redhat.com/rhel7 sleep 20
Trying to pull registry.access.redhat.com/rhel7:latest...
Getting image source signatures
Copying blob 6e121ccea590 done  
Copying blob 13f131153d86 done  
Copying config 5a286023e7 done  
Writing manifest to image destination
Storing signatures
standard_init_linux.go:219: exec user process caused: permission denied

type=AVC msg=audit(1620308706.398:3595): avc:  denied  { transition } for  pid=201594 comm="runc:[2:INIT]" path="/usr/bin/sleep" dev="fuse" ino=55042778 scontext=user_u:user_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c897,c900 tclass=process permissive=0

[john2@ate ~]$  podman run -it registry.access.redhat.com/rhel7 bash
standard_init_linux.go:219: exec user process caused: permission denied

type=AVC msg=audit(1620325268.858:3758): avc:  denied  { transition } for  pid=205904 comm="runc:[2:INIT]" path="/usr/bin/bash" dev="fuse" ino=55042612 scontext=user_u:user_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c50,c171 tclass=process permissive=0



Expected results:
The container is expected to run.

Additional info:
You can run the above examples in detached mode, but the container still exits immediately:

[john@ate ~]$ podman run -d registry.access.redhat.com/rhel7 sleep 20
Trying to pull registry.access.redhat.com/rhel7:latest...
Getting image source signatures
Copying blob 6e121ccea590 done  
Copying blob 13f131153d86 done  
Copying config 5a286023e7 done  
Writing manifest to image destination
Storing signatures
e5bdc6673039c7a906428ea3e4df369c2903d7e355361b96463937b7c5697af3
[john@ate ~]$ podman ps
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES
[john@ate ~]$ podman ps -a
CONTAINER ID  IMAGE                                    COMMAND   CREATED         STATUS                     PORTS   NAMES
e5bdc6673039  registry.access.redhat.com/rhel7:latest  sleep 20  14 seconds ago  Exited (1) 13 seconds ago          wonderful_blackburn


[john2@ate ~]$ podman unshare cat /proc/self/uid_map 
         0       1004          1
         1     427680      65536
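
The uid_map above says container UID 0 maps to host UID 1004 (john2's own UID), and container UIDs 1 through 65536 map to the host range starting at 427680. A quick arithmetic check using the values copied from that output:

```shell
# Translate a container UID to its host UID using the two ranges printed above:
#   0    1004     1      -> container UID 0         => host UID 1004
#   1    427680   65536  -> container UIDs 1..65536 => host UIDs 427680..
container_uid=1000
if [ "$container_uid" -eq 0 ]; then
  host_uid=1004
else
  host_uid=$((427680 + container_uid - 1))
fi
echo "$host_uid"   # prints 428679
```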

[john2@ate ~]$ podman info
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.22-3.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: a40e3092dbe499ea1d85ab339caea023b74829b9'
  cpus: 2
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: ate
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1005
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
  kernel: 4.18.0-240.1.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 224948224
  memTotal: 1904906240
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/user/1004/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.3.1+9857+68fb1526.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 2158227456
  swapTotal: 2218782720
  uptime: 216h 22m 18.19s (Approximately 9.00 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - 192.168.0.237:5000
store:
  configFile: /home/john2/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-2.module+el8.3.1+9857+68fb1526.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.3
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/john2/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /run/user/1004/containers
  volumePath: /home/john2/.local/share/containers/storage/volumes
version:
  APIVersion: "2"
  Built: 1612819146
  BuiltTime: Mon Feb  8 16:19:06 2021
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.2.1

Comment 3 Daniel Walsh 2021-05-24 19:57:32 UTC
Could they take the upstream container-selinux package, install/compile it, and see if that fixes their problem? If yes, then just use that policy.

Comment 4 Rose Colombo 2021-05-25 14:45:25 UTC
I will try this in my test env first; which version do you suggest I try? I tried 2.165, but on a RHEL 8 system this created quite a headache of dependency resolutions that had to be handled manually.

This does not work on the latest RHEL 8.4 update, which has container-selinux-2.158.0-1.module+el8.4.0+10607+f4da7515.noarch; is there a reason you think a lower upstream version would get this working?

Comment 7 Daniel Walsh 2021-06-11 13:44:48 UTC
I just got this working on my laptop.

https://github.com/containers/container-selinux/releases/tag/v2.163.0

Could you download and compile the policy to see if it works for you?

This will be in the next RHEL release of container-selinux policy.
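
Roughly, trying the tagged upstream release as suggested looks like this. This is a sketch, not vendor instructions: it assumes git, make, and the selinux-policy-devel package are present, that the repo's default make target builds the container.pp module, and it must be run with root privileges to load the module:

```shell
# Fetch the tagged upstream release of the policy sources.
git clone https://github.com/containers/container-selinux
cd container-selinux
git checkout v2.163.0

# Build the policy module from container.te/.if/.fc and load it into the
# running policy (requires selinux-policy-devel; run as root).
make
semodule -i container.pp
```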

Comment 8 Tom Sweeney 2021-06-11 14:03:02 UTC
Wayah, could you try Dan's fix, please?

In the meantime, I'm going to set this to POST and hand it off to Jindrich for packaging needs.

Comment 9 Rose Colombo 2021-06-11 16:00:12 UTC
@dwalsh Can you provide me with a build for this? I attempted to make a scratch-build, but it's still not working for me after using the newer version.

Comment 10 Daniel Walsh 2021-06-12 09:58:46 UTC
If this is failing for scratch builds, then it should be the same. What AVCs are you seeing?

Perhaps we can work on this together next week. I got it working on my laptop, but I am not sure how robust it is.

Comment 12 Daniel Walsh 2021-06-16 14:03:37 UTC
Can you just patch out that line? This is a difference between upstream selinux-policy and RHEL 8.

Comment 13 Jindrich Novy 2021-06-16 14:12:27 UTC
I can certainly do that. It makes me think: would it make sense to create a RHEL branch for container-selinux too? This is the second thing we need to maintain downstream, which would become messy over time.

The first one is http://pkgs.devel.redhat.com/cgit/rpms/container-selinux/tree/rhel-fix.patch?h=stream-container-tools-rhel8-rhel-8.5.0&id=ad778a796ff386322f8d987e1d54d80c399703e2

Comment 14 Daniel Walsh 2021-06-16 14:15:47 UTC
Yup, either way it gets messy. Could you open a PR against container-selinux to add it upstream?

Comment 15 Jindrich Novy 2021-06-16 14:19:08 UTC
Confirming that removing that line fixes the RHEL 8.5 build. Can we get a qa ack here, please?

Comment 16 Jindrich Novy 2021-06-16 14:20:35 UTC
Fair enough, let's wait for the third build fix and I will do a PR upstream :-)

Comment 18 Daniel Walsh 2021-06-24 16:58:09 UTC
I actually think it would be best for us to look at changing the way podman works, i.e. to stop using system_r and system_u, and start using the calling program's user and role.

Could you try adding

podman run --security-opt label=user:user_u --security-opt label=role:user_r ...

and see if this solves the problem for you?

Comment 19 Edward Shen 2021-06-25 04:10:53 UTC
Dan, I got the error below when I tried to run this command.

[john2@kvm-02-guest05 ~]$ podman run --security-opt label=user:user_u --security-opt label=role:user_r -it registry.access.redhat.com/rhel7 sleep 20
Error: OCI runtime error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: failed to set /proc/self/attr/keycreate on procfs: write /proc/self/attr/keycreate: invalid argument

Is it because podman is too old? I have the same container-tools version as the reporter:
[john2@kvm-02-guest05 ~]$ podman info
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.22-3.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: a40e3092dbe499ea1d85ab339caea023b74829b9'
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: kvm-02-guest05.rhts.eng.brq.redhat.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 4.18.0-240.22.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 1678761984
  memTotal: 3920019456
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/user/1002/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.3.1+9857+68fb1526.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 4257214464
  swapTotal: 4257214464
  uptime: 17h 16m 8.59s (Approximately 0.71 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/john2/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-2.module+el8.3.1+9857+68fb1526.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.3
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/john2/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /run/user/1002/containers
  volumePath: /home/john2/.local/share/containers/storage/volumes
version:
  APIVersion: "2"
  Built: 1612819146
  BuiltTime: Mon Feb  8 22:19:06 2021
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.2.1

Comment 20 Daniel Walsh 2021-06-25 08:14:28 UTC
What AVCs are you seeing?

Comment 21 Edward Shen 2021-06-28 08:31:48 UTC
----
time->Mon Jun 28 10:25:19 2021
type=USER_AVC msg=audit(1624868719.321:235): pid=940 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=2)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
----
time->Mon Jun 28 10:25:34 2021
type=USER_AVC msg=audit(1624868734.500:236): pid=940 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=3)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
----
time->Mon Jun 28 10:26:58 2021
type=USER_AVC msg=audit(1624868818.190:301): pid=940 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=4)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'

Comment 22 Daniel Walsh 2021-06-28 14:15:13 UTC
Those are not AVCs (well, not denied AVCs anyway).

I think this is an out-of-date runc.

Comment 23 Tom Sweeney 2021-06-28 14:26:00 UTC
Based on Dan's last comments, I'm setting this back to ON_QA.

Comment 31 errata-xmlrpc 2021-11-09 17:37:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4154

