Bug 1995900 - Podman does not honor the userns configuration
Summary: Podman does not honor the userns configuration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-20 05:53 UTC by Sameer
Modified: 2022-05-10 13:44 UTC
CC List: 12 users

Fixed In Version: podman-4.0.0-0.29.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 13:27:31 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-94177 0 None None None 2021-08-20 05:54:09 UTC
Red Hat Product Errata RHSA-2022:1762 0 None None None 2022-05-10 13:27:47 UTC

Description Sameer 2021-08-20 05:53:49 UTC
Description of problem:

The "userns" parameter in /usr/share/containers/containers.conf set to "auto" but it has no effect by running the container.

Also, "root-auto-userns-user" in /etc/containers/storage.conf is set to a valid user with mappings in /etc/subuid and /etc/subgid file but still no luck.

However, it works expected by exporting environment variable 


Version-Release number of selected component (if applicable):

RHEL 8

Podman latest version (3.2.3)


How reproducible:

    - Reproducible 100%

Steps to Reproduce:

1. Add subuid and subgid entries for the user

echo 'storage:100000:65536' >> /etc/subuid
echo 'storage:100000:65536' >> /etc/subgid

2. In /etc/containers/storage.conf, uncomment the parameter below

# root-auto-userns-user = "storage"

3. Add the parameters below to /etc/containers/containers.conf and/or uncomment them in /usr/share/containers/containers.conf

[containers]
userns = "auto"
[engine]
namespace = "storage"

4. Run a container as the root user and check the UID mappings

    # podman run --rm --name test -it ubi8 bash
    [root@f55bdea6dcae /]# cat /proc/self/uid_map 
         0          0 4294967295

Actual results:

    # podman run --rm --name test -it ubi8 bash
    [root@f55bdea6dcae /]# cat /proc/self/uid_map 
         0          0 4294967295

Expected results:

    # podman run --rm --name test1 --userns=auto -it ubi8 bash
    [root@f7a6504535b9 /]# cat /proc/self/uid_map 
         0     100000       1024    

Check the UID mapping in both cases.

Additional info:

This works as expected when using the "--userns=auto" parameter on the podman run command line, and also when exporting "PODMAN_USERNS=auto" and running the container afterwards.

1) root@servername# podman run --rm --name test1 --userns=auto -it ubi8 bash

2) root@servername# export PODMAN_USERNS=auto
   root@servername# podman run --rm --name test -it ubi8 bash
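
For reference, in both workaround cases the container's UID map should show the remapped range rather than the full host range, matching the Expected results above (the container prompt below is illustrative; the mapped range comes from the subuid/subgid entries added in step 1):

    [root@<container-id> /]# cat /proc/self/uid_map
         0     100000       1024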

Comment 1 Tom Sweeney 2021-08-20 19:21:10 UTC
Giuseppe, can you take a look at this please?

Comment 3 Giuseppe Scrivano 2022-01-03 16:44:51 UTC
fixed upstream with: https://github.com/containers/podman/pull/12621

Comment 4 Tom Sweeney 2022-01-04 18:28:50 UTC
Assigning to Jindrich for any further BZ and packaging needs.

Comment 5 Jindrich Novy 2022-01-05 12:59:23 UTC
This is merged into the main branch, so leaving this in POST; it will appear packaged once podman 4.x is released.

Comment 6 Jindrich Novy 2022-01-05 13:03:57 UTC
Can we get a QA ack?

Comment 7 Alex Jia 2022-01-25 10:29:27 UTC
This bug still exists on podman-4.0.0-0.20.module+el8.6.0+12939+aa33c90f.x86_64
with runc-1.0.2-1.module+el8.6.0+12698+b6644727.x86_64.

[root@kvm-05-guest07 ~]# grep 65536 /etc/subuid
test:100000:65536
storage:100000:65536
[root@kvm-05-guest07 ~]# grep 65536 /etc/subgid
test:100000:65536
storage:100000:65536
[root@kvm-05-guest07 ~]# vi /etc/containers/storage.conf
[root@kvm-05-guest07 ~]# grep root-auto-userns-user /etc/containers/storage.conf
root-auto-userns-user = "storage"
[root@kvm-05-guest07 ~]# vi /etc/containers/containers.conf
[root@kvm-05-guest07 ~]# grep containers -A5 /etc/containers/containers.conf
[containers]
userns = "auto"
[engine]
namespace = "storage"
[root@kvm-05-guest07 ~]# podman run --rm --name test -it ubi8 bash
[root@03abd13fa315 /]# cat /proc/self/uid_map
         0          0 4294967295

Comment 8 Jindrich Novy 2022-01-25 12:21:07 UTC
Please don't test against a very old development build of podman-4.x-0.x. I will switch this bug to MODIFIED once a release candidate or newer build of podman 4.x is available.

Comment 10 Alex Jia 2022-02-10 05:00:37 UTC
A new issue is introduced by podman-4.0.0-0.25.el8: the container can't be started with the
previous configuration in /etc/containers/containers.conf.

[root@kvm-08-guest18 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)

[root@kvm-08-guest18 ~]# rpm -q podman runc systemd kernel
podman-4.0.0-0.25.module+el8.6.0+14174+8e1b9b69.x86_64
runc-1.1.0-1.module+el8.6.0+14131+b9baa4cc.x86_64
systemd-239-56.el8.x86_64
kernel-4.18.0-364.el8.x86_64

[root@kvm-08-guest18 ~]# echo 'storage:100000:65536' >> /etc/subuid
[root@kvm-08-guest18 ~]# echo 'storage:100000:65536' >> /etc/subgid
[root@kvm-08-guest18 ~]# vi /etc/containers/storage.conf
[root@kvm-08-guest18 ~]# grep root-auto-userns-user /etc/containers/storage.conf
root-auto-userns-user = "storage"
[root@kvm-08-guest18 ~]# vi /etc/containers/containers.conf
[root@kvm-08-guest18 ~]# grep containers -A5 /etc/containers/containers.conf
[containers]
userns = "auto"
[engine]
namespace = "storage"
[root@kvm-08-guest18 ~]# podman run --rm --name test -it ubi8 bash
Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 8671113e1c57 done
Copying blob 5dcbdc60ea6b done
Copying config b81e86a2cb done
Writing manifest to image destination
Storing signatures
Error: runc: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount /proc/self/fd/12:/sys/fs/cgroup/systemd (via /proc/self/fd/13), flags: 0x20502f: operation not permitted: OCI permission denied

It's okay if all of the configuration is removed from /etc/containers/containers.conf,
but the result is still not as expected.

[root@kvm-08-guest18 ~]# podman run --rm --name test -it ubi8 bash
[root@e5136331bc19 /]# cat /proc/self/uid_map
         0          0 4294967295

Comment 11 Jindrich Novy 2022-02-10 06:48:52 UTC
Thanks Alex, can you please attach the contents of your /etc/containers/containers.conf? Matt, should this somehow be handled when upgrading to podman 4 from an older release?

Comment 12 Alex Jia 2022-02-10 12:37:01 UTC
(In reply to Jindrich Novy from comment #11)
> Thanks Alex, can you please attach contents of your
> /etc/containers/containers.conf? 

The containers.conf file was originally empty;
I only added the following content to it.

[root@kvm-08-guest18 ~]# cat /etc/containers/containers.conf
[containers]
userns = "auto"
[engine]
namespace = "storage"

Comment 13 Matthew Heon 2022-02-10 14:24:38 UTC
We think this may be a runc bug - we're seeing it in gating tests as well. Have not yet managed to track it down, but we are aware of it. Testing with crun works without issue.
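
For reference, a minimal way to cross-check the same reproducer with crun instead of runc (this assumes crun is installed; podman run accepts an OCI runtime name or path via --runtime, and the command below is an illustrative sketch, not one taken from the gating tests):

    # podman run --runtime crun --rm --name test -it ubi8 bash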

Comment 14 Tom Sweeney 2022-02-10 21:46:32 UTC
Alex, nice find in the testing.  I'm going to set this back to Assigned and will hand it off to Matt.

Comment 16 Daniel Walsh 2022-02-11 02:03:26 UTC
I am not sure that this is fixed.

Comment 18 Giuseppe Scrivano 2022-02-15 16:31:50 UTC
I think it is fixed upstream with 89ee302a9f98e71138da5fd80a0a004f2b40160b

Comment 19 Alex Jia 2022-02-21 23:39:14 UTC
This bug has been verified on podman-4.0.0-0.29.module+el8.6.0+14295+adafbc4c.

[root@hpe-dl380pgen8-02-vm-9 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)

[root@hpe-dl380pgen8-02-vm-9 ~]# rpm -q podman runc systemd kernel
podman-4.0.0-0.29.module+el8.6.0+14295+adafbc4c.x86_64
runc-1.0.3-1.module+el8.6.0+14295+adafbc4c.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-367.el8.x86_64

[root@hpe-dl380pgen8-02-vm-9 ~]# echo 'storage:100000:65536' >> /etc/subuid
[root@hpe-dl380pgen8-02-vm-9 ~]# echo 'storage:100000:65536' >> /etc/subgid
[root@hpe-dl380pgen8-02-vm-9 ~]# vi /etc/containers/storage.conf
[root@hpe-dl380pgen8-02-vm-9 ~]# grep root-auto-userns-user /etc/containers/storage.conf
root-auto-userns-user = "storage"
[root@hpe-dl380pgen8-02-vm-9 ~]# vi /etc/containers/containers.conf
[root@hpe-dl380pgen8-02-vm-9 ~]# grep containers -A5 /etc/containers/containers.conf
[containers]
userns = "auto"
[engine]
namespace = "storage"
[root@hpe-dl380pgen8-02-vm-9 ~]# podman run --rm --name test -it ubi8 bash
Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 8671113e1c57 done
Copying blob 5dcbdc60ea6b done
Copying config b81e86a2cb done
Writing manifest to image destination
Storing signatures
[root@c9798282612a /]# cat /proc/self/uid_map
         0     100000       1024

Comment 22 Alex Jia 2022-02-28 16:41:42 UTC
This bug has been verified on podman-4.0.0-3.module+el8.6.0+14305+6b14f34e.

[root@ibm-x3650m4-01-vm-14 ~]# rpm -q podman runc systemd kernel
podman-4.0.0-3.module+el8.6.0+14305+6b14f34e.x86_64
runc-1.0.3-2.module+el8.6.0+14276+008f0e37.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-369.el8.x86_64

[root@ibm-x3650m4-01-vm-14 ~]# grep containers -A5 /etc/containers/containers.conf
[containers]
userns = "auto"
[engine]
namespace = "storage"
[root@ibm-x3650m4-01-vm-14 ~]# grep root-auto-userns-user /etc/containers/storage.conf
root-auto-userns-user = "storage"
[root@ibm-x3650m4-01-vm-14 ~]# podman run --rm --name test -it ubi8 cat /proc/self/uid_map
         0     100000       1024

Comment 24 errata-xmlrpc 2022-05-10 13:27:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1762

