Bug 1903983 - rootless mode doesn't work
Summary: rootless mode doesn't work
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kernel
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.4
Assignee: Scott Mayhew
QA Contact: Chao Ye
URL:
Whiteboard:
Duplicates: 1931800 1935609
Depends On:
Blocks: 1817517
 
Reported: 2020-12-03 09:55 UTC by Marius Vollmer
Modified: 2023-09-15 00:52 UTC
CC List: 34 users

Fixed In Version: kernel-4.18.0-293.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 14:24:06 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
podman-info (2.68 KB, text/plain)
2021-02-10 09:01 UTC, Katerina Koukiou
podman-run-debug (17.34 KB, text/plain)
2021-02-10 10:00 UTC, Katerina Koukiou


Links
Gitlab redhat/rhel/src/kernel rhel-8 merge_requests 45 (last updated 2021-02-22 23:07:52 UTC)

Internal Links: 1932739

Description Marius Vollmer 2020-12-03 09:55:22 UTC
Description of problem:

Running containers as non-root fails with

    Error: container_linux.go:370: starting container process caused: 
    process_linux.go:459: container init caused: rootfs_linux.go:59: mounting 
    "sysfs" to rootfs at "/sys" caused: operation not permitted: OCI runtime 
    permission denied error

Version-Release number of selected component (if applicable):
podman-2.1.1-3.module+el8.3.1+8686+2a59bca3.x86_64

How reproducible:
Always

Steps to Reproduce:
1. id
uid=1001(admin) gid=1001(admin) groups=1001(admin),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
2. podman run -d --name test-sh alpine sh
Trying to pull registry.access.redhat.com/alpine...
  name unknown: Repo not found
Trying to pull registry.redhat.io/alpine...
  unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Trying to pull docker.io/library/alpine...
Getting image source signatures
…  
…  
Writing manifest to image destination
Storing signatures
Error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "sysfs" to rootfs at "/sys" caused: operation not permitted: OCI runtime permission denied error

Comment 1 Tom Sweeney 2020-12-03 14:27:46 UTC
Marius,

The error indicates an authentication issue.  Did you first log in to registry.access.redhat.com with `podman login` before doing the pull?

Does this command work for you?  If so, that's a further indication of an authentication error.

podman run -d --name test-sh docker.io/library/alpine sh

Comment 2 Matthew Heon 2020-12-03 14:37:13 UTC
Tom - no, this one is actually a known issue with rootless, though we originally started picking it up on the gating test runs for v2.2.0, not 2.1.1. I'm working on resolving it in https://github.com/containers/podman/pull/8561

Comment 3 Eduardo Minguez 2020-12-15 08:10:27 UTC
Can confirm it doesn't work in the latest CentOS Stream either:

```
$ podman start mosquitto
Error: unable to start container "6ba0cc3844e1219b515a08a1219993160ae15d0f8c579621622df59e5e0c8887": container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "sysfs" to rootfs at "/sys" caused: operation not permitted: OCI runtime permission denied error

$ rpm -qa | grep -i podman-2.1.1
podman-2.1.1-3.module_el8.4.0+575+63b40ad7.x86_64

$ podman --version
podman version 2.1.1
```

Comment 4 Eddie Jennings 2020-12-17 23:38:16 UTC
I can also confirm (CentOS Stream 8) with podman 2.1.1.

Comment 5 Matthew Heon 2021-01-20 21:46:23 UTC
Fix here is committed and will be in Podman 3.0, landing in 8.4.0. Moving to post. Tagging in Jindrich for errata and packaging.

Comment 15 Katerina Koukiou 2021-02-10 09:01:58 UTC
Created attachment 1756155 [details]
podman-info

Comment 16 Katerina Koukiou 2021-02-10 09:03:41 UTC
The issue persists for me as well:

$ runc --version
runc version spec: 1.0.2-dev

$ rpm -qf /usr/bin/runc
runc-1.0.0-70.rc92.module+el8.4.0+9804+5385893b.x86_64

podman-3.0.0-0.38rc2.module+el8.4.0+9804+5385893b.src.rpm

$ podman run -d --name swamped-crate-system --runtime /usr/bin/runc busybox:latest sleep 1000
Error: time="2021-02-10T03:59:09-05:00" level=error msg="container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting \"sysfs\" to rootfs at \"/sys\" caused: operation not permitted": OCI permission denied

Comment 17 Giuseppe Scrivano 2021-02-10 09:54:23 UTC
thanks for confirming it.

I am still not able to reproduce locally. Do you have any files under ~/.config/containers?

Could you please attach the output you get when you run the following?

$ podman --runtime /usr/bin/runc --log-level debug run -d --name swamped-crate-system busybox:latest sleep 1000

Comment 18 Katerina Koukiou 2021-02-10 09:59:53 UTC
@Giuseppe, I don't have any files under ~/.config/containers or any custom configuration in general. Please find the debug run in the attachments.

Comment 19 Katerina Koukiou 2021-02-10 10:00:18 UTC
Created attachment 1756161 [details]
podman-run-debug

Comment 20 Giuseppe Scrivano 2021-02-10 10:11:38 UTC
Thanks, that helped.

I think it is a regression in the kernel; I can reproduce it on 4.18.0-283.

Simpler reproducer:

$ unshare -rmn mount -t sysfs sysfs /sys && echo it works
mount: /sys: permission denied.

expected result (tested both on Fedora 5.10.10-200.fc33 and 4.18.0-240.10.1.el8_3):

$ unshare -rmn mount -t sysfs sysfs /sys && echo it works
it works

The kernel refuses to mount sysfs even if the user namespace owns the network namespace.

Moving to the kernel for further triage.

Comment 23 Eric Biederman 2021-02-10 19:26:28 UTC
(In reply to Giuseppe Scrivano from comment #20)
> thanks that helped.
> 
> I think it is a regression in the kernel, I can reproduce on 4.18.0-283
> 
> Simpler reproducer:
> 
> $ unshare -rmn mount -t sysfs sysfs /sys && echo it works
> mount: /sys: permission denied.
> 
> expected result (tested both on Fedora 5.10.10-200.fc33 and
> 4.18.0-240.10.1.el8_3):
> 
> $ unshare -rmn mount -t sysfs sysfs /sys && echo it works
> it works
> 
> The kernel refuses to mount sysfs even if the user namespace owns the
> network namespace.
> 
> Moving to the kernel for further triage.

Did you have a previous mount of sysfs visible in the mount namespace you were mounting sysfs in?

If not, it won't work by design.
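A hedged sketch (illustrative, not from the original report): the rule Eric describes means a fresh sysfs mount inside an unprivileged user namespace is permitted only when the inherited mount namespace already contains a fully-visible sysfs instance. One unprivileged way to check whether the current mount namespace has a sysfs mount at all is to read the filesystem-type field (the field right after the " - " separator) in /proc/self/mountinfo:

```shell
# Sketch: list the filesystem types mounted in this mount namespace and
# report whether a sysfs instance is among them. In each mountinfo line
# the fstype is the field immediately after the "-" separator.
if awk '{ for (i = 1; i <= NF; i++) if ($i == "-") { print $(i+1); break } }' \
     /proc/self/mountinfo | grep -qx sysfs; then
    echo "a sysfs instance is mounted in this mount namespace"
else
    echo "no sysfs mount visible here"
fi
```

Note that this only confirms a sysfs mount exists; whether it is also fully visible (nothing overmounted on top of it) is a stricter check the kernel performs internally.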

Comment 24 Shane McDonald 2021-02-11 19:27:56 UTC
Hello. Just stumbled across this issue while doing a Google search. I am seeing this on a fresh install of CentOS Stream, fully updated.

Comment 31 Daniel Walsh 2021-02-15 22:16:07 UTC
Alexey, the /sys file system allows containers to look at things like how much memory they have in their cgroup.

Comment 32 Alexey Gladkov 2021-02-16 08:17:35 UTC
(In reply to Daniel Walsh from comment #31)
> Alexey the /sys file system allows containers to look at things like how
> much memory do they have in their cgroup.

sysfs itself does not provide information about available memory. You are using cgroup2 mounted in a subdirectory.

mount -t tmpfs tmpfs /sys
mkdir -p /sys/fs/cgroup
mount -t cgroup2 cgroup2 /sys/fs/cgroup

Can you just do this?

I ask because sysfs is mostly about hardware and loaded kernel modules. It is not clear to me why this information is needed in a container in rootless mode.

Comment 35 Daniel Walsh 2021-02-16 21:28:27 UTC
Alexey, the bottom line is that /sys is considered part of the Linux environment, and lots of container applications will fail to work if the /sys file system is not seen within the container.

Comment 45 Jindrich Novy 2021-02-23 10:58:05 UTC
*** Bug 1931800 has been marked as a duplicate of this bug. ***

Comment 50 Tom Sweeney 2021-02-24 21:48:51 UTC
*** Bug 1732957 has been marked as a duplicate of this bug. ***

Comment 62 Jan Stancek 2021-03-02 07:45:45 UTC
List of commits available on kernel-4.18.0-293.el8:
Related commit: 64153f716607 ("Unbreak mount_capable()")
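As a hedged aside (illustrative, not part of the erratum): on a RHEL 8 host one can sanity-check whether the running kernel is at least the fixed build, kernel-4.18.0-293.el8, by comparing the build number that follows "4.18.0-" in `uname -r`:

```shell
# Sketch: extract the el8 build number from the running kernel release
# and compare it against the first fixed build (293). Assumes the usual
# RHEL 8 kernel naming, e.g. 4.18.0-293.el8.x86_64.
fixed=293
current=$(uname -r | sed -n 's/^4\.18\.0-\([0-9]\{1,\}\).*/\1/p')
if [ -n "$current" ] && [ "$current" -ge "$fixed" ]; then
    echo "running kernel includes the sysfs mount fix"
else
    echo "kernel predates the fix, or is not a RHEL 8 4.18.0 kernel"
fi
```

rpm's own version comparison would be more robust; this plain-shell comparison is only a quick check under the naming assumption above.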

Comment 67 Jindrich Novy 2021-03-05 09:59:31 UTC
*** Bug 1935609 has been marked as a duplicate of this bug. ***

Comment 70 errata-xmlrpc 2021-05-18 14:24:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: kernel security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1578

Comment 71 Red Hat Bugzilla 2023-09-15 00:52:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

