Bug 2165875

Summary: podman cannot start a rootless container with option --privileged and runtime runc
Product: Red Hat Enterprise Linux 8
Reporter: Joy Pu <ypu>
Component: podman
Assignee: Jindrich Novy <jnovy>
Status: CLOSED ERRATA
QA Contact: Alex Jia <ajia>
Severity: high
Docs Contact:
Priority: unspecified
Version: 8.8
CC: bbaude, dwalsh, jligon, jnovy, lsm5, mboddu, mheon, pthomas, tsweeney, umohnani
Target Milestone: rc
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: podman-4.4.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-05-16 08:23:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Joy Pu 2023-01-31 10:42:28 UTC
Description of problem:
When starting a container with --privileged and the runc runtime as a non-root user, podman always reports the following error:
error creating device nodes: open /home/test/.local/share/containers/storage/overlay/cbbd26c3f79944ed80536efacb9bf4866df2dbf79c43827a774df7795329384a/merged/dev/tty: no such device or address


Version-Release number of selected component (if applicable):
podman-4.4.0-0.7.module+el8.8.0+17988+c6d0f56e.x86_64
runc-1.1.4-1.module+el8.8.0+17823+c4e3c815.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Start a rootless container with --privileged:
$ podman run --rm -d --privileged quay.io/libpod/testimage:20221018 top
Trying to pull quay.io/libpod/testimage:20221018...
Getting image source signatures
Copying blob a3ed95caeb02 done  
Copying blob 578f06cc66c5 done  
Copying config f5a99120db done  
Writing manifest to image destination
Storing signatures
Resource limits are not supported and ignored on cgroups V1 rootless systems
Error: OCI runtime error: runc: runc create failed: unable to start container process: error during container init: error creating device nodes: open /home/test/.local/share/containers/storage/overlay/cbbd26c3f79944ed80536efacb9bf4866df2dbf79c43827a774df7795329384a/merged/dev/tty: no such device or address



Actual results:
The command exits with an error message about creating the tty device node.

Expected results:
The command finishes as expected and the container starts.

Additional info:
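The failure is specific to the runc runtime named in the summary. On hosts where a different OCI runtime (such as crun) is the default, reproducing the bug requires selecting runc explicitly, either with --runtime runc on the podman command line or persistently in containers.conf. A minimal sketch of the per-user configuration, assuming the standard rootless config path:

```
# ~/.config/containers/containers.conf
# Force podman to use runc instead of the distribution default runtime.
[engine]
runtime = "runc"
```

With this in place, a plain `podman run --privileged ...` as a non-root user exercises the affected code path without extra flags.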

Comment 1 Matthew Heon 2023-01-31 15:32:46 UTC
Giuseppe, can you take a look at this?

Comment 2 Giuseppe Scrivano 2023-01-31 16:15:18 UTC
PR here: https://github.com/containers/podman/pull/17301

Comment 6 Alex Jia 2023-02-13 06:40:41 UTC
This bug has been verified on podman-4.4.0-1.module+el8.8.0+18060+3f21f2cc.x86_64
with runc-1.1.4-1.module+el8.8.0+18060+3f21f2cc.x86_64.

[root@kvm-02-guest12 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.8 Beta (Ootpa)

[root@kvm-02-guest12 ~]# rpm -q podman runc systemd kernel
podman-4.4.0-1.module+el8.8.0+18060+3f21f2cc.x86_64
runc-1.1.4-1.module+el8.8.0+18060+3f21f2cc.x86_64
systemd-239-71.el8.x86_64
kernel-4.18.0-458.el8.x86_64

[root@kvm-02-guest12 ~]# podman run --rm -d --privileged quay.io/libpod/testimage:20221018 top
Trying to pull quay.io/libpod/testimage:20221018...
Getting image source signatures
Copying blob a3ed95caeb02 done  
Copying blob 578f06cc66c5 done  
Copying config f5a99120db done  
Writing manifest to image destination
Storing signatures
57c3046fbd4241239fc92a23f8e300fe763c9e2bced47a4c8cb7143d0324a201

[root@kvm-02-guest12 ~]# podman ps
CONTAINER ID  IMAGE                              COMMAND     CREATED         STATUS         PORTS       NAMES
57c3046fbd42  quay.io/libpod/testimage:20221018  top         11 seconds ago  Up 12 seconds              youthful_shamir

Comment 8 errata-xmlrpc 2023-05-16 08:23:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2758