Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1969996

Summary: SELinux is preventing a non-root user from building a container image
Product: Red Hat Enterprise Linux 9
Reporter: Katerina Koukiou <kkoukiou>
Component: container-selinux
Assignee: Jindrich Novy <jnovy>
Status: CLOSED CURRENTRELEASE
QA Contact: Edward Shen <weshen>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 9.0
CC: dwalsh, jnovy, lsm5, tsweeney, ypu, zpangwin
Target Milestone: beta
Keywords: Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: container-selinux-2.163.0-2.el9 or newer
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-12-07 21:44:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Katerina Koukiou 2021-06-09 15:29:16 UTC
Description of problem:

SELinux is preventing a non-root user from building a container image.

Version-Release number of selected component (if applicable):

container-selinux-2.160.0-2.el9.noarch
podman-3.1.0-0.15.el9.x86_64

Kernel: 5.13.0-0.rc3.25.el9.x86_64

How reproducible:
Always

Steps to Reproduce:
1. [admin@localhost ~]$ cat /tmp/tmp.v5RdMQdHeb/Dockerfile 
FROM quay.io/cockpit/registry:2
RUN ls

2. [admin@localhost ~]$ podman build /tmp/tmp.v5RdMQdHeb
STEP 1: FROM quay.io/cockpit/registry:2
STEP 2: RUN ls
Error relocating /lib/ld-musl-x86_64.so.1: RELRO protection failed: Permission denied
Error relocating /bin/sh: RELRO protection failed: Permission denied
Error: error building at STEP "RUN ls": error while running runtime: exit status 127


Actual results from journal:

Jun 09 11:04:17 localhost systemd[1266]: Started podman-4380.scope.
Jun 09 11:04:18 localhost systemd[1266]: Started libcrun container.
Jun 09 11:04:18 localhost audit[4432]: AVC avc:  denied  { read } for  pid=4432 comm="sh" path="/lib/ld-musl-x86_64.so.1" dev="vda3" ino=59005293 scontext=system_u:system_r:container_t:s0:c46,c88 tcontext=unconfined_u:object_r:data_home_>
Jun 09 11:04:18 localhost kernel: audit: type=1400 audit(1623251058.366:579): avc:  denied  { read } for  pid=4432 comm="sh" path="/lib/ld-musl-x86_64.so.1" dev="vda3" ino=59005293 scontext=system_u:system_r:container_t:s0:c46,c88 tconte>
Jun 09 11:04:18 localhost kernel: audit: type=1300 audit(1623251058.366:579): arch=c000003e syscall=10 success=no exit=-13 a0=7f0d9e60c000 a1=1000 a2=1 a3=7f0d9e60fb40 items=0 ppid=4425 pid=4432 auid=0 uid=1000 gid=1000 euid=1000 suid=10>
Jun 09 11:04:18 localhost audit[4432]: SYSCALL arch=c000003e syscall=10 success=no exit=-13 a0=7f0d9e60c000 a1=1000 a2=1 a3=7f0d9e60fb40 items=0 ppid=4425 pid=4432 auid=0 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=10>
Jun 09 11:04:18 localhost systemd[1266]: crun-buildah-buildah340970296.scope: Deactivated successfully.
Jun 09 11:04:18 localhost audit: PROCTITLE proctitle=2F62696E2F7368002D63006C73
Jun 09 11:04:18 localhost kernel: audit: type=1327 audit(1623251058.366:579): proctitle=2F62696E2F7368002D63006C73
Jun 09 11:04:18 localhost audit[4432]: AVC avc:  denied  { read } for  pid=4432 comm="sh" path="/bin/busybox" dev="vda3" ino=51418384 scontext=system_u:system_r:container_t:s0:c46,c88 tcontext=unconfined_u:object_r:data_home_t:s0 tclass=>
Jun 09 11:04:18 localhost kernel: audit: type=1400 audit(1623251058.370:580): avc:  denied  { read } for  pid=4432 comm="sh" path="/bin/busybox" dev="vda3" ino=51418384 scontext=system_u:system_r:container_t:s0:c46,c88 tcontext=unconfine>
Jun 09 11:04:18 localhost audit[4432]: SYSCALL arch=c000003e syscall=10 success=no exit=-13 a0=5642ea911000 a1=4000 a2=1 a3=7f0d9e60fb40 items=0 ppid=4425 pid=4432 auid=0 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=10>
Jun 09 11:04:18 localhost kernel: audit: type=1300 audit(1623251058.370:580): arch=c000003e syscall=10 success=no exit=-13 a0=5642ea911000 a1=4000 a2=1 a3=7f0d9e60fb40 items=0 ppid=4425 pid=4432 auid=0 uid=1000 gid=1000 euid=1000 suid=10>
Jun 09 11:04:18 localhost audit: PROCTITLE proctitle=2F62696E2F7368002D63006C73
Jun 09 11:04:18 localhost kernel: audit: type=1327 audit(1623251058.370:580): proctitle=2F62696E2F7368002D63006C73
Jun 09 11:04:18 localhost systemd[1266]: podman-4380.scope: Deactivated successfully.
Jun 09 11:04:18 localhost podman[1562]: time="2021-06-09T11:04:18-04:00" level=info msg="APIHandler(02b1fc8c-657a-485d-9308-8dd73255a875) -- GET /v1.12/libpod/images/json? BEGIN"
Jun 09 11:04:18 localhost podman[1562]: time="2021-06-09T11:04:18-04:00" level=info msg="APIHandler(af5953f5-1d21-46c6-b0d8-80199252a3c3) -- GET /v1.12/libpod/images/2d4f4b5309b1e41b4f83ae59b44df6d673ef44433c734b14c1c103ebca82c116/json? >
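As an aside for anyone triaging similar failures: the decisive field in the AVC records above is tcontext, which shows the target files carrying data_home_t rather than a container file type. A minimal, hypothetical extraction sketch (the full context string is taken from the labels shown later in Comment 8, since the journal lines above are truncated):

```shell
#!/bin/sh
# Sample AVC denial line, per the standard audit record format.
line='avc:  denied  { read } for  pid=4432 comm="sh" path="/lib/ld-musl-x86_64.so.1" dev="vda3" ino=59005293 scontext=system_u:system_r:container_t:s0:c46,c88 tcontext=unconfined_u:object_r:data_home_t:s0'

# Pull out the tcontext= field; the target type data_home_t (instead of
# a container_*_file_t type) is what makes the container runtime fail.
tcontext=$(printf '%s\n' "$line" | sed -n 's/.*tcontext=\([^ ]*\).*/\1/p')
echo "$tcontext"            # unconfined_u:object_r:data_home_t:s0

ttype=${tcontext#*:*:}      # strip the SELinux user and role prefixes
echo "${ttype%%:*}"         # data_home_t
```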


Additional info:

This looks very similar to the Fedora issue reported at https://github.com/containers/podman/issues/2025

However, the suggested container-selinux version is not available in the latest compose.

Comment 1 Daniel Walsh 2021-06-09 17:57:07 UTC
This is fixed in container-selinux-2.162.2

Comment 2 Tom Sweeney 2021-06-10 14:39:12 UTC
Setting to Post and assigning to Jindrich for any packaging needs.

Comment 7 Daniel Walsh 2021-06-12 10:44:22 UTC
Could you run `restorecon -R -v /home` and see if the labels change? Then attempt to run podman again.

Comment 8 Edward Shen 2021-06-18 12:16:58 UTC
restorecon works. Sorry I missed this step last time.
Server one without restorecon:
ls -lZd .local/share/containers/storage/overlay*
drwx------. 8 weshen weshen unconfined_u:object_r:data_home_t:s0 4096 Jun 18 07:30 .local/share/containers/storage/overlay
drwx------. 2 weshen weshen unconfined_u:object_r:data_home_t:s0   52 Jun 18 07:30 .local/share/containers/storage/overlay-containers
drwx------. 3 weshen weshen unconfined_u:object_r:data_home_t:s0  116 Jun 18 07:28 .local/share/containers/storage/overlay-images
drwx------. 2 weshen weshen unconfined_u:object_r:data_home_t:s0 4096 Jun 18 07:30 .local/share/containers/storage/overlay-layers
Server two with restorecon:
ls -lZd .local/share/containers/storage/overlay*
drwx------. 9 weshen weshen unconfined_u:object_r:container_ro_file_t:s0 4096 Jun 18 04:41 .local/share/containers/storage/overlay
drwx------. 2 weshen weshen unconfined_u:object_r:data_home_t:s0           52 Jun 18 04:41 .local/share/containers/storage/overlay-containers
drwx------. 4 weshen weshen unconfined_u:object_r:container_ro_file_t:s0  188 Jun 18 04:41 .local/share/containers/storage/overlay-images
drwx------. 2 weshen weshen unconfined_u:object_r:container_ro_file_t:s0 4096 Jun 18 04:41 .local/share/containers/storage/overlay-layers

After restorecon podman can build the image successfully.
[weshen@hp-dl380-gen9-3 ~]$ podman --cgroup-manager cgroupfs build .
STEP 1: FROM quay.io/cockpit/registry:2
STEP 2: RUN ls
bin
dev
entrypoint.sh
etc
home
lib
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
STEP 3: COMMIT
--> 3ccdd792a7e
3ccdd792a7ed4ff965e5849714bc52572d07b834d98585c6a72905b58f3fe773

Comment 9 Daniel Walsh 2021-06-18 12:40:27 UTC
I think if this is a fresh install it should work.  If you did an upgrade it will not.

The Fedora rpm relabels the /home directory in its post-install scriptlet.

Jindrich, can you copy the trigger from the container-selinux spec for Fedora Rawhide into the one for RHEL 9?

If someone is updating from RHEL8, we might need this.

Comment 10 Jindrich Novy 2021-06-21 09:27:57 UTC
https://src.fedoraproject.org/rpms/container-selinux/c/2e560c5e4950e6c22b5acb055dd3769bfbfbc248 is quite a nasty hack, Dan :) If container-selinux is upgraded in 8.4.0 to container-selinux = 2:2.162.1-3 or higher, this trigger will no longer execute while upgrading to RHEL 9. This will absolutely happen, as there will be (at least) container-selinux = 2:2.163.0-1 in 8.4.0.2. The same logic applies to Fedora.

Is there any way to detect this issue in the regular %post scriptlet? In such case we don't need any triggers/version dependencies.

If not, we need to hard-code the container-selinux version in the trigger at RHEL 9 GA, once it is clear which version will ship there, and we must always ensure the version in RHEL 8 stays lower than in RHEL 9, i.e. keeps matching the trigger condition. This is very fragile unless, for example, RHEL 9 ships container-selinux 3.0.0 and RHEL 8 is always kept at 2.x.y with an upstream maintenance branch for it.

Comment 11 Daniel Walsh 2021-06-22 12:32:10 UTC
If I am running a newer version of container-selinux than 2:2.162.1-3, that means the relabel has already happened and never needs to happen again.

Newer users of podman/container-selinux will get the labeling correct. So this is only an upgrade issue from the older container-selinux policies, which did not label the content in the homedir the way newer versions of container-selinux expect.

No reason to worry about different versions in RHEL 9 versus RHEL 8 for this labeling, since I want them to be as consistent as possible.

Comment 12 Tom Sweeney 2021-06-22 14:37:04 UTC
@jnovy are you set with this based on Dan's comment?

Comment 13 Jindrich Novy 2021-06-23 11:12:36 UTC
Ok, I modified the logic so that it runs only once, upon upgrade from RHEL 8 to RHEL 9. To distinguish an upgrade from a regular update within RHEL 9, I bumped the Epoch in RHEL 9. The trigger now looks like this:

%triggerpostun -- container-selinux < 3:2.162.1-3
if %{_sbindir}/selinuxenabled ; then
  echo "Fixing Rootless SELinux labels in homedir"
  %{_sbindir}/restorecon -R /home/*/.local/share/containers/storage/overlay*  2> /dev/null || :
fi

(note the version in the trigger is not really relevant as only the Epoch matters)
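For intuition: RPM compares Epoch before Version, which is why the bumped Epoch alone decides the trigger match. A toy sketch of that ordering (evr_lt is a hypothetical helper, not rpm's real rpmvercmp; use rpmdev-vercmp for authoritative answers):

```shell
#!/bin/sh
# evr_lt E1:V1 E2:V2 -> exit 0 if the first EVR sorts lower.
# Epoch dominates; only when epochs tie do we fall back to a
# version-sort of the version strings (a rough stand-in for rpmvercmp).
evr_lt() {
    e1=${1%%:*} v1=${1#*:}
    e2=${2%%:*} v2=${2#*:}
    if [ "$e1" -ne "$e2" ]; then
        [ "$e1" -lt "$e2" ]
        return
    fi
    [ "$(printf '%s\n%s\n' "$v1" "$v2" | sort -V | head -n1)" = "$v1" ] \
        && [ "$v1" != "$v2" ]
}

# Epoch 2 < 3, so the highest RHEL 8 package still matches the trigger
# condition "container-selinux < 3:2.162.1-3", even though its version
# string (2.163.0) is numerically higher.
evr_lt "2:2.163.0" "3:2.162.1" && echo "RHEL 8 package matches the trigger"
```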

I added the "|| :" at the end of the restorecon line so that %triggerpostun can never fail; otherwise it would break the whole transaction, leaving the system in an inconsistent state.
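The effect of the "|| :" guard can be demonstrated in any POSIX shell, using false as a stand-in for a failing restorecon:

```shell
#!/bin/sh
# Without the guard, a failing command leaves a nonzero exit status,
# which would abort an RPM scriptlet.
false
echo "without guard: $?"

# With "|| :", the no-op builtin ":" runs on failure and exits 0,
# so the scriptlet's status is always success.
false || :
echo "with guard: $?"
```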

I tested the upgrade and update scenario and it works as expected.

Lokesh, do you mind modifying the trigger in Fedora like this so that it actually works there too?

Jindrich

Comment 21 Red Hat Bugzilla 2023-09-15 01:09:34 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days