Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
Trying to run a container with a custom policy[0] fails because the starting process cannot access devices (such as /dev/null).
The produced AVC is:
type=AVC msg=audit(1678952803.346:303): avc: denied { open } for pid=1905336 comm="virt-launcher-m" path="/dev/null" dev="tmpfs" ino=6 scontext=system_u:system_r:virt_launcher.process:s0:c804,c995 tcontext=system_u:object_r:container_file_t:s0:c804,c995 tclass=chr_file permissive=0
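For quick triage on a live system, `ausearch -m avc` and `audit2allow` are the usual tools; as a self-contained sketch, the relevant fields (source domain, target context, object class) can also be pulled out of the quoted denial with standard text tools:

```shell
# The AVC line quoted above, stored verbatim for inspection.
avc='type=AVC msg=audit(1678952803.346:303): avc: denied { open } for pid=1905336 comm="virt-launcher-m" path="/dev/null" dev="tmpfs" ino=6 scontext=system_u:system_r:virt_launcher.process:s0:c804,c995 tcontext=system_u:object_r:container_file_t:s0:c804,c995 tclass=chr_file permissive=0'

# Source domain of the denied process (the custom virt_launcher.process type)
echo "$avc" | grep -o 'scontext=[^ ]*'
# Target file context (the device node is labeled container_file_t)
echo "$avc" | grep -o 'tcontext=[^ ]*'
# Object class of the denied access
echo "$avc" | grep -o 'tclass=[^ ]*'
```

The denial shows a custom container domain (virt_launcher.process) being refused `open` on a chr_file labeled container_file_t, which is the combination discussed below.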
This seems to originate from https://github.com/containers/container-selinux/commit/24e57848527bcddad025316fa57493926ff1dfbf#diff-1cdb378311a88c884861f2d5996bca97ec20a49b1d66211dced87b2619c17ba9L826
[0] https://github.com/kubevirt/kubevirt/blob/main/cmd/virt-handler/virt_launcher.cil
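The referenced policy is written in CIL. As a rough illustration of the shape of such a policy (block and type names here are hypothetical, modeled loosely on virt_launcher.cil; this is not the actual kubevirt file):

```
; Illustrative CIL sketch of a custom container process type.
; Names are hypothetical -- the real policy is the linked
; virt_launcher.cil, which defines virt_launcher.process.
(block my_launcher
    (type process)                                ; yields my_launcher.process
    (roletype system_r process)                   ; runnable under system_r
    (typeattributeset container_domain (process)) ; join the container domains
)
```

A process labeled with such a custom type runs as a container_domain but, after the change discussed below, no longer inherits full access to container_file_t.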
Version-Release number of selected component (if applicable):
Running OpenShift 4.13.0-0.nightly-2023-03-14-053612, which uses
CentOS Stream CoreOS 413.92.202303061740-0 (Plow)
Additionally:
Name : container-selinux
Epoch : 3
Version : 2.199.0
Release : 1.el9
Name : selinux-policy
Version : 38.1.8
Release : 1.el9
How reproducible:
Run a container/pod with a custom SELinux policy
Steps to Reproduce:
1. Install a custom SELinux policy that declares its own process type (e.g. virt_launcher.cil).
2. Run a container/pod labeled with that custom domain.
3. Observe the container failing to start.
Actual results:
The container process is denied access to /dev/null (and other device nodes labeled container_file_t); an AVC denial is logged.
Expected results:
The container starts and can access its device nodes, as it did on RHEL 8.6.
Additional info:
Let me just mention this was previously working; for example, on 8.6 we are happily running this. My suspicion is that any workload on OCP using a custom policy will face this issue. Are all policies expected to adjust to RHEL 9.2/OCP 4.13 now? In other words, isn't this a regression?
So this was intentional: we wanted to allow two podman instances to run on a system and be isolated from each other, with one using container_file_t and the other using a different file type, both being container_domains.
I guess I can create a new domain, which works just like container_domain, but does not have access to container_file_t.
If I add
+manage_dirs_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
+manage_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
+manage_lnk_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
+manage_chr_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
+manage_blk_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
+manage_sock_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
Then I think that will fix your problem, and still give me the isolation I want.
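For context, the lines above are refpolicy support macros; each `manage_*_pattern(domain, dirtype, filetype)` call expands to a pair of allow rules (a sketch based on the upstream refpolicy file_patterns.spt definitions; the exact permission sets may differ by policy version). For the chr_file case that was denied above, the expansion is roughly:

```
# Rough expansion of
#   manage_chr_files_pattern(svirt_sandbox_domain, container_file_t, container_file_t)
# per the refpolicy support macros (sketch, not the literal compiled output):
allow svirt_sandbox_domain container_file_t:dir rw_dir_perms;
allow svirt_sandbox_domain container_file_t:chr_file manage_chr_file_perms;
```

Since manage_chr_file_perms includes `open`, restoring this pattern for svirt_sandbox_domain covers the `denied { open }` on /dev/null reported above.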
@dwalsh I'm assuming that container-selinux-2.205.0 needs to be a ZeroDay fix for RHEL 8.8 and 9.2. If you disagree, please let me know.
@jnovy heads up and assigning to you for any further packaging or BZ needs.
@travier heads up too.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (container-selinux bug fix and enhancement update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:2206
Comment 39, Red Hat Bugzilla, 2023-09-19 04:34:33 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.