Bug 1692369
| Summary: | SELinux denies containers access to cephfs volume | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jan Safranek <jsafrane> |
| Component: | Containers | Assignee: | Lokesh Mandvekar <lsm5> |
| Status: | CLOSED ERRATA | QA Contact: | weiwei jiang <wjiang> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.1.0 | CC: | aos-bugs, dwalsh, imcleod, jokerman, mmccomas, smilner |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | container-selinux-2.94 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:46:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1694045 | | |
Description Jan Safranek 2019-03-25 13:03:26 UTC

---

Comment 1 Daniel Walsh

Is it possible to mount the CephFS storage with a context mount?

    mount -o context="system_u:object_r:container_file_t:s0"

---

(In reply to Daniel Walsh from comment #1)
> Is it possible to mount the CephFS storage with a context mount?
>
> mount -o context="system_u:object_r:container_file_t:s0"

It is. However, all other shared volumes are handled using an SELinux boolean, and IMO CephFS should do the same.

---

Sure, we can add the boolean, but this is far less secure. Labeling the specific share to be used prevents an escaped container from reading and writing other CephFS shares that are not intended for containers.

---

We're trying to make Kubernetes as unaware of the underlying storage labeling as possible. We just pass the context to CSI, and the volumes are already mounted at that time.

---

Well, at the expense of security.

---

container-selinux 2.94 has support for containers using CephFS:

    container_use_cephfs --> off

---

Checked, and container_use_cephfs --> on now.

    # oc get clusterversion
    NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.1.0-0.nightly-2019-04-22-005054   True        False         23h     Cluster version is 4.1.0-0.nightly-2019-04-22-005054

    # oc get nodes -o wide
    NAME                                         STATUS   ROLES    AGE   VERSION             INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                    KERNEL-VERSION         CONTAINER-RUNTIME
    ip-10-0-135-161.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.135.161   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
    ip-10-0-140-63.eu-west-1.compute.internal    Ready    master   23h   v1.13.4+da48e8391   10.0.140.63    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
    ip-10-0-147-114.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.147.114   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
    ip-10-0-157-194.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.157.194   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
    ip-10-0-163-90.eu-west-1.compute.internal    Ready    worker   23h   v1.13.4+da48e8391   10.0.163.90    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
    ip-10-0-167-208.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.167.208   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8

    # oc debug node/ip-10-0-163-90.eu-west-1.compute.internal
    Starting pod/ip-10-0-163-90eu-west-1computeinternal-debug ...
    To use host binaries, run `chroot /host`
    If you don't see a command prompt, try pressing enter.
    sh-4.2#
    sh-4.2# chroot /host
    sh-4.4# getsebool -a | grep -i ceph
    container_use_cephfs --> on
    sh-4.4# rpm -qa | grep -i container-selinux
    container-selinux-2.94-1.git1e99f1d.module+el8.0.0+2958+4e823551.noarch

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

---

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
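
A quick way to confirm that a failure like the one in the summary is an SELinux denial rather than a CephFS permission problem is to search the audit log for AVC records. This is a sketch, not output from this bug: it assumes the audit tooling is installed and that the share carries the cephfs_t label that the container_use_cephfs boolean governs.

```
# List recent SELinux denials and filter for CephFS-labeled targets.
# cephfs_t is assumed here as the label on the mounted share.
ausearch -m avc -ts recent | grep cephfs_t
```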
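
For concreteness, the context mount proposed in comment #1 would be issued on a node roughly as below. This is a sketch only: the monitor address, client name, secret file path, and mount point are hypothetical placeholders, not values from this bug.

```
# Mount a CephFS share with an explicit SELinux context so everything on it
# is visible to containers as container_file_t.
# mon1.example.com:6789, name=admin, the secretfile path, and /mnt/cephfs are
# all example values.
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:container_file_t:s0"
```

This is what the security argument in the thread hinges on: only a share mounted with this explicit context becomes reachable from containers, while other CephFS shares keep their default label.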
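
The boolean shipped in container-selinux 2.94 defaults to off, as shown above. On a host where it has not yet been flipped, enabling and verifying it uses standard setsebool/getsebool invocations (these commands are not taken from this bug; the verification above used `getsebool -a`):

```
# Persistently enable the boolean (-P writes it to the policy store), then verify.
setsebool -P container_use_cephfs on
getsebool container_use_cephfs   # expected: container_use_cephfs --> on
```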