Not sure if this is the right component; feel free to reassign to selinux-policy.

When I start an OCP 4.1 pod with a CephFS volume, the pod can't access the volume:

ls: can't open '/vol1': Permission denied

Corresponding AVC:

Mar 25 12:02:31 ip-10-0-174-105 kernel: audit: type=1400 audit(1553515351.425:5): avc: denied { write } for pid=73814 comm="sh" name="/" dev="ceph" ino=1 scontext=system_u:system_r:container_t:s0:c2,c23 tcontext=system_u:object_r:cephfs_t:s0 tclass=dir permissive=0

Version-Release number of selected component (if applicable):
RHCOS 410.8.20190322.0
selinux-policy-3.14.1-61.el8.noarch
cri-o-1.12.9-1.rhaos4.0.gitaac6be5.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Get a CephFS volume (not Ceph RBD!)
2. Use it in a pod (a sketch of such a pod follows below)
3. Read the volume in the pod

Additional info:
CephFS is a shared filesystem similar to Gluster or NFS. In RHEL 8, CephFS is handled by the "ceph" kernel module; it might have been FUSE in RHEL 7. IMO, it should behave the same as other shared filesystems like Gluster or NFS - there should be a boolean like virt_use_ceph (or _cephfs?), which would be enabled by default on RHCOS.
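For reference, step 2 with the in-tree cephfs volume plugin looks roughly like the following; the pod name, monitor address, and secret name are illustrative placeholders, not values taken from this report:

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test
spec:
  containers:
  - name: test
    image: busybox
    # step 3: read the volume, then stay up so the result can be inspected
    command: ["sh", "-c", "ls /vol1; sleep 3600"]
    volumeMounts:
    - name: vol1
      mountPath: /vol1
  volumes:
  - name: vol1
    cephfs:
      monitors:
      - 192.168.1.1:6789
      user: admin
      secretRef:
        name: ceph-secret
EOF

With the versions listed above, the ls in the container fails with the AVC denial shown in the description.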
Is it possible to mount the CephFS storage with a context mount?

mount -o context="system_u:object_r:container_file_t:s0"
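A full invocation for a kernel CephFS mount would look something like this; the monitor address, mount point, and secret file path are placeholders:

mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:container_file_t:s0"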
(In reply to Daniel Walsh from comment #1)
> Is it possible to mount the CephFS storage with a context mount?
>
> mount -o context="system_u:object_r:container_file_t:s0"

It is; however, all other shared volumes are handled using an SELinux boolean, and IMO CephFS should do the same.
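For comparison, the booleans covering other shared filesystems can be listed on a node; exact boolean names vary by policy version, so this is just the query rather than a claim about which ones exist here:

getsebool -a | grep -E 'nfs|fusefs|gluster|ceph'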
Sure, we can add the boolean, but this is far less secure. Specifically labeling the share to use prevents an escaped container from reading and writing other CephFS shares not intended for containers.
We're trying to make Kubernetes as unaware of the underlying storage labeling as possible. We just pass the context to CSI, and the volumes are already mounted at that time.
Well, at the expense of security. container-selinux 2.94 has support for containers using CephFS via a boolean, which defaults to off:

container_use_cephfs --> off
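If the default stays off, it can be flipped on an individual node with the standard tooling; a minimal sketch:

# make the change persistent across reboots and verify it
setsebool -P container_use_cephfs on
getsebool container_use_cephfs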
Checked and container_use_cephfs --> on now.

# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-04-22-005054   True        False         23h     Cluster version is 4.1.0-0.nightly-2019-04-22-005054

# oc get nodes -o wide
NAME                                         STATUS   ROLES    AGE   VERSION             INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                   KERNEL-VERSION         CONTAINER-RUNTIME
ip-10-0-135-161.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.135.161   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-140-63.eu-west-1.compute.internal    Ready    master   23h   v1.13.4+da48e8391   10.0.140.63    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-147-114.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.147.114   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-157-194.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.157.194   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-163-90.eu-west-1.compute.internal    Ready    worker   23h   v1.13.4+da48e8391   10.0.163.90    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-167-208.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.167.208   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8

# oc debug node/ip-10-0-163-90.eu-west-1.compute.internal
Starting pod/ip-10-0-163-90eu-west-1computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2#
sh-4.2# chroot /host
sh-4.4# getsebool -a | grep -i ceph
container_use_cephfs --> on
sh-4.4# rpm -qa | grep -i container-selinux
container-selinux-2.94-1.git1e99f1d.module+el8.0.0+2958+4e823551.noarch
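A functional check from the pod side would be along these lines; the pod and path names are the illustrative ones from the sketch earlier in this report, not from this verification run:

oc exec cephfs-test -- ls /vol1
oc exec cephfs-test -- touch /vol1/testfile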
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758