Bug 1692369 - SELinux denies containers access to cephfs volume [NEEDINFO]
Summary: SELinux denies containers access to cephfs volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.1.0
Assignee: Lokesh Mandvekar
QA Contact: weiwei jiang
URL:
Whiteboard:
Depends On:
Blocks: 1694045
 
Reported: 2019-03-25 13:03 UTC by Jan Safranek
Modified: 2019-06-04 10:46 UTC
CC List: 6 users

Fixed In Version: container-selinux-2.94
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:25 UTC
Target Upstream Version:
smilner: needinfo? (imcleod)




Links:
Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:46:33 UTC)

Internal Links: 1808187

Description Jan Safranek 2019-03-25 13:03:26 UTC
I'm not sure if this is the right component; feel free to reassign to selinux-policy.

When I start an OCP 4.1 pod with a CephFS volume, the pod can't access the volume:

  ls: can't open '/vol1': Permission denied

Corresponding AVC:

Mar 25 12:02:31 ip-10-0-174-105 kernel: audit: type=1400 audit(1553515351.425:5): avc:  denied  { write } for  pid=73814 comm="sh" name="/" dev="ceph" ino=1 scontext=system_u:system_r:container_t:s0:c2,c23 tcontext=system_u:object_r:cephfs_t:s0 tclass=dir permissive=0
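
For reference, the denial can be confirmed on the affected node with the standard audit tooling (a generic sketch; it assumes auditd and the policycoreutils audit2allow utility are available on the host):

  # List recent AVC denials recorded by the audit subsystem
  ausearch -m AVC -ts recent

  # Summarize which policy rules the denials would require (diagnostic only)
  ausearch -m AVC -ts recent | audit2allow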


Version-Release number of selected component (if applicable):
RHCOS 410.8.20190322.0
selinux-policy-3.14.1-61.el8.noarch
cri-o-1.12.9-1.rhaos4.0.gitaac6be5.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Get a CephFS volume (not Ceph RBD!)
2. Use it in a pod
3. Read the volume in the pod

Additional info:
CephFS is a shared filesystem similar to Gluster or NFS. In RHEL 8, CephFS is handled by the "ceph" kernel module; it may have been FUSE-based in RHEL 7.

IMO, it should behave the same as other shared filesystems like Gluster or NFS - there should be a boolean like virt_use_ceph (or _cephfs?), which would be enabled by default on RHCOS.
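
For context, a minimal reproducer pod could look like the sketch below; the image, monitor address, and secret name are placeholders, not values taken from this report:

cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test
spec:
  containers:
  - name: sh
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol1
      mountPath: /vol1
  volumes:
  - name: vol1
    cephfs:
      monitors:
      - 192.0.2.10:6789
      user: admin
      secretRef:
        name: ceph-secret
EOF

# Reading the volume from inside the pod triggers the denial:
oc exec cephfs-test -- ls /vol1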

Comment 1 Daniel Walsh 2019-03-26 10:48:34 UTC
Is it possible to mount the CephFS storage with a context mount?


mount -o context="system_u:object_r:container_file_t:s0"
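
(For reference, a complete invocation might look like the following; the monitor address, secret file, and mount point are placeholders:)

mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:container_file_t:s0"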

Comment 2 Jan Safranek 2019-03-26 15:03:08 UTC
(In reply to Daniel Walsh from comment #1)
> Is it possible to mount the CephFS storage with a context mount?
> 
> 
> mount -o context="system_u:object_r:container_file_t:s0"

It is; however, all other shared volumes are handled using an SELinux boolean, and IMO CephFS should do the same.

Comment 3 Daniel Walsh 2019-03-27 10:28:03 UTC
Sure, we can add the boolean, but this is far less secure.

Specifically labeling the individual share prevents an escaped container from reading and writing other CephFS shares not intended for containers.

Comment 4 Jan Safranek 2019-03-28 15:50:15 UTC
We're trying to keep Kubernetes as unaware of the underlying storage labeling as possible. We just pass the context to CSI, and the volumes are already mounted at that time.

Comment 5 Daniel Walsh 2019-03-28 23:55:56 UTC
Well, at the expense of security.

container-selinux 2.94 has support for containers using CephFS.

container_use_cephfs --> off
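
(For reference, on a node with this container-selinux build, the boolean can be enabled persistently with the standard SELinux tooling; a generic sketch, not a step taken in this bug:)

setsebool -P container_use_cephfs on
getsebool container_use_cephfs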

Comment 12 weiwei jiang 2019-04-23 02:34:58 UTC
Checked and container_use_cephfs --> on now.

# oc get clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-04-22-005054   True        False         23h     Cluster version is 4.1.0-0.nightly-2019-04-22-005054

# oc get nodes -o wide 
NAME                                         STATUS   ROLES    AGE   VERSION             INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                   KERNEL-VERSION         CONTAINER-RUNTIME
ip-10-0-135-161.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.135.161   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-140-63.eu-west-1.compute.internal    Ready    master   23h   v1.13.4+da48e8391   10.0.140.63    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-147-114.eu-west-1.compute.internal   Ready    worker   23h   v1.13.4+da48e8391   10.0.147.114   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-157-194.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.157.194   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-163-90.eu-west-1.compute.internal    Ready    worker   23h   v1.13.4+da48e8391   10.0.163.90    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8
ip-10-0-167-208.eu-west-1.compute.internal   Ready    master   23h   v1.13.4+da48e8391   10.0.167.208   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190418.1 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.6-4.rhaos4.1.gita4b40b7.el8

# oc debug node/ip-10-0-163-90.eu-west-1.compute.internal
Starting pod/ip-10-0-163-90eu-west-1computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# 
sh-4.2# chroot /host
sh-4.4# getsebool -a |grep -i ceph 
container_use_cephfs --> on
sh-4.4# rpm -qa|grep -i container-selinux
container-selinux-2.94-1.git1e99f1d.module+el8.0.0+2958+4e823551.noarch

Comment 14 errata-xmlrpc 2019-06-04 10:46:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

