Description of problem:
Cannot access the iSCSI volume mount directory from the pod when the node has SELinux enforcing.

Version-Release number of selected component (if applicable):
openshift v3.2.0.15
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5
On Red Hat Enterprise Linux Server release 7.1 (Maipo) with Docker 1.9.1

How reproducible:
Always

Steps to Reproduce:
1. Create the PV, PVC and pod:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pv-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pvc-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pod-fsgroup.json
2. oc rsh --shell=/bin/sh iscsi
/ $ id
uid=1001920000 gid=0(root) groups=1001920001
/ $ ls -l /mnt/
total 4
drwxrwsr-x    3 root     10019200      4096 Apr 14 09:50 iscsi
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0
3. On the node where this pod is scheduled, run: setenforce 0
4. Repeat step 2:
/ $ ls -l /mnt/iscsi/
total 16
-rw-rwSr--    1 1010     10019200         0 Apr 14 09:50 file1
drwxrwS---    2 root     10019200     16384 Mar 25 04:57 lost+found
5. On the node where this pod is scheduled, run: setenforce 1
6. Repeat step 2:
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0

Actual results:
The mount directory cannot be accessed with SELinux enforcing.

Expected results:
The directory should be accessible with SELinux enforcing.
Additional info:
On the node:
[root@openshift-130 ~]# mount|grep iscsi
/dev/sda on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/sda on /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0
[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi
drwxr-x---. root root system_u:object_r:svirt_sandbox_file_t:s0 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi
[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
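The part of those labels that has to match between the container process and the files is the MCS level (here s0:c4,c7). As a rough, hypothetical illustration in plain shell (no SELinux needed), the level can be split off a label string like this:

```shell
# Split an SELinux label (user:role:type:level) and keep only the level.
# The label string is copied from the ls -lZd output above.
label="system_u:object_r:svirt_sandbox_file_t:s0:c4,c7"
level="${label#*:*:*:}"   # strip user, role and type; keep sensitivity:categories
echo "$level"             # prints: s0:c4,c7
```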
First of all, thank you for this excellent bug report. I was not able to reproduce this: ls -l /mnt/iscsi works for me, and the files listed above appear to have the correct labeling. Perhaps the SELinux issue lies outside of OpenShift. For example, the guide I followed to set up the iSCSI target (https://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide#Create_a_new_target_device) requires the backing file to have the target type label tgtd_var_lib_t. Is that something that might be an issue for you?
In the future, any time you find that setenforce 0 works around a problem, could you please include the output of `ausearch -ts recent`, or attach all of /var/log/audit/audit.log? Do you still have those logs from this time period?
Thank you for the suggestion. I've attached the `ausearch -ts recent` output and /var/log/audit/audit.log. I have now found a way to reproduce this, and why it happens.
1. Prepare the iSCSI target, create the PV and PVC, and set up the iSCSI initiator on the node.
2. Create a first pod that sets a value for pod.spec.securityContext.seLinuxOptions, e.g.:
```
"securityContext": {
    "runAsUser": 101010,
    "fsGroup": 123456,
    "seLinuxOptions": {
        "level": "s0:c13,c2"
    }
},
```
3. Read/write the mount dir from the pod; all operations succeed.
4. Delete the pod, then create a second pod using the same PVC. Make sure this pod does not set pod.spec.securityContext.seLinuxOptions, e.g.:
```
"securityContext": {
    "runAsUser": 101010,
    "fsGroup": 123456
},
```
5. Read/write the mount dir; the issue appears.
This is possibly because the previous pod set an SELinux level on the volume but the second pod has none.
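The failure in step 5 can be sketched in plain shell (a hypothetical illustration only: the volume level is the one from step 2 above, the pod level is an example, and real enforcement compares categories in the kernel, not with a string test):

```shell
# The volume directory keeps the MCS level applied for the first pod,
# but the second pod (no explicit seLinuxOptions) runs at a different level.
volume_level="s0:c13,c2"   # set when the first pod mounted the volume
pod_level="s0:c4,c7"       # example level allocated to the second pod
if [ "$volume_level" = "$pod_level" ]; then
    echo "access allowed"
else
    echo "access denied"   # prints: access denied
fi
```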
Created attachment 1150749 [details] ausearch -ts recent
Created attachment 1150750 [details] audit.log
As described in comment 3, I think this could also happen with rbd, aws, gce and cinder volumes. Is this a valid user scenario?
I see. I don't think this is a bug; we just have to get your settings correct. Can you follow the instructions here (starting at comment 5): https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c5
Could you please post the result of `oc get <pod> -o yaml` (as opposed to the pod descriptor you submitted to the API server)?
Is this still an issue? If so, could you please provide what Paul asked for in comment 9? Thanks!
This is not an issue according to comment 7 and https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c7. With the seLinuxContext type set to MustRunAs in the restricted SCC, comment 3 is not reproducible.
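For reference, the relevant piece of the restricted SCC looks roughly like this (a sketch; field names as in OpenShift 3.x security context constraints):

```
seLinuxContext:
  type: MustRunAs
```

With MustRunAs and no explicit level in the pod spec, the admission controller fills in seLinuxOptions from the project's allocated MCS range, so consecutive pods in the same project receive matching categories and the mismatch from comment 3 does not arise.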
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1933