Bug 1327154 - Unable to access iSCSI mount directory from pod when SELinux is enforcing
Summary: Unable to access iSCSI mount directory from pod when SELinux is enforcing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Paul Morie
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-14 11:16 UTC by Jianwei Hou
Modified: 2016-09-27 09:38 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-27 09:38:00 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
ausearch -ts recent (20.01 KB, text/x-vhdl), 2016-04-26 06:46 UTC, Jianwei Hou
audit.log (1.31 MB, text/plain), 2016-04-26 06:46 UTC, Jianwei Hou


Links
Red Hat Product Errata RHBA-2016:1933 (normal, SHIPPED_LIVE): Red Hat OpenShift Container Platform 3.3 Release Advisory, last updated 2016-09-27 13:24:36 UTC

Description Jianwei Hou 2016-04-14 11:16:01 UTC
Description of problem:
Cannot access the iSCSI volume mount directory from the pod when the node has SELinux enforcing.

Version-Release number of selected component (if applicable):
openshift v3.2.0.15
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5
On Red Hat Enterprise Linux Server release 7.1 (Maipo) with Docker 1.9.1

How reproducible:
Always

Steps to Reproduce:
1. Create PV, PVC and Pod (a hedged sketch of the pod spec is shown after these steps)
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pv-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pvc-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pod-fsgroup.json

2. oc rsh --shell=/bin/sh iscsi
/ $ id
uid=1001920000 gid=0(root) groups=1001920001
/ $ ls -l /mnt/
total 4
drwxrwsr-x    3 root     10019200      4096 Apr 14 09:50 iscsi 
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0


3. On the node where this pod is scheduled to, run: setenforce 0

4. Repeat step 2
/ $ ls -l /mnt/iscsi/
total 16
-rw-rwSr--    1 1010     10019200         0 Apr 14 09:50 file1
drwxrwS---    2 root     10019200     16384 Mar 25 04:57 lost+found

5. On the node where this pod is scheduled to, run: setenforce 1

6. Repeat step 2
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0
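
For reference, a minimal sketch of what pod-fsgroup.json plausibly contains; the file at the URL above is authoritative, and the image, command, and claim name below are illustrative assumptions (the fsGroup value matches the supplemental group observed in step 2):
```
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "iscsi"
    },
    "spec": {
        "containers": [
            {
                "name": "iscsi",
                "image": "busybox",
                "command": ["sleep", "3600"],
                "volumeMounts": [
                    {
                        "name": "iscsi",
                        "mountPath": "/mnt/iscsi"
                    }
                ]
            }
        ],
        "securityContext": {
            "fsGroup": 1001920001
        },
        "volumes": [
            {
                "name": "iscsi",
                "persistentVolumeClaim": {
                    "claimName": "iscsi-claim"
                }
            }
        ]
    }
}
```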


Actual results:
The mount directory cannot be accessed with SELinux enforcing.

Expected results:
The directory should be accessible with SELinux enforcing.

Additional info:
On node:
[root@openshift-130 ~]# mount|grep iscsi
/dev/sda on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/sda on /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi
drwxr-x---. root root system_u:object_r:svirt_sandbox_file_t:s0 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
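
Note the MCS categories in the labels above: the volume directory carries level s0:c4,c7 while its parent kubernetes.io~iscsi directory is plain s0. A hedged way to see the level a running container was assigned, so it can be compared with the volume label (the container ID is a placeholder):
```
# Print the SELinux process label assigned to the container.
docker inspect --format '{{ .ProcessLabel }}' <container-id>
```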

Comment 1 Sami Wagiaalla 2016-04-19 14:43:55 UTC
First of all thank you for this excellent bug report.

I was not able to reproduce this. 
ls -l /mnt/iscsi works for me.

Also, the files listed above seem to have the correct labeling, so perhaps the SELinux issue is outside of OpenShift.

For example, the guide I followed to set up the iSCSI target (https://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide#Create_a_new_target_device) requires the backing file to carry the tgtd_var_lib_t type label (a sketch of applying it follows below).

Is that something that might be an issue for you?
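
For reference, a hedged sketch of how that label could be applied on the target host; the backing-file directory is an assumption, and the linked guide is authoritative:
```
# Assumed location of the iSCSI backing file; adjust to your setup.
semanage fcontext -a -t tgtd_var_lib_t "/var/lib/tgtd(/.*)?"
restorecon -Rv /var/lib/tgtd
```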

Comment 2 Eric Paris 2016-04-20 15:32:14 UTC
In the future, any time that you find that setenforce 0 works around a problem, can you please include the output of `ausearch -ts recent` or all of /var/log/audit/audit.log as an attachment? Do you still have those logs from this time period?

Comment 3 Jianwei Hou 2016-04-26 06:45:31 UTC
Thank you for the suggestion. I've attached `ausearch -ts recent` and /var/log/audit/audit.log

I've now found how to reproduce it and why it happens.
1. Prepare the iSCSI target, create the PV and PVC, and set up the iSCSI initiator on the node
2. Create a first pod that sets pod.spec.securityContext.seLinuxOptions, e.g.:
```
          "securityContext": {
              "runAsUser": 101010,
              "fsGroup": 123456,
              "seLinuxOptions": {
                 "level": "s0:c13,c2"                                                                                                             
              }
          },
```
3. Read/write the mount dir from the pod; all operations succeed.
4. Delete the pod, then create a second pod using the same PVC; make sure this pod does not set pod.spec.securityContext.seLinuxOptions, e.g.:
```
          "securityContext": {
              "runAsUser": 101010,
              "fsGroup": 123456,
          },
```
5. Read/write the mount dir; the issue appears. Most likely the first pod relabeled the volume with its SELinux level (s0:c13,c2), while the second pod runs without that level, so SELinux denies access. One hedged way to confirm the mismatch is shown below.
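
A sketch of how to compare the two sides (the pod name comes from this report; the pod-uid path segment is a placeholder):
```
# On the master: what seLinuxOptions, if any, did the pod end up with?
oc get pod iscsi -o yaml | grep -A 4 seLinuxOptions

# On the node: which categories is the volume currently labeled with?
ls -lZd /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~iscsi/iscsi
```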

Comment 4 Jianwei Hou 2016-04-26 06:46:15 UTC
Created attachment 1150749 [details]
ausearch -ts recent

Comment 5 Jianwei Hou 2016-04-26 06:46:39 UTC
Created attachment 1150750 [details]
audit.log

Comment 6 Jianwei Hou 2016-04-26 06:52:44 UTC
As described in comment 3, I think this could also happen with rbd, aws, gce and cinder volumes. Is this a valid user scenario?

Comment 7 Sami Wagiaalla 2016-04-26 14:04:18 UTC
I see. I don't think this is a bug; we just have to get your settings correct.
Can you follow the instructions here (starting at comment 5): https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c5

Comment 9 Paul Morie 2016-05-26 21:12:00 UTC
Could you please post the result of `oc get <pod> -o yaml` (as opposed to the pod descriptor you submitted to the API server)?

Comment 10 Andy Goldstein 2016-06-27 20:41:56 UTC
Is this still an issue? If so, could you please provide what Paul asked for in comment 9? Thanks!

Comment 11 Jianwei Hou 2016-07-25 05:48:57 UTC
This is not an issue according to comment 7 and https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c7.
With the seLinuxContext type set to MustRunAs in the restricted SCC (see the excerpt below), comment 3 is not reproducible.
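
For reference, the relevant stanza as it might appear in `oc get scc restricted -o yaml` (a minimal excerpt; other fields omitted). With MustRunAs and no explicit level, OpenShift allocates an MCS level per project, so both pods in comment 3 receive matching labels:
```
kind: SecurityContextConstraints
metadata:
  name: restricted
seLinuxContext:
  type: MustRunAs
```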

Comment 13 errata-xmlrpc 2016-09-27 09:38:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933

