Bug 1327154 - Unable to access iscsi mount directory from pod when SELinux is enforcing
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Paul Morie
QA Contact: Jianwei Hou
Keywords: Regression
Depends On:
Blocks:

Reported: 2016-04-14 07:16 EDT by Jianwei Hou
Modified: 2016-09-27 05:38 EDT
CC: 7 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-09-27 05:38:00 EDT

Attachments
ausearch -ts recent (20.01 KB, text/x-vhdl), 2016-04-26 02:46 EDT, Jianwei Hou
audit.log (1.31 MB, text/plain), 2016-04-26 02:46 EDT, Jianwei Hou


External Trackers
Red Hat Product Errata RHBA-2016:1933 (normal, SHIPPED_LIVE): Red Hat OpenShift Container Platform 3.3 Release Advisory, last updated 2016-09-27 09:24:36 EDT

Description Jianwei Hou 2016-04-14 07:16:01 EDT
Description of problem:
Cannot access the iSCSI volume mount directory from the pod when the node has SELinux enforcing.

Version-Release number of selected component (if applicable):
openshift v3.2.0.15
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5
On Red Hat Enterprise Linux Server release 7.1 (Maipo) with Docker 1.9.1

How reproducible:
Always

Steps to Reproduce:
1. Create the PV, PVC, and pod:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pv-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pvc-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/iscsi/pod-fsgroup.json

2. oc rsh --shell=/bin/sh iscsi
/ $ id
uid=1001920000 gid=0(root) groups=1001920001
/ $ ls -l /mnt/
total 4
drwxrwsr-x    3 root     10019200      4096 Apr 14 09:50 iscsi 
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0


3. On the node where this pod is scheduled, run: setenforce 0

4. Repeat step 2
/ $ ls -l /mnt/iscsi/
total 16
-rw-rwSr--    1 1010     10019200         0 Apr 14 09:50 file1
drwxrwS---    2 root     10019200     16384 Mar 25 04:57 lost+found

5. On the node where this pod is scheduled, run: setenforce 1

6. Repeat step 2
/ $ ls -l /mnt/iscsi/
ls: can't open '/mnt/iscsi/': Permission denied
total 0
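
For completeness, the pod's effective security context can be read back from the API object. A quick check (the pod name `iscsi` matches the rsh target in step 2; the grep context width is arbitrary):
```
# Read back the securityContext actually applied to the pod from step 1.
# "iscsi" is the pod name used with oc rsh above; -A 4 is an arbitrary
# amount of trailing context.
oc get pod iscsi -o yaml | grep -A 4 securityContext
```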


Actual results:
The mount directory cannot be accessed while SELinux is enforcing.

Expected results:
The directory should be accessible while SELinux is enforcing.

Additional info:
On node:
[root@openshift-130 ~]# mount|grep iscsi
/dev/sda on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/sda on /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/192.168.0.225:3260-iqn.2015-06.world.server:storage.target00-lun-0

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi
drwxr-x---. root root system_u:object_r:svirt_sandbox_file_t:s0 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi

[root@openshift-130 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
drwxrwsr-x. root 1001920001 system_u:object_r:svirt_sandbox_file_t:s0:c4,c7 /var/lib/origin/openshift.local.volumes/pods/8b48a151-022e-11e6-a905-fa163e38609e/volumes/kubernetes.io~iscsi/iscsi/
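
For reference, the pv-rwo.json used in step 1 is not reproduced in this report. A minimal sketch of what it likely defines, with the targetPortal, IQN, LUN, and filesystem taken from the node's mount output above (the PV name, capacity, and access mode are assumptions):
```
# Hypothetical reconstruction of the iSCSI PV; target details come from the
# node's mount line above, everything else is assumed.
cat <<'EOF' | oc create -f -
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "iscsi" },
  "spec": {
    "capacity": { "storage": "1Gi" },
    "accessModes": [ "ReadWriteOnce" ],
    "iscsi": {
      "targetPortal": "192.168.0.225:3260",
      "iqn": "iqn.2015-06.world.server:storage.target00",
      "lun": 0,
      "fsType": "ext4"
    }
  }
}
EOF
```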
Comment 1 Sami Wagiaalla 2016-04-19 10:43:55 EDT
First of all thank you for this excellent bug report.

I was not able to reproduce this. 
ls -l /mnt/iscsi works for me.

And the files listed above seem to have the correct labeling. Perhaps the SELinux issue is outside of OpenShift.

For example, the guide I followed to set up the iSCSI target (https://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide#Create_a_new_target_device) requires the backing file to carry the tgtd_var_lib_t type label.

Is that something that might be an issue for you?
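
For reference, the label on the target's backing file can be checked and set with standard SELinux tooling; the path below is a placeholder, not one taken from this report:
```
# Check the current SELinux context of the backing file (placeholder path).
ls -Z /var/lib/tgtd/backing-file.img
# Temporarily relabel it to the type the quickstart guide expects; chcon does
# not survive a filesystem relabel, so a semanage fcontext rule plus
# restorecon is the persistent alternative.
chcon -t tgtd_var_lib_t /var/lib/tgtd/backing-file.img
```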
Comment 2 Eric Paris 2016-04-20 11:32:14 EDT
In the future, any time that you find that setenforce 0 works around a problem, can you please include the output of `ausearch -ts recent` or all of /var/log/audit/audit.log as an attachment? Do you still have those logs from this time period?
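
For reference, the denials themselves can be narrowed to AVC records with standard auditd tooling, e.g.:
```
# Only AVC (SELinux denial) records from the recent window.
ausearch -m avc -ts recent
# Coarser sweep: everything from today that mentions a denial.
ausearch -ts today | grep denied
```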
Comment 3 Jianwei Hou 2016-04-26 02:45:31 EDT
Thank you for the suggestion. I've attached `ausearch -ts recent` and /var/log/audit/audit.log

Now I've found a way to reproduce it and why it happens.
1. Prepare the iSCSI target, create the PV and PVC, and set up the iSCSI initiator on the node
2. Create a first pod which has a value for pod.spec.securityContext.seLinuxOptions, e.g.:
```
          "securityContext": {
              "runAsUser": 101010,
              "fsGroup": 123456,
              "seLinuxOptions": {
                 "level": "s0:c13,c2"                                                                                                             
              }
          },
```
3. Read/write the mount dir from the pod; all operations succeed.
4. Delete the pod, then create a second pod using the same PVC; make sure this pod does not set pod.spec.securityContext.seLinuxOptions, e.g.:
```
          "securityContext": {
              "runAsUser": 101010,
              "fsGroup": 123456,
          },
```
5. Read/write the mount dir; the issue appears. This is likely because the previous pod set an SELinux level on the volume but the second pod has none.
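
One way to observe the mismatch on the node is to compare the volume directory's label between the two pods (the path follows the pattern from the description's additional info; the pod UID is a placeholder):
```
# After step 2 the volume directory carries the first pod's MCS categories:
ls -lZd /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~iscsi/iscsi
# e.g. system_u:object_r:svirt_sandbox_file_t:s0:c13,c2
# The second pod is admitted without seLinuxOptions, so its process runs with
# different (or no) MCS categories and SELinux denies access to s0:c13,c2 files.
```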
Comment 4 Jianwei Hou 2016-04-26 02:46 EDT
Created attachment 1150749 [details]
ausearch -ts recent
Comment 5 Jianwei Hou 2016-04-26 02:46 EDT
Created attachment 1150750 [details]
audit.log
Comment 6 Jianwei Hou 2016-04-26 02:52:44 EDT
As described in comment 3, I think this could also happen with rbd, aws, gce, and cinder volumes. Is this a valid user scenario?
Comment 7 Sami Wagiaalla 2016-04-26 10:04:18 EDT
I see. I don't think this is a bug; we just have to get your settings correct.
Can you follow the instructions here (starting at comment 5)? https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c5
Comment 9 Paul Morie 2016-05-26 17:12:00 EDT
Could you please post the result of `oc get <pod> -o yaml` (as opposed to the pod descriptor you submitted to the API server)?
Comment 10 Andy Goldstein 2016-06-27 16:41:56 EDT
Is this still an issue? If so, could you please provide what Paul asked for in comment 9? Thanks!
Comment 11 Jianwei Hou 2016-07-25 01:48:57 EDT
This is not an issue according to comment 7 and https://bugzilla.redhat.com/show_bug.cgi?id=1326059#c7.
With the seLinuxContext type set to MustRunAs in the restricted SCC, comment 3 is not reproducible.
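
For anyone hitting the same thing, the strategy can be confirmed on the cluster; the commented fragment below is a sketch of what MustRunAs looks like, not output captured from this bug:
```
# Inspect the restricted SCC's SELinux strategy.
oc get scc restricted -o yaml
# Expected fragment (sketch):
#   seLinuxContext:
#     type: MustRunAs
```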
Comment 13 errata-xmlrpc 2016-09-27 05:38:00 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933
