Bug 1356437 - [RFE] - Volume Security Story - "permission denied" error for writes on app container
Summary: [RFE] - Volume Security Story - "permission denied" error for writes on app container
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: CNS 3.4
Assignee: Michael Adam
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Duplicates: 1386198
Depends On:
Blocks: 1385245
 
Reported: 2016-07-14 06:46 UTC by Bhaskarakiran
Modified: 2017-01-18 21:56 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-18 21:56:16 UTC
Embargoed:
hchiramm: needinfo-


Attachments: none


Links
System ID                                Priority  Status        Summary                                  Last Updated
GitHub heketi/heketi issue 512           None      None          None                                     2016-09-21 03:00:09 UTC
Red Hat Product Errata RHEA-2017:0148    normal    SHIPPED_LIVE  heketi bug fix and enhancement update    2017-01-19 02:53:24 UTC

Description Bhaskarakiran 2016-07-14 06:46:49 UTC
Description of problem:
=======================

Spun up a Fedora container. The volume is mounted in the app container, but trying to touch a file throws a "permission denied" error.

sh-4.3$ df -h .
Filesystem                                         Size  Used Avail Use% Mounted on
10.70.41.220:vol_cf226e24f9424425dd89e1df63f0f5de   50G   67M   50G   1% /mnt/glusterfs
sh-4.3$ pwd
/mnt/glusterfs
sh-4.3$ touch 1
touch: cannot touch '1': Permission denied
sh-4.3$ 

[root@dhcp43-179 ~]# oc get pvc
NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound     glusterfs-cf226e24   50Gi       RWX           33m
[root@dhcp43-179 ~]# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                  REASON    AGE
glusterfs-cf226e24   50Gi       RWX           Bound     aplo/glusterfs-claim             34m
[root@dhcp43-179 ~]# heketi-cli volume list
Id:cf226e24f9424425dd89e1df63f0f5de    Cluster:aac31e2c79ef28e65b08cd9151c1757c    Name:vol_cf226e24f9424425dd89e1df63f0f5de
Id:e1d7825e3ecdef513c394d289fbdea44    Cluster:aac31e2c79ef28e65b08cd9151c1757c    Name:heketidbstorage
[root@dhcp43-179 ~]# 



Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Luis Pabón 2016-07-14 12:01:17 UTC
== This is not a Heketi bug ==

I will look into the OpenShift command to allow this, and it should then be documented.

Comment 6 Luis Pabón 2016-09-21 03:00:10 UTC
The issue is that volumes are created with a UID/GID of 0 (root).  For a container to be able to write to them, the container would also have to run with a root ID, which of course is not a good solution.

Instead, we have determined that Heketi can set a GID on the volume at creation time and set the permissions to 775.  The pod/container can then have its GID set to that of the volume, allowing the container to write to it.
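
As a rough illustration of the proposed flow (assuming the heketi-cli --gid option this change would add; the size and GID values below are made-up examples):

# Create the volume with an explicit group ID; heketi would set
# ownership to root:<gid> and permissions to 775 on the volume:
heketi-cli volume create --size=50 --gid=590

# A pod that lists GID 590 as a supplemental group can then write
# to the volume without running as root.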

Comment 7 Luis Pabón 2016-10-05 15:57:09 UTC
Change committed upstream.

Comment 8 Luis Pabón 2016-10-18 11:53:13 UTC
*** Bug 1386198 has been marked as a duplicate of this bug. ***

Comment 14 Prasanth 2016-11-10 12:17:35 UTC
This does not seem to be working with the latest OCP/CNS builds when logged in as a cluster admin user.

heketi-templates-3.0.0-2.el7rhgs.x86_64
heketi-client-3.0.0-2.el7rhgs.x86_64

openshift v3.4.0.24+52fd77b
kubernetes v1.4.0+776c994

#########
# oc project
Using project "storage-project" on server "https://dhcp47-127.lab.eng.blr.redhat.com:8443".

# oc whoami
admin1

# oc rsh busybox 

/ $ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-34017480-6518d1dcb180eb42620c2778e4a079e12c0ef1da19800fd862dcb06d3820224a
                         10.0G     34.0M      9.9G   0% /
tmpfs                    23.5G         0     23.5G   0% /dev
tmpfs                    23.5G         0     23.5G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /run/secrets
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /dev/termination-log
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/resolv.conf
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/hostname
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
10.70.47.121:vol_5422034cc1391e6373905a3a71df7e64
                         12.0G     33.1M     12.0G   0% /usr/share/busybox
tmpfs                    23.5G     16.0K     23.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    23.5G         0     23.5G   0% /proc/kcore
tmpfs                    23.5G         0     23.5G   0% /proc/timer_stats


/ $ cd /usr/share/busybox

/usr/share/busybox $ touch file1
touch: file1: Permission denied
#########

Comment 16 Humble Chirammal 2016-11-10 12:45:16 UTC
Are we seeing any AVC denials when we write to this mount point?

A few things to check:

*) Are we able to write (as root) to the mount point on the OSE node?
*) If SELinux is enabled, can we disable it and try a write from the pod?

Output to capture:

*) ls -lZ on the OSE node mount point and from the container.
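
For reference, one possible way to run these checks from the OSE node (standard RHEL/OpenShift tooling; the paths and names are illustrative):

# Look for recent AVC denials while reproducing the failed write
ausearch -m avc -ts recent

# Check the SELinux mode, and switch to Permissive only for the test
getenforce
setenforce 0

# SELinux context and ownership on the node-side mount point
ls -lZ /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pv-name>/

# Same check from inside the pod
oc rsh <pod-name> ls -lZ /mnt/glusterfs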

Comment 17 Prasanth 2016-11-14 07:30:05 UTC
(In reply to Humble Chirammal from comment #16)
> Are we seeing any AVC denials when we write to this mount point ?

No

> 
> Few things to check:
> 
> *) Are we able to write ( as root) to the mount point in the OSE node ?

Yes

> *) If selinux is enabled, can we disable it and try a write from the pod?

Tried after setting SELinux to Permissive and the write was still failing.

> 
> Output to capture:
> 
> *) ls -lZ on the OSE node mount point and from the container.

ls -ldZ /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-005056b3bd15/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0    /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-005056b3bd15/


Let me try creating another app pod as mentioned in [1] and see how it goes:

[1] https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/installation-and-configuration/#create-priv-pvc

Comment 18 Humble Chirammal 2016-11-14 10:16:05 UTC
(In reply to Prasanth from comment #17)
> (In reply to Humble Chirammal from comment #16)
> > Are we seeing any AVC denials when we write to this mount point ?
> 
> No
> 
> > 
> > Few things to check:
> > 
> > *) Are we able to write ( as root) to the mount point in the OSE node ?
> 
> Yes
> 
> > *) If selinux is enabled, can we disable it and try a write from the pod?
> 
> Tried after setting SELinux to Permissive and write was still failing
> 
> > 
> > Output to capture:
> > 
> > *) ls -lZ on the OSE node mount point and from the container.
> 
> ls -ldZ
> /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-
> 005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-
> 005056b3bd15/
> drwxr-xr-x. root root system_u:object_r:fusefs_t:s0   
> /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-
> 005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-
> 005056b3bd15/
> 
> 
> Let me try creating another app pod as mentioned in [1] and see how it goes:
> 
> [1]
> https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/
> installation-and-configuration/#create-priv-pvc

Thanks, Prasanth, for providing the info.

Can you please make sure the application pods are running in privileged mode (mostly yes, though)?

Also, can you please run the command below on each OSE node and try to reproduce the issue?

# setsebool -P virt_sandbox_use_fusefs on

Comment 19 Prasanth 2016-11-14 12:21:22 UTC
(In reply to Prasanth from comment #17)
 
> Let me try creating another app pod as mentioned in [1] and see how it goes:
> 
> [1]
> https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/
> installation-and-configuration/#create-priv-pvc


I've re-created my setup and tried spinning up a Fedora app pod in privileged mode using the above sample file, and this time it worked as expected. See below:

##################
metadata:
  annotations:
    openshift.io/scc: privileged


sh-4.3# mount |grep gluster
10.70.47.121:vol_b4c23161b52f78e122719443b8553935 on /mnt/glusteri3 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


sh-4.3# ls
dir1   dir16  dir4   file_0   file_15  file_21  file_28  file_34  file_40  file_47  file_53  file_6   file_66  file_72  file_79  file_85  file_91  file_98
dir10  dir17  dir5   file_1   file_16  file_22  file_29  file_35  file_41  file_48  file_54  file_60  file_67  file_73  file_8   file_86  file_92  file_99
dir11  dir18  dir6   file_10  file_17  file_23  file_3   file_36  file_42  file_49  file_55  file_61  file_68  file_74  file_80  file_87  file_93
dir12  dir19  dir7   file_11  file_18  file_24  file_30  file_37  file_43  file_5   file_56  file_62  file_69  file_75  file_81  file_88  file_94
dir13  dir2   dir8   file_12  file_19  file_25  file_31  file_38  file_44  file_50  file_57  file_63  file_7   file_76  file_82  file_89  file_95
dir14  dir20  dir9   file_13  file_2   file_26  file_32  file_39  file_45  file_51  file_58  file_64  file_70  file_77  file_83  file_9   file_96
dir15  dir3   file2  file_14  file_20  file_27  file_33  file_4   file_46  file_52  file_59  file_65  file_71  file_78  file_84  file_90  file_97
##################
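
For reference, a minimal privileged pod along the lines of the referenced doc could look like the sketch below (the image, claim name, and mount path are placeholders; the project's service account must already have been granted the privileged SCC):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gluster-priv-pod
spec:
  containers:
  - name: app
    image: fedora
    command: ["sleep", "3600"]
    securityContext:
      privileged: true          # requires the privileged SCC
    volumeMounts:
    - name: gluster-vol
      mountPath: /mnt/glusterfs
  volumes:
  - name: gluster-vol
    persistentVolumeClaim:
      claimName: glusterfs-claim
EOF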

Comment 20 Humble Chirammal 2016-11-14 12:32:50 UTC
(In reply to Prasanth from comment #19)
> (In reply to Prasanth from comment #17)
>  
> > Let me try creating another app pod as mentioned in [1] and see how it goes:
> > 
> > [1]
> > https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/
> > installation-and-configuration/#create-priv-pvc
> 
> 
> I've re-created my setup and tried spinning a fedora app in Privileged mode
> using the above sample file and this time it worked as expected. See below:
> 
> ##################
> metadata:
>   annotations:
>     openshift.io/scc: privileged
> 
> 
> sh-4.3# mount |grep gluster
> 10.70.47.121:vol_b4c23161b52f78e122719443b8553935 on /mnt/glusteri3 type
> fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,
> max_read=131072)
> 
> 
> sh-4.3# ls
> dir1   dir16  dir4   file_0   file_15  file_21  file_28  file_34  file_40 
> file_47  file_53  file_6   file_66  file_72  file_79  file_85  file_91 
> file_98
> dir10  dir17  dir5   file_1   file_16  file_22  file_29  file_35  file_41 
> file_48  file_54  file_60  file_67  file_73  file_8   file_86  file_92 
> file_99
> dir11  dir18  dir6   file_10  file_17  file_23  file_3   file_36  file_42 
> file_49  file_55  file_61  file_68  file_74  file_80  file_87  file_93
> dir12  dir19  dir7   file_11  file_18  file_24  file_30  file_37  file_43 
> file_5   file_56  file_62  file_69  file_75  file_81  file_88  file_94
> dir13  dir2   dir8   file_12  file_19  file_25  file_31  file_38  file_44 
> file_50  file_57  file_63  file_7   file_76  file_82  file_89  file_95
> dir14  dir20  dir9   file_13  file_2   file_26  file_32  file_39  file_45 
> file_51  file_58  file_64  file_70  file_77  file_83  file_9   file_96
> dir15  dir3   file2  file_14  file_20  file_27  file_33  file_4   file_46 
> file_52  file_59  file_65  file_71  file_78  file_84  file_90  file_97
> ##################

Thanks for the verification! Can you please move the bug to 'VERIFIED' status?

Comment 21 Prasanth 2016-11-14 12:46:24 UTC
(In reply to Humble Chirammal from comment #18)

> Also, can you please run below  on each OSE node and reproduce the issue?
> 
> # setsebool -P virt_sandbox_use_fusefs on

FYI, the following SELinux boolean is on by default on all the nodes:

# getsebool -a |grep virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on

Comment 22 Prasanth 2016-11-21 12:20:23 UTC
(In reply to Humble Chirammal from comment #20)

> Thanks for the verification! Can you please move the bug to 'VERIFIED'
> status ?

Humble, can you please provide some additional details and also a sample pod spec (with all the parameters required) for the complete verification of this BZ? The ones available in our official doc do not seem to be sufficient for verifying this feature, hence the request.

Comment 23 Humble Chirammal 2016-11-23 11:57:53 UTC
Prasanth, there are different ways to confirm this. One of them: once the volume has been created with a particular GID, use that GID as a supplemental group (SGID) in the pod spec. You can follow the "Defining GlusterFS Volume Access" section of this doc, https://docs.openshift.org/latest/install_config/storage_examples/gluster_example.html#complete-example-using-gusterfs-defining-glusterfs-volume-access, using the GID you specified when creating the volume. Please let me know if you need any help on this.
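
For reference, the pod side of that check could look like the sketch below, assuming the volume was created with --gid=590 (all names and the GID value are illustrative, and the claim is expected to be bound to that volume):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gid-access-pod
spec:
  securityContext:
    supplementalGroups: [590]   # must match the GID used at volume creation
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: gluster-vol
      mountPath: /usr/share/busybox
  volumes:
  - name: gluster-vol
    persistentVolumeClaim:
      claimName: glusterfs-claim
EOF

Inside such a pod, `id` should list 590 among the groups, and a write to the mount should succeed without privileged mode (as later confirmed in comment 26).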

Comment 26 krishnaram Karthick 2016-12-29 08:33:14 UTC
The permission denied issue is no longer seen when creating the volume with a specific GID and providing that ID while creating the pod, as suggested in comment #23.

sh-4.2$ id
uid=1000060000 gid=0(root) groups=0(root),590,1000060000

Moving the bug to verified based on the above results.

openshift cluster version used for verification,
openshift v3.4.0.38
kubernetes v1.4.0+776c994

Comment 28 errata-xmlrpc 2017-01-18 21:56:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0148.html

