Description of problem:
=======================
Spun up a Fedora container. The volume is mounted in the app pod, but trying to touch a file fails with a "Permission denied" error:

sh-4.3$ df -h .
Filesystem                                          Size  Used Avail Use% Mounted on
10.70.41.220:vol_cf226e24f9424425dd89e1df63f0f5de    50G   67M   50G   1% /mnt/glusterfs
sh-4.3$ pwd
/mnt/glusterfs
sh-4.3$ touch 1
touch: cannot touch '1': Permission denied
sh-4.3$

[root@dhcp43-179 ~]# oc get pvc
NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound     glusterfs-cf226e24   50Gi       RWX           33m

[root@dhcp43-179 ~]# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                  REASON    AGE
glusterfs-cf226e24   50Gi       RWX           Bound     aplo/glusterfs-claim             34m

[root@dhcp43-179 ~]# heketi-cli volume list
Id:cf226e24f9424425dd89e1df63f0f5de Cluster:aac31e2c79ef28e65b08cd9151c1757c Name:vol_cf226e24f9424425dd89e1df63f0f5de
Id:e1d7825e3ecdef513c394d289fbdea44 Cluster:aac31e2c79ef28e65b08cd9151c1757c Name:heketidbstorage
[root@dhcp43-179 ~]#

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
== This is not a Heketi bug ==

I will look into the OpenShift command to allow this; it should then be documented.
The issue is that volumes are created with a UID/GID of 0 (root). For a container to be able to write to them, the container would also have to run as root, which of course is not a good solution. Instead, we have determined that Heketi can set a GID on a volume at creation time and set the permissions to 775. The pod/container can then have its supplemental GID set to that of the volume, allowing the container to write to the volume.
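To make the mechanism concrete, here is a minimal sketch of that workflow; the GID value 590 is only an illustrative example:

# Create the volume with a specific group ID; with this change Heketi sets
# root:GID ownership and mode 775 on the volume root:
heketi-cli volume create --size=50 --gid=590

# A pod that runs with 590 as a supplemental group can then write to the
# volume without running as root (see the pod-spec sketch later in this bug).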
Change committed upstream.
*** Bug 1386198 has been marked as a duplicate of this bug. ***
This does not seem to be working with the latest OCP/CNS builds as a cluster admin user.

heketi-templates-3.0.0-2.el7rhgs.x86_64
heketi-client-3.0.0-2.el7rhgs.x86_64
openshift v3.4.0.24+52fd77b
kubernetes v1.4.0+776c994

#########
# oc project
Using project "storage-project" on server "https://dhcp47-127.lab.eng.blr.redhat.com:8443".

# oc whoami
admin1

# oc rsh busybox
/ $ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-34017480-6518d1dcb180eb42620c2778e4a079e12c0ef1da19800fd862dcb06d3820224a
                         10.0G     34.0M      9.9G   0% /
tmpfs                    23.5G         0     23.5G   0% /dev
tmpfs                    23.5G         0     23.5G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /run/secrets
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /dev/termination-log
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/resolv.conf
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/hostname
/dev/mapper/rhel_dhcp47--127-root
                         50.0G      2.2G     47.8G   4% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
10.70.47.121:vol_5422034cc1391e6373905a3a71df7e64
                         12.0G     33.1M     12.0G   0% /usr/share/busybox
tmpfs                    23.5G     16.0K     23.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    23.5G         0     23.5G   0% /proc/kcore
tmpfs                    23.5G         0     23.5G   0% /proc/timer_stats

/ $ cd /usr/share/busybox
/usr/share/busybox $ touch file1
touch: file1: Permission denied
#########
Are we seeing any AVC denials when we write to this mount point?

A few things to check:

*) Are we able to write (as root) to the mount point on the OSE node?
*) If SELinux is enabled, can we disable it and try a write from the pod?

Output to capture:

*) ls -lZ on the OSE node mount point and from the container.

(See the command sketch below.)
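For reference, the checks above map roughly to the following commands on the OSE node; the pod UID and PV name are placeholders:

# Look for recent AVC denials (requires auditd):
ausearch -m avc -ts recent

# Temporarily switch SELinux to permissive mode, then retry the write from the pod:
setenforce 0
getenforce

# Capture SELinux contexts on the node-side mount point and inside the container:
ls -ldZ /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/<pv-name>/
oc rsh <pod-name> ls -ldZ /mnt/glusterfs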
(In reply to Humble Chirammal from comment #16)
> Are we seeing any AVC denials when we write to this mount point ?

No

> *) Are we able to write ( as root) to the mount point in the OSE node ?

Yes

> *) If selinux is enabled, can we disable it and try a write from the pod?

Tried after setting SELinux to Permissive and the write was still failing.

> *) ls -lZ on the OSE node mount point and from the container.

ls -ldZ /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-005056b3bd15/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/87a9659c-a72d-11e6-b89f-005056b3bd15/volumes/kubernetes.io~glusterfs/pvc-f134fbf5-a72b-11e6-b89f-005056b3bd15/

Let me try creating another app pod as mentioned in [1] and see how it goes:

[1] https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/installation-and-configuration/#create-priv-pvc
(In reply to Prasanth from comment #17)
> > Are we seeing any AVC denials when we write to this mount point ?
>
> No
> [...]
> Tried after setting SELinux to Permissive and write was still failing
> [...]
> Let me try creating another app pod as mentioned in [1] and see how it goes:

Thanks, Prasanth, for providing the info.

Can you please make sure the application pods are running in privileged mode (most likely yes, though)?

Also, can you please run the command below on each OSE node and then try to reproduce the issue?

# setsebool -P virt_sandbox_use_fusefs on
(In reply to Prasanth from comment #17)
> Let me try creating another app pod as mentioned in [1] and see how it goes:
>
> [1] https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/installation-and-configuration/#create-priv-pvc

I've re-created my setup and tried spinning up a fedora app pod in privileged mode using the above sample file, and this time it worked as expected. See below:

##################
metadata:
  annotations:
    openshift.io/scc: privileged

sh-4.3# mount |grep gluster
10.70.47.121:vol_b4c23161b52f78e122719443b8553935 on /mnt/glusteri3 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

sh-4.3# ls
dir1   dir16  dir4   file_0   file_15  file_21  file_28  file_34  file_40  file_47  file_53  file_6   file_66  file_72  file_79  file_85  file_91  file_98
dir10  dir17  dir5   file_1   file_16  file_22  file_29  file_35  file_41  file_48  file_54  file_60  file_67  file_73  file_8   file_86  file_92  file_99
dir11  dir18  dir6   file_10  file_17  file_23  file_3   file_36  file_42  file_49  file_55  file_61  file_68  file_74  file_80  file_87  file_93
dir12  dir19  dir7   file_11  file_18  file_24  file_30  file_37  file_43  file_5   file_56  file_62  file_69  file_75  file_81  file_88  file_94
dir13  dir2   dir8   file_12  file_19  file_25  file_31  file_38  file_44  file_50  file_57  file_63  file_7   file_76  file_82  file_89  file_95
dir14  dir20  dir9   file_13  file_2   file_26  file_32  file_39  file_45  file_51  file_58  file_64  file_70  file_77  file_83  file_9   file_96
dir15  dir3   file2  file_14  file_20  file_27  file_33  file_4   file_46  file_52  file_59  file_65  file_71  file_78  file_84  file_90  file_97
##################
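For anyone reproducing this, a minimal privileged pod spec along the lines of the referenced doc would look roughly like the following; the pod name, image, and claim name are illustrative, not taken from this setup:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-priv-pod          # hypothetical name
spec:
  containers:
  - name: app
    image: fedora                 # any image with a shell
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true            # requires access to the 'privileged' SCC
    volumeMounts:
    - name: glustervol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: glusterfs-claim  # hypothetical claim name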
(In reply to Prasanth from comment #19)
> I've re-created my setup and tried spinning up a fedora app pod in privileged
> mode using the above sample file, and this time it worked as expected.
> [...]

Thanks for the verification! Can you please move the bug to 'VERIFIED' status?
(In reply to Humble Chirammal from comment #18)
> Also, can you please run below on each OSE node and reproduce the issue?
>
> # setsebool -P virt_sandbox_use_fusefs on

FYI, the following SELinux boolean is on by default on all the nodes:

# getsebool -a |grep virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on
(In reply to Humble Chirammal from comment #20)
> Thanks for the verification! Can you please move the bug to 'VERIFIED'
> status ?

Humble, can you please provide some additional details and also a sample pod spec (with all the required parameters) for the complete verification of this BZ? The ones available in our official doc do not seem to be sufficient for verifying this feature, hence the request.
Prasanth, there are different ways to confirm this. One of them: once the volume has been created with a particular GID, use that GID in the pod's supplemental-group (SGID) spec. You can follow the "Defining GlusterFS Volume Access" section of the doc below, using the GID you specified when creating the volume:

https://docs.openshift.org/latest/install_config/storage_examples/gluster_example.html#complete-example-using-gusterfs-defining-glusterfs-volume-access

A sketch of such a pod spec follows. Please let me know if you need any help with this.
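A minimal sketch of that pod spec, assuming the volume was created with GID 590 and is bound through a claim named glusterfs-claim (both illustrative values):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-gid-pod           # hypothetical name
spec:
  securityContext:
    supplementalGroups: [590]     # must match the GID the volume was created with
  containers:
  - name: app
    image: fedora
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: glustervol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: glusterfs-claim  # hypothetical claim bound to the GID-590 volume

With this in place, `id` inside the container should list 590 among the process's groups, and writes to the mount should succeed without root or privileged mode.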
The "Permission denied" issue is no longer seen when the volume is created with a specific GID and that GID is supplied in the pod spec, as suggested in comment #23.

sh-4.2$ id
uid=1000060000 gid=0(root) groups=0(root),590,1000060000

Moving the bug to VERIFIED based on the above results.

OpenShift cluster version used for verification:
openshift v3.4.0.38
kubernetes v1.4.0+776c994
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0148.html