Description of problem:

Provision a GlusterFS volume; a gid is annotated on the PV. The container also has the gid added to its supplemental groups, but the volume is still not accessible unless the pod is privileged.

Version-Release number of selected component (if applicable):

openshift v3.4.0.33+71c05b2
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:

Always

Steps to Reproduce:

1. Create a StorageClass; by default it sets gidMin to 2000.

```
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterprovisioner
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "<hidden>"
  restuser: "xxx"
  restuserkey: "xxx"
```

2. Create a PVC using the StorageClass.

```
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "glusterprovisioner"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
```

3. Create a Pod.

```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "gluster1",
    "labels": {
      "name": "gluster"
    }
  },
  "spec": {
    "containers": [{
      "name": "gluster",
      "image": "aosqe/hello-openshift",
      "imagePullPolicy": "IfNotPresent",
      "volumeMounts": [{
        "mountPath": "/mnt/gluster",
        "name": "gluster"
      }]
    }],
    "securityContext": {
      "seLinuxContext": {
        "level": "s0:c13,c12"
      }
    },
    "volumes": [{
      "name": "gluster",
      "persistentVolumeClaim": {
        "claimName": "glusterc"
      }
    }]
  }
}
```

4. SSH to the container and write to the volume mount directory.

Actual results:

After step 2: a PV was provisioned with gid "2001" annotated:

```
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2001"
```

After step 3: the pod state became Running.

After step 4: the step failed; the volume mount directory does not allow writes for group "2001":

```
/ $ cd /mnt/gluster/
/mnt/gluster $ id
uid=1000090000 gid=0(root) groups=2001,1000090000
/mnt/gluster $ touch file
touch: file: Permission denied
/mnt/gluster $ ls -ld .
drwxr-xr-x 4 root root 4096 Dec 7 07:08 .
```
On the node:

```
[root@ip-172-18-13-113 ~]# mount|grep pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4
172.18.4.10:vol_25fa0c33091f5964b032ac7374f79783 on /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-13-113 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4/
```

Expected results:

Should be able to read and write the directory.

Additional info:
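The denial in the transcripts follows from the ordinary POSIX permission check: the mount point is mode 755 and owned by root:root, so the supplemental gid 2001 never matches the owning group, and the matching gid 0 only gets the r-x group bits. The following is an illustrative model only, not OpenShift code; the function name and its simplifications (no root override, no ACLs, no capabilities) are mine.

```python
def can_write(uid, gid, groups, owner_uid, owner_gid, mode):
    """Simplified POSIX write check: owner class, then group, then other.

    Illustrative only -- ignores root's override, ACLs, and capabilities.
    """
    if uid == owner_uid:
        return bool(mode & 0o200)               # owner write bit
    if owner_gid == gid or owner_gid in groups:
        return bool(mode & 0o020)               # group write bit
    return bool(mode & 0o002)                   # other write bit

# The failing case above: uid=1000090000, gid=0, groups={2001, 1000090000},
# directory drwxr-xr-x (0o755) owned by root:root.
print(can_write(1000090000, 0, {2001, 1000090000}, 0, 0, 0o755))     # False

# What the gid annotation intends: group owner 2001, mode 0o775.
print(can_write(1000090000, 0, {2001, 1000090000}, 0, 2001, 0o775))  # True
```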
I think I found the issue: according to the problem description, the PVC is named "glusterc1" but the claimName in the pod spec is "glusterc". That could be the problem. @jhou, can you please cross-check this?
Sorry, I pasted the wrong PVC. I just checked again; this still happens.

PVC:

```
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "glusterprovisioner"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
```

PV:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2001"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/storage-class: glusterprovisioner
  creationTimestamp: 2016-12-08T07:44:08Z
  name: pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  resourceVersion: "7635"
  selfLink: /api/v1/persistentvolumes/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  uid: 19f33d78-bd1a-11e6-838d-0e330f7df19e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: glusterc
    namespace: jhou
    resourceVersion: "7628"
    uid: 15172f13-bd1a-11e6-838d-0e330f7df19e
  glusterfs:
    endpoints: gluster-dynamic-glusterc
    path: vol_efcf5e57d1fdcea26d2566eb0e016c87
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
```

Pod:

```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "gluster",
    "labels": {
      "name": "gluster"
    }
  },
  "spec": {
    "containers": [{
      "name": "gluster",
      "image": "aosqe/hello-openshift",
      "imagePullPolicy": "IfNotPresent",
      "securityContext": {
        "privileged": true
      },
      "volumeMounts": [{
        "mountPath": "/mnt/gluster",
        "name": "gluster"
      }]
    }],
    "securityContext": {
      "fsGroup": 123456,
      "seLinuxContext": {
        "level": "s0:c13,c12"
      }
    },
    "volumes": [{
      "name": "gluster",
      "persistentVolumeClaim": {
        "claimName": "glusterc"
      }
    }]
  }
}
```

On the node:

```
[root@ip-172-18-4-238 ~]# mount|grep gluster
172.18.1.237:vol_efcf5e57d1fdcea26d2566eb0e016c87 on /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/
```

In the pod:

```
# oc exec -it gluster -- sh
/ $ cd /mnt/gluster/
/mnt/gluster $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/mnt/gluster $ touch file
touch: file: Permission denied
```
jhou, thanks for correcting it. Which version of the Heketi server is in use here? I expect the permissions on the /mnt/gluster mount point to be "775"; from your output they are "755", and I believe that is what is causing the issue.
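The 755-versus-775 distinction comes down to a single bit, the group write bit (0o020), which is exactly what the annotated gid 2001 needs on the mount point. A quick illustrative sketch (plain Python, rendering the modes the way `ls -ld` shows them):

```python
import stat

# 0o755 -> rwxr-xr-x: group 2001 cannot write.
# 0o775 -> rwxrwxr-x: group 2001 can write.
for mode in (0o755, 0o775):
    rendered = stat.filemode(stat.S_IFDIR | mode)
    verdict = "allowed" if mode & 0o020 else "denied"
    print(f"{rendered} ({oct(mode)}): group write {verdict}")
```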
Updated the Heketi server to heketi-3.1.0-3.el7rhgs.x86_64. Retested the scenario and now it works!

In the pod:

```
/mnt/gluster $ cd /
/ $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/ $ ls -ld /mnt/gluster/
drwxrwxr-x 4 root 2001 4096 Dec 8 09:51 /mnt/gluster/
```

On the node:

```
[root@ip-172-18-4-238 ~]# mount|grep glusterfs
172.18.1.237:vol_052f7a0cfcb0b2718949b0ab965867a0 on /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/
drwxrwxr-x. root 2001 system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/
```
I think the requirement for Heketi server (version > 3) needs to be documented.
Thanks, Jhou, for the quick verification! We will make sure it is documented.
As soon as you get a PR into the openshift-docs repo, can you set this BZ to MODIFIED?
(In reply to Eric Paris from comment #9)
> As soon as you get a PR into the openshift-docs repo, can you set this BZ to
> MODIFIED?

Sure, Eric!
This PR addresses it: https://github.com/openshift/openshift-docs/pull/3371
The requirement for Heketi version >=3 is documented. This bug is now verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1235