Bug 1402355 - [GlusterFS Provisioner] Gid is not applied on provisioned volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Target Milestone: ---
Target Release: 3.4.z
Assignee: Humble Chirammal
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-07 10:23 UTC by Jianwei Hou
Modified: 2017-05-18 09:27 UTC
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-18 09:27:27 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1235 0 normal SHIPPED_LIVE OpenShift Container Platform 3.5, 3.4, 3.3, and 3.1 bug fix update 2017-05-18 13:15:52 UTC

Description Jianwei Hou 2016-12-07 10:23:50 UTC
Description of problem:
After provisioning a GlusterFS volume, a gid is annotated on the PV. The container also has this gid added to its supplemental groups, but the volume is still not accessible unless the pod is privileged.

Version-Release number of selected component (if applicable):
openshift v3.4.0.33+71c05b2
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
Always

Steps to Reproduce:
1. Create a StorageClass. By default, the provisioner uses a gidMin of 2000.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterprovisioner
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "<hidden>"
  restuser: "xxx"
  restuserkey: "xxx"
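
For reference, the gid range used for provisioned volumes can also be set explicitly through the gidMin/gidMax StorageClass parameters rather than relying on the defaults (a sketch; the values below are only illustrative):

```
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterprovisioner
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "<hidden>"
  restuser: "xxx"
  restuserkey: "xxx"
  gidMin: "2000"
  gidMax: "2147483647"
```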

2. Create a PVC using the StorageClass
{
   "kind": "PersistentVolumeClaim",
   "apiVersion": "v1",
   "metadata": {
     "name": "glusterc1",
     "annotations": {
     "volume.beta.kubernetes.io/storage-class": "glusterprovisioner"
     }
   },
   "spec": {
     "accessModes": [
       "ReadWriteOnce"
     ],
     "resources": {
       "requests": {
         "storage": "10Gi"
       }
     }
   }
}


3. Create a Pod
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "gluster1",
        "labels": {
            "name": "gluster"
        }
    },
    "spec": {
        "containers": [{
            "name": "gluster",
            "image": "aosqe/hello-openshift",
            "imagePullPolicy": "IfNotPresent",
            "volumeMounts": [{
                "mountPath": "/mnt/gluster",
                "name": "gluster"
            }]
        }],
        "securityContext": {
            "seLinuxContext": {
                 "level": "s0:c13,c12"
            }
        },
        "volumes": [{
            "name": "gluster",
            "persistentVolumeClaim": {
                "claimName": "glusterc"
            }
        }]
    }
}
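
As an aside, the supplemental group can also be declared explicitly in the pod-level securityContext instead of relying on the PV's gid annotation (a sketch; it assumes the provisioned volume was assigned gid 2001, as in the actual results):

```
"securityContext": {
    "supplementalGroups": [2001],
    "seLinuxContext": {
        "level": "s0:c13,c12"
    }
}
```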

4. Exec into the container and write to the volume mount directory.

Actual results:
After step 2: A PV was provisioned with gid "2001" annotated
```
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2001"
```

After step 3: The pod entered the Running state

After step 4: The write failed; the volume mount directory does not allow writes for group "2001":
/ $ cd /mnt/gluster/
/mnt/gluster $ id
uid=1000090000 gid=0(root) groups=2001,1000090000
/mnt/gluster $ touch file
touch: file: Permission denied
/mnt/gluster $ ls -ld .
drwxr-xr-x    4 root     root          4096 Dec  7 07:08 .
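
The denial comes down to the directory's ownership and mode: the mount point is root:root with mode 755, so neither the supplemental gid 2001 nor the group-read-execute bits grant write access to the unprivileged uid. The fix (Heketi >= 3, per the later comments) sets the allocated gid as the group owner with mode 775. A minimal local sketch, with a scratch directory standing in for the GlusterFS mount point, shows the only difference in mode between the two states:

```shell
# Local illustration (assumption: any Linux shell; a scratch directory
# stands in for the mount point). Broken state: mode 755, no group write.
# Fixed state (Heketi >= 3): mode 775 with the allocated gid as group owner.
d=$(mktemp -d)
chmod 755 "$d"        # broken: group has r-x, so no write via the gid
stat -c '%a' "$d"     # prints 755
chmod 775 "$d"        # fixed: group-write bit set
stat -c '%a' "$d"     # prints 775
rmdir "$d"
```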

On node:
[root@ip-172-18-13-113 ~]# mount|grep pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4
172.18.4.10:vol_25fa0c33091f5964b032ac7374f79783 on /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@ip-172-18-13-113 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0    /var/lib/origin/openshift.local.volumes/pods/7d922e80-bc4c-11e6-be56-0ede06b6a4a4/volumes/kubernetes.io~glusterfs/pvc-ff02ee09-bc4b-11e6-be56-0ede06b6a4a4/


Expected results:
The container should be able to read from and write to the directory.

Additional info:

Comment 1 Humble Chirammal 2016-12-08 04:47:01 UTC
I think I see the issue: according to the problem description, the PVC is named "glusterc1", but the pod spec references "glusterc". That could be the problem.

@jhou, can you please cross-check this?

Comment 2 Jianwei Hou 2016-12-08 07:51:08 UTC
Sorry, I pasted the wrong PVC. I just checked again; this still happens.

PVC
```
{
   "kind": "PersistentVolumeClaim",
   "apiVersion": "v1",
   "metadata": {
     "name": "glusterc",
     "annotations": {
     "volume.beta.kubernetes.io/storage-class": "glusterprovisioner"
     }
   },
   "spec": {
     "accessModes": [
       "ReadWriteOnce"
     ],
     "resources": {
       "requests": {
         "storage": "10Gi"
       }
     }
   }
}
```

PV
```
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2001"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/storage-class: glusterprovisioner
  creationTimestamp: 2016-12-08T07:44:08Z
  name: pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  resourceVersion: "7635"
  selfLink: /api/v1/persistentvolumes/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  uid: 19f33d78-bd1a-11e6-838d-0e330f7df19e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: glusterc
    namespace: jhou
    resourceVersion: "7628"
    uid: 15172f13-bd1a-11e6-838d-0e330f7df19e
  glusterfs:
    endpoints: gluster-dynamic-glusterc
    path: vol_efcf5e57d1fdcea26d2566eb0e016c87
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
```

Pod
```
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "gluster",
        "labels": {
            "name": "gluster"
        }
    },
    "spec": {
        "containers": [{
            "name": "gluster",
            "image": "aosqe/hello-openshift",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
                "privileged": true
            },
            "volumeMounts": [{
                "mountPath": "/mnt/gluster",
                "name": "gluster"
            }]
        }],
        "securityContext": {
            "fsGroup": 123456,
            "seLinuxContext": {
                 "level": "s0:c13,c12"
            }
        },
        "volumes": [{
            "name": "gluster",
            "persistentVolumeClaim": {
                "claimName": "glusterc"
            }
        }]
    }
}
```

[root@ip-172-18-4-238 ~]# mount|grep gluster
172.18.1.237:vol_efcf5e57d1fdcea26d2566eb0e016c87 on /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0    /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/


# oc exec -it gluster -- sh
/ $ cd /mnt/gluster/
/mnt/gluster $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/mnt/gluster $ touch file
touch: file: Permission denied

Comment 3 Humble Chirammal 2016-12-08 09:21:46 UTC
jhou, thanks for correcting it. Which version of the Heketi server is in use here? I expect permissions of "775" on the /mnt/gluster mount point, but your output shows "755", and I believe that is causing the issue.

Comment 6 Jianwei Hou 2016-12-08 10:05:33 UTC
Updated the Heketi server to heketi-3.1.0-3.el7rhgs.x86_64. Retested the scenario, and now it works!

/mnt/gluster $ cd 
/ $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/ $ ls -ld /mnt/gluster/
drwxrwxr-x    4 root     2001          4096 Dec  8 09:51 /mnt/gluster/

[root@ip-172-18-4-238 ~]# mount|grep glusterfs
172.18.1.237:vol_052f7a0cfcb0b2718949b0ab965867a0 on /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/
drwxrwxr-x. root 2001 system_u:object_r:fusefs_t:s0    /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/

Comment 7 Jianwei Hou 2016-12-08 10:20:49 UTC
I think the requirement for the Heketi server (version >= 3) needs to be documented.

Comment 8 Humble Chirammal 2016-12-08 12:52:27 UTC
Thanks, Jhou, for the quick verification! We will make sure it is documented.

Comment 9 Eric Paris 2016-12-08 22:22:51 UTC
As soon as you get a PR to the openshift docs repo can you set this BZ MODIFIED?

Comment 10 Humble Chirammal 2016-12-09 04:38:52 UTC
(In reply to Eric Paris from comment #9)
> As soon as you get a PR to the openshift docs repo can you set this BZ
> MODIFIED?

sure Eric!

Comment 11 Humble Chirammal 2016-12-19 07:20:17 UTC
This PR addresses it: https://github.com/openshift/openshift-docs/pull/3371

Comment 13 Jianwei Hou 2017-03-20 03:00:08 UTC
The requirement for Heketi version >=3 is documented. This bug is now verified.

Comment 15 errata-xmlrpc 2017-05-18 09:27:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1235

