Bug 1402355
Summary: [GlusterFS Provisioner] GID is not applied on provisioned volume

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jianwei Hou <jhou> |
| Component: | Storage | Assignee: | Humble Chirammal <hchiramm> |
| Status: | CLOSED ERRATA | QA Contact: | Jianwei Hou <jhou> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.4.0 | CC: | aos-bugs, eparis, jhou, pprakash, rcyriac |
| Target Milestone: | --- | | |
| Target Release: | 3.4.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-05-18 09:27:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Jianwei Hou
2016-12-07 10:23:50 UTC

Comment (Humble Chirammal):

I think I see the issue: according to the problem description, the PVC is "glusterc1" while the claim name in the pod spec is "glusterc". That could be the cause. @jhou, can you please cross-check?

Comment (Jianwei Hou):

Sorry, I pasted the wrong PVC. I just checked again, and this still happens.

PVC:

```json
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterc",
        "annotations": {
            "volume.beta.kubernetes.io/storage-class": "glusterprovisioner"
        }
    },
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}
```

PV:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2001"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/storage-class: glusterprovisioner
  creationTimestamp: 2016-12-08T07:44:08Z
  name: pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  resourceVersion: "7635"
  selfLink: /api/v1/persistentvolumes/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e
  uid: 19f33d78-bd1a-11e6-838d-0e330f7df19e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: glusterc
    namespace: jhou
    resourceVersion: "7628"
    uid: 15172f13-bd1a-11e6-838d-0e330f7df19e
  glusterfs:
    endpoints: gluster-dynamic-glusterc
    path: vol_efcf5e57d1fdcea26d2566eb0e016c87
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
```

Pod:

```json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "gluster",
        "labels": {
            "name": "gluster"
        }
    },
    "spec": {
        "containers": [{
            "name": "gluster",
            "image": "aosqe/hello-openshift",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
                "privileged": true
            },
            "volumeMounts": [{
                "mountPath": "/mnt/gluster",
                "name": "gluster"
            }]
        }],
        "securityContext": {
            "fsGroup": 123456,
            "seLinuxContext": {
                "level": "s0:c13,c12"
            }
        },
        "volumes": [{
            "name": "gluster",
            "persistentVolumeClaim": {
                "claimName": "glusterc"
            }
        }]
    }
}
```

On the node:

```
[root@ip-172-18-4-238 ~]# mount | grep gluster
172.18.1.237:vol_efcf5e57d1fdcea26d2566eb0e016c87 on /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/
drwxr-xr-x. root root system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/5fd391c7-bd1a-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-15172f13-bd1a-11e6-838d-0e330f7df19e/
```

Inside the pod:

```
# oc exec -it gluster -- sh
/ $ cd /mnt/gluster/
/mnt/gluster $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/mnt/gluster $ touch file
touch: file: Permission denied
```

Comment (Humble Chirammal):

jhou, thanks for correcting it. Which version of the Heketi server is in use here? I expect permissions of "775" on the /mnt/gluster mount point; from your output it is "755", and I believe that is what causes the issue.

Comment (Jianwei Hou):

Updated the Heketi server to heketi-3.1.0-3.el7rhgs.x86_64. Retested the scenario, and now it works!

```
/mnt/gluster $ cd
/ $ id
uid=1000130000 gid=0(root) groups=2001,1000130000
/ $ ls -ld /mnt/gluster/
drwxrwxr-x 4 root 2001 4096 Dec 8 09:51 /mnt/gluster/
```

```
[root@ip-172-18-4-238 ~]# mount | grep glusterfs
172.18.1.237:vol_052f7a0cfcb0b2718949b0ab965867a0 on /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@ip-172-18-4-238 ~]# ls -lZd /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/
drwxrwxr-x. root 2001 system_u:object_r:fusefs_t:s0 /var/lib/origin/openshift.local.volumes/pods/fa9c65a0-bd2b-11e6-838d-0e330f7df19e/volumes/kubernetes.io~glusterfs/pvc-f3006d71-bd2b-11e6-838d-0e330f7df19e/
```

Comment (Humble Chirammal):

I think the requirement for the Heketi server (version > 3) needs to be documented. Thanks jhou for the quick verification! We will make sure it is documented.

Comment (Eric Paris):

As soon as you get a PR to the openshift-docs repo, can you set this BZ to MODIFIED?

Comment (Humble Chirammal):

(In reply to Eric Paris from comment #9)
> As soon as you get a PR to the openshift docs repo can you set this BZ
> MODIFIED?

Sure, Eric! This PR addresses it: https://github.com/openshift/openshift-docs/pull/3371

Comment (Jianwei Hou):

The requirement for Heketi version >= 3 is documented. This bug is now verified.

Comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1235
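For context, the PVC above requests the StorageClass "glusterprovisioner" via the `volume.beta.kubernetes.io/storage-class` annotation. A StorageClass of that name for the GlusterFS provisioner would look roughly like the following sketch; the `resturl` and auth settings are placeholders for illustration, not values taken from this bug, and the exact parameters supported depend on the OpenShift/Kubernetes release:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterprovisioner
provisioner: kubernetes.io/glusterfs
parameters:
  # Placeholder Heketi REST endpoint; must point at a Heketi >= 3 server
  # for the provisioned volume to get the 0775 root:<gid> permissions.
  resturl: "http://heketi.example.com:8081"
  restauthenabled: "false"
```

Dynamic provisioning against this class is what stamps the `pv.beta.kubernetes.io/gid` annotation onto the resulting PV, so the Heketi server version behind `resturl` is what ultimately determines whether the GID is usable.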
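The failure mode above comes down to plain POSIX permission bits. The pod runs as uid 1000130000 with supplemental groups 0, 2001, and 1000130000; the older Heketi created the volume root as root:root with mode 0755 (no group write), while the fixed Heketi creates it as root:2001 with mode 0775. A minimal sketch of the simplified discretionary access check (a hypothetical helper, not code from Kubernetes or Heketi) illustrates why `touch` failed before the upgrade and succeeded after:

```python
import stat

def can_write(uid, gids, owner_uid, owner_gid, mode):
    """Simplified POSIX write check: owner bits if uid matches,
    group bits if the owner's gid is among the caller's groups,
    otherwise the 'other' bits (no root override modeled)."""
    if uid == owner_uid:
        return bool(mode & stat.S_IWUSR)
    if owner_gid in gids:
        return bool(mode & stat.S_IWGRP)
    return bool(mode & stat.S_IWOTH)

pod_uid = 1000130000
pod_gids = {0, 2001, 1000130000}  # gid=0(root) groups=2001,1000130000

# Old Heketi: volume root owned root:root, mode 0755 -> Permission denied
print(can_write(pod_uid, pod_gids, 0, 0, 0o755))     # False
# Heketi >= 3: volume root owned root:2001, mode 0775 -> write allowed
print(can_write(pod_uid, pod_gids, 0, 2001, 0o775))  # True
```

Note that the pod's `fsGroup` (123456) plays no role here: GlusterFS is a shared filesystem, so the kubelet does not chown/chgrp its contents; access is granted by injecting the PV's `pv.beta.kubernetes.io/gid` value (2001) into the pod's supplemental groups, which is why the directory's group and mode as created by Heketi decide the outcome.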