| Summary: | [GlusterFS Provisioner] Using different StorageClasses can cause duplicate gids to be assigned | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Jianwei Hou <jhou> |
| Component: | CNS-deployment | Assignee: | Humble Chirammal <hchiramm> |
| Status: | CLOSED NOTABUG | QA Contact: | Anoop <annair> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | unspecified | CC: | akhakhar, annair, aos-bugs, bchilds, eparis, hchiramm, jliggitt, jrivera, madam, mliyazud, mzywusko, pprakash, rhs-bugs, rreddy, rtalur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-08-02 15:02:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
I had a comment in the PR https://github.com/kubernetes/kubernetes/pull/37886/#issuecomment-264646662 considering the scenarios below:

1) Statically provisioned PVs are not accounted for at all.
2) If there are two SCs with the same range, the allocator will dispatch the same GIDs.

As per https://github.com/kubernetes/kubernetes/pull/37886/#issuecomment-264648794, this validation is not required/expected. Will have a discussion and confirm again.

Exactly, it was discussed when creating the patch that uniqueness of GIDs per storage class is sufficient. Mind you, the range can also be configured per SC. It would not be impossible to change the behavior of the provisioner to hand out unique IDs across all SCs that use the dynamic provisioner, but I'd want a broader consensus that this should be done.

I think gid allocation should be tracked per storage class, just like uid allocation is per project (two projects can be assigned overlapping uid ranges, and pods from those projects can end up with the same uid).

@Jordan do you want to call this NOTABUG, or is it even in need of a doc?

I would document that if unique gids are desired, a unique range should be given to each storage class.

(In reply to Jordan Liggitt from comment #6)
> I would document that if unique gids are desired, a unique range should be
> given to each storage class

Thanks, Jordan! I will get this documented under the CNS guide.
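For reference, a minimal sketch of what that documentation could recommend, assuming two hypothetical GlusterFS StorageClasses named sc-a and sc-b (the resturl/restuser/restuserkey values are placeholders): give each class its own non-overlapping gidMin/gidMax range so the per-class allocators can never hand out the same gid.

```yaml
# Hypothetical example only: two StorageClasses with non-overlapping gid ranges.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: sc-a
provisioner: kubernetes.io/glusterfs
parameters:
  gidMin: "2000"                              # sc-a owns 2000-2999
  gidMax: "2999"
  resturl: "http://heketi.example.com:8080"   # placeholder
  restuser: "admin"                           # placeholder
  restuserkey: "key"                          # placeholder
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: sc-b
provisioner: kubernetes.io/glusterfs
parameters:
  gidMin: "3000"                              # starts where sc-a's range ends, so the ranges never overlap
  gidMax: "3999"
  resturl: "http://heketi.example.com:8080"   # placeholder
  restuser: "admin"                           # placeholder
  restuserkey: "key"                          # placeholder
```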
Description of problem:
The same gid is assigned to different PVs when different StorageClasses have the same gidMin/gidMax. See 'Steps to Reproduce' for details.

Version-Release number of selected component (if applicable):
openshift v3.4.0.33+71c05b2
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
Always

Steps to Reproduce:
1. Create two StorageClasses with the same gidMin and gidMax, like this:

```
# oc get storageclass sc -o yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  creationTimestamp: 2016-12-07T06:03:46Z
  name: sc
  resourceVersion: "4368"
  selfLink: /apis/storage.k8s.io/v1beta1/storageclasses/sc
  uid: ea269043-bc42-11e6-be56-0ede06b6a4a4
parameters:
  gidMax: "2001"
  gidMin: "2000"
  resturl: <hidden>
  restuser: xxx
  restuserkey: xxx
provisioner: kubernetes.io/glusterfs

# oc get storageclass sc1 -o yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  creationTimestamp: 2016-12-07T06:07:28Z
  name: sc1
  resourceVersion: "4443"
  selfLink: /apis/storage.k8s.io/v1beta1/storageclasses/sc1
  uid: 6e6208ce-bc43-11e6-be56-0ede06b6a4a4
parameters:
  gidMax: "2001"
  gidMin: "2000"
  resturl: <hidden>
  restuser: xxx
  restuserkey: xxx
provisioner: kubernetes.io/glusterfs
```

2. Create two PVCs, one using each of the StorageClasses:

```
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
c1        Bound     pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4   10Gi       RWO           9m
c2        Bound     pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4   10Gi       RWO           6m

# oc describe pvc c1
Name:           c1
Namespace:      jhou
StorageClass:   sc
Status:         Bound
Volume:         pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4
Labels:         <none>
Capacity:       10Gi
Access Modes:   RWO
No events.

# oc describe pvc c2
Name:           c2
Namespace:      jhou
StorageClass:   sc1
Status:         Bound
Volume:         pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4
Labels:         <none>
Capacity:       10Gi
Access Modes:   RWO
No events.
```

3. After the two PVs are provisioned, list the PV info.

Actual results:
3. The gid '2000' is assigned to both PVs, as shown by the full PV objects below.
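A quicker way to see the duplicate, before inspecting the full PV objects below, is to grep the gid annotation straight out of each PV. This is only a convenience sketch reusing the PV names from the listing above:

```
# oc get pv pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4 -o yaml | grep 'pv.beta.kubernetes.io/gid'
    pv.beta.kubernetes.io/gid: "2000"

# oc get pv pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4 -o yaml | grep 'pv.beta.kubernetes.io/gid'
    pv.beta.kubernetes.io/gid: "2000"
```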
```
# oc get pv pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2000"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/storage-class: sc
  creationTimestamp: 2016-12-07T06:04:25Z
  name: pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4
  resourceVersion: "4386"
  selfLink: /api/v1/persistentvolumes/pvc-fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4
  uid: 01925690-bc43-11e6-be56-0ede06b6a4a4
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: c1
    namespace: jhou
    resourceVersion: "4378"
    uid: fcb2e4bd-bc42-11e6-be56-0ede06b6a4a4
  glusterfs:
    endpoints: gluster-dynamic-c1
    path: vol_0d637ad42d6d8cd366d8ea96700d43eb
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound

# oc get pv pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "2000"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/storage-class: sc1
  creationTimestamp: 2016-12-07T06:07:51Z
  name: pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4
  resourceVersion: "4456"
  selfLink: /api/v1/persistentvolumes/pvc-77e4766f-bc43-11e6-be56-0ede06b6a4a4
  uid: 7c32195e-bc43-11e6-be56-0ede06b6a4a4
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: c2
    namespace: jhou
    resourceVersion: "4450"
    uid: 77e4766f-bc43-11e6-be56-0ede06b6a4a4
  glusterfs:
    endpoints: gluster-dynamic-c2
    path: vol_b7237bf73d3262beb8254073fbadd396
  persistentVolumeReclaimPolicy: Delete
status:
  phase: Bound
```

Expected results:
No gid should be duplicated.

Additional info:
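A rough cluster-wide check for this condition, assuming shell access to oc: extract the gid annotation from every PV and print any value that occurs more than once. This is only a grep/awk sketch, not an official tool; with the two PVs above it prints the duplicated "2000".

```
# oc get pv -o yaml | grep 'pv.beta.kubernetes.io/gid' | awk '{print $2}' | sort | uniq -d
"2000"
```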