1. Proposed title of this feature request

Add possibility to ignore volume label in dynamic provisioning

3. What is the nature and description of the request?

Currently, if we create Cinder volumes dynamically, the zone label is added automatically to the PV [0]. Once the zone label is added to the volume, the pod cannot be scheduled in any zone other than the one the label names; the volume zone label effectively acts as a node selector for the pod.

[0] https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder_util.go#L214-L216

4. Why does the customer need this? (List the business requirements here)

I expect that we could override this setting. We have a use case where the compute zone names are zone-1, zone-2 and zone-3, but the Cinder volume zone name is nova. This means dynamic provisioning does not work unless we manually remove the PV label after every creation. I expect we could have the option not to add this volume label for Cinder. Without the label, the pod can be scheduled in any zone.

5. How would the customer like to achieve this? (List the functional requirements here)

The suggestion is to add a flag, ignore-volume-label, to the OpenStack cloudprovider.conf that overrides this labeling behavior.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

Yes, there is a Kubernetes upstream PR for this: https://github.com/kubernetes/kubernetes/pull/53523
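As a sketch of the requested configuration: the upstream PR linked above ended up implementing the option as `ignore-volume-az` in the `[BlockStorage]` section rather than `ignore-volume-label` (the section and key name shown here follow the later verification comment in this bug; the exact spelling depends on the shipped implementation):

```ini
; OpenStack cloud provider configuration (cloudprovider.conf)
; When enabled, the provisioner skips adding the
; failure-domain.beta.kubernetes.io/zone label to
; dynamically provisioned Cinder PVs.
[BlockStorage]
ignore-volume-az = yes
```

After changing the cloud provider configuration, the API server and controllers must be restarted for the setting to take effect.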
Verified this issue in OpenShift:

oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

Added the following to the OpenStack cloud provider file and restarted the API and controllers:

[BlockStorage]
ignore-volume-az = yes

Created a dynamic PVC; the zone label still exists:

# oc get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: cinder-dynamic-provisioner
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
    creationTimestamp: 2018-05-28T09:17:18Z
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      failure-domain.beta.kubernetes.io/zone: nova
    name: pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    namespace: ""
    resourceVersion: "53374"
    selfLink: /api/v1/persistentvolumes/pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    uid: eb1a7419-6257-11e8-aab4-fa163ea84fde
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    cinder:
      fsType: xfs
      volumeID: 32d29e0f-0d1a-4db4-8dcb-eca662b69228
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: pvc1
      namespace: wmeng
      resourceVersion: "53370"
      uid: e9e9b416-6257-11e8-b0f5-fa163e432045
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Verified in OpenShift:

oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

# uname -a
Linux wmengahproxy-master-etcd-1 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release
Red Hat Enterprise Linux Atomic Host release 7.4
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816