Bug 1500776

Summary: Add possibility to ignore volume label in dynamic provisioning
Product: OpenShift Container Platform
Reporter: Nicolas Nosenzo <nnosenzo>
Component: Storage
Assignee: Jan Safranek <jsafrane>
Status: CLOSED ERRATA
QA Contact: Qin Ping <piqin>
Severity: high
Docs Contact:
Priority: high
Version: 3.6.1
CC: aos-bugs, aos-storage-staff, dzhukous, eminguez, eparis, jokerman, jsafrane, meggen, mmccomas, moddi, piqin, sgordon
Target Milestone: ---
Target Release: 3.10.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: A new configuration option, "ignore-volume-az", has been added to the OpenStack cloud.conf file so that OpenShift does not create zone labels on PersistentVolumes. Reason: OpenStack Cinder and OpenStack Nova can have different topology zones. OpenShift works exclusively with Nova zones and ignores Cinder topology, so it makes no sense to set a label with a Cinder zone name on a PV when that name differs from the Nova zones; a pod that uses such a PV would be unschedulable by OpenShift. Result: Cluster administrators can turn off zone labeling of Cinder PVs and make their pods schedulable.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-30 19:09:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Nicolas Nosenzo 2017-10-11 12:56:41 UTC
1. Proposed title of this feature request
Add possibility to ignore volume label in dynamic provisioning

3. What is the nature and description of the request?
Currently, if we create Cinder volumes dynamically, the zone label is added to the PV automatically [0]. Once the zone label is on the volume, the pod cannot be scheduled in any zone other than the one the label names: the volume zone label effectively acts as a node selector for the pod.

[0] https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder_util.go#L214-L216
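The decision in the linked provisioner code can be sketched as follows. This is an illustrative Go snippet, not the upstream function signatures: `pvLabels` and its parameters are hypothetical names standing in for the logic that writes the Cinder availability zone into the PV's labels, and for the effect of the `ignore-volume-az` option that later landed upstream.

```go
package main

import "fmt"

// zoneLabel is the well-known Kubernetes zone label key set on PVs.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// pvLabels sketches the provisioning-time decision: by default the
// Cinder volume's availability zone is written into the PV labels,
// which pins pods using the PV to that zone; when ignoreVolumeAZ is
// true, the label is skipped and pods stay schedulable in any zone.
func pvLabels(volumeAZ string, ignoreVolumeAZ bool) map[string]string {
	labels := map[string]string{}
	if !ignoreVolumeAZ && volumeAZ != "" {
		labels[zoneLabel] = volumeAZ
	}
	return labels
}

func main() {
	// Default behavior: Cinder zone "nova" becomes a PV label.
	fmt.Println(pvLabels("nova", false))
	// With ignore-volume-az: no label, no zone pinning.
	fmt.Println(pvLabels("nova", true))
}
```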


4. Why does the customer need this? (List the business requirements here)
I expect that we could override this setting. We have a use case where the compute zone names are zone-1, zone-2 and zone-3, but the Cinder volume zone name is nova. This means dynamic provisioning does not work unless we manually remove the PV label after every creation. I expect that we could have the option not to add this volume label to Cinder volumes. The pod could then be scheduled in any zone, because the PV carries no zone label.

5. How would the customer like to achieve this? (List the functional requirements here)
The suggestion is to add a flag such as ignore-volume-label to the OpenStack cloudprovider.conf that disables this labeling behavior.
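As implemented upstream and verified later in this bug, the option that landed is named ignore-volume-az and lives in the [BlockStorage] section of the cloud provider configuration; a minimal fragment:

```ini
[BlockStorage]
# Do not write the Cinder availability zone into PV labels,
# so pods using the PV are not pinned to that zone.
ignore-volume-az = yes
```

The API server and controller services must be restarted for the change to take effect.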


7. Is there already an existing RFE upstream or in Red Hat bugzilla?
Yes, there is a Kubernetes upstream PR for this:

https://github.com/kubernetes/kubernetes/pull/53523

Comment 13 Qin Ping 2018-05-28 09:27:22 UTC
Verify this issue in openshift:
oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

Add the following to the OpenStack cloud provider file and restart the API and controller services:
[BlockStorage]
ignore-volume-az = yes

Create a dynamic PVC; the zone label still exists:

# oc get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: cinder-dynamic-provisioner
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
    creationTimestamp: 2018-05-28T09:17:18Z
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      failure-domain.beta.kubernetes.io/zone: nova
    name: pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    namespace: ""
    resourceVersion: "53374"
    selfLink: /api/v1/persistentvolumes/pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    uid: eb1a7419-6257-11e8-aab4-fa163ea84fde
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    cinder:
      fsType: xfs
      volumeID: 32d29e0f-0d1a-4db4-8dcb-eca662b69228
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: pvc1
      namespace: wmeng
      resourceVersion: "53370"
      uid: e9e9b416-6257-11e8-b0f5-fa163e432045
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Comment 16 Qin Ping 2018-05-29 02:01:18 UTC
Verified in openshift:
oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

# uname -a
Linux wmengahproxy-master-etcd-1 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Atomic Host release 7.4

Comment 18 errata-xmlrpc 2018-07-30 19:09:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816