Bug 1500776 - Add possibility to ignore volume label in dynamic provisioning
Summary: Add possibility to ignore volume label in dynamic provisioning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Jan Safranek
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-11 12:56 UTC by Nicolas Nosenzo
Modified: 2018-07-31 08:45 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: A new configuration option, "ignore-volume-az", has been added to the OpenStack cloud.conf file. When enabled, OpenShift does not add zone labels to dynamically provisioned PersistentVolumes. Reason: OpenStack Cinder and OpenStack Nova can have different topology zones. OpenShift schedules exclusively against Nova zones and ignores Cinder topology, so it makes no sense to label a PV with a Cinder zone name that differs from the Nova zones: a Pod that uses such a PV would be unschedulable. Result: Cluster administrators can turn off zone labeling of Cinder PVs and keep their Pods schedulable.
Clone Of:
Environment:
Last Closed: 2018-07-30 19:09:00 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1816 0 None None None 2018-07-30 19:09:46 UTC

Description Nicolas Nosenzo 2017-10-11 12:56:41 UTC
1. Proposed title of this feature request
Add possibility to ignore volume label in dynamic provisioning

3. What is the nature and description of the request?
Currently, when Cinder volumes are provisioned dynamically, the zone label is added to the PV automatically [0]. Once the zone label is set, the Pod using that volume can only be scheduled in the zone the label names; the volume's zone label effectively acts as a node selector for the Pod.

[0] https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder_util.go#L214-L216
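The logic referenced in [0] can be sketched as follows. This is an illustrative reconstruction, not the upstream code verbatim: the function name `buildPVLabels` and the boolean guard are hypothetical, but the label key and the overall behavior (zone written into PV labels unless the option suppresses it) match what the bug describes.

```go
package main

import "fmt"

// Well-known label key the scheduler treats as a topology constraint.
const labelZoneFailureDomain = "failure-domain.beta.kubernetes.io/zone"

// buildPVLabels sketches how the Cinder provisioner attaches the zone
// label to a new PV. ignoreVolumeAZ corresponds to the ignore-volume-az
// option: when true, no zone label is set and the Pod using the PV can
// be scheduled in any zone.
func buildPVLabels(volumeAZ string, ignoreVolumeAZ bool) map[string]string {
	labels := map[string]string{}
	if !ignoreVolumeAZ && volumeAZ != "" {
		labels[labelZoneFailureDomain] = volumeAZ
	}
	return labels
}

func main() {
	// With the option off, the Cinder zone (e.g. "nova") pins scheduling.
	fmt.Println(buildPVLabels("nova", false))
	// With the option on, the PV carries no zone label.
	fmt.Println(buildPVLabels("nova", true))
}
```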


4. Why does the customer need this? (List the business requirements here)
I expect that we could override this setting. We have a use case where the compute zones are named zone-1, zone-2, and zone-3, but the Cinder volume zone is named nova. This means dynamic provisioning does not work unless we manually remove the PV label after every creation. I expect that we could have the option not to add this volume label for Cinder; then the Pod can be provisioned in any zone, because the PV carries no zone label.

5. How would the customer like to achieve this? (List the functional requirements here)
The suggestion is to add a flag, ignore-volume-label, to the OpenStack cloudprovider.conf that overrides this labeling behavior.
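For reference, the option as eventually merged upstream (kubernetes PR 53523, linked in item 7) is named ignore-volume-az and lives in the [BlockStorage] section of the cloud provider configuration, as the verification in comment 13 shows:

```ini
[BlockStorage]
# When set, the provisioner does not copy the Cinder availability zone
# into the PV's failure-domain.beta.kubernetes.io/zone label.
ignore-volume-az = yes
```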


7. Is there already an existing RFE upstream or in Red Hat bugzilla?
Yes, there is kubernetes upstream PR for this:

https://github.com/kubernetes/kubernetes/pull/53523

Comment 13 Qin Ping 2018-05-28 09:27:22 UTC
Verify this issue in openshift:
oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

Add the following to the OpenStack cloud provider file and restart the API and controller services:
[BlockStorage]
ignore-volume-az = yes

Create a dynamic PVC; the zone label still exists:

# oc get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: cinder-dynamic-provisioner
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
    creationTimestamp: 2018-05-28T09:17:18Z
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      failure-domain.beta.kubernetes.io/zone: nova
    name: pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    namespace: ""
    resourceVersion: "53374"
    selfLink: /api/v1/persistentvolumes/pvc-e9e9b416-6257-11e8-b0f5-fa163e432045
    uid: eb1a7419-6257-11e8-aab4-fa163ea84fde
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    cinder:
      fsType: xfs
      volumeID: 32d29e0f-0d1a-4db4-8dcb-eca662b69228
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: pvc1
      namespace: wmeng
      resourceVersion: "53370"
      uid: e9e9b416-6257-11e8-b0f5-fa163e432045
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Comment 16 Qin Ping 2018-05-29 02:01:18 UTC
Verified in openshift:
oc v3.10.0-0.53.0
openshift v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8

# uname -a
Linux wmengahproxy-master-etcd-1 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Atomic Host release 7.4

Comment 18 errata-xmlrpc 2018-07-30 19:09:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

