Bug 1400248

Summary: GCE Persistent disks being created in zones without nodes when multizone=True
Product: OpenShift Container Platform
Reporter: Jason DeTiberus <jdetiber>
Component: Storage
Assignee: Bradley Childs <bchilds>
Status: CLOSED DUPLICATE
QA Contact: Jianwei Hou <jhou>
Severity: medium
Priority: medium
Docs Contact:
Version: 3.4.0
CC: aos-bugs, haowang, jialiu, jokerman, mmccomas, wmeng
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1397672
Environment:
Last Closed: 2016-12-01 17:47:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Jason DeTiberus 2016-11-30 18:01:48 UTC
+++ This bug was initially created as a clone of Bug #1397672 +++

Description of problem:
Currently, when installing OCP on the GCE platform with the cloud provider enabled, gce.conf contains the option multizone: true by default. If the customer prepares nodes only in a single zone A, dynamic provisioning creates the PV according to that multizone setting and may place it in a different zone than the nodes, which causes any pod that needs that PV to fail scheduling due to the NoVolumeZoneConflict scheduler predicate.
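
For context on why the pod fails: the dynamically provisioned PV ends up labeled with the zone of the disk that was created, and the NoVolumeZoneConflict predicate rejects nodes whose zone label does not match. The following is a minimal, simplified sketch of that check, not the actual scheduler code; only the well-known failure-domain label key is taken from upstream.

package main

import "fmt"

// zoneLabel is the well-known failure-domain label key used on both
// nodes and dynamically provisioned PVs.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeZoneConflict is a simplified stand-in for the NoVolumeZoneConflict
// check: a pod using the PV can only land on nodes whose zone label
// matches the PV's zone label.
func volumeZoneConflict(pvLabels, nodeLabels map[string]string) bool {
	pvZone, ok := pvLabels[zoneLabel]
	if !ok {
		return false // PV carries no zone constraint
	}
	return nodeLabels[zoneLabel] != pvZone
}

func main() {
	pv := map[string]string{zoneLabel: "us-central1-b"}   // disk created in zone b
	node := map[string]string{zoneLabel: "us-central1-a"} // all nodes are in zone a
	fmt.Println("conflict:", volumeZoneConflict(pv, node)) // true -> pod stays Pending
}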

Version-Release number of selected component (if applicable):
openshift v3.4.0.28+dfe3a66
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
always

Steps to Reproduce:
1. Prepare a cluster on GCE with all nodes in the same zone, and enable the cloud provider and PV dynamic provisioning.
2. Check gce.conf.
3. Create a pod that needs a PV.

Actual results:
2. default value:
[Global]
multizone = true
3. The pod fails to schedule due to NoVolumeZoneConflict.

Expected results:
We should have an option to set multizone to false
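
Until the installer exposes that option, the value can be set directly in the cloud provider config. A minimal sketch, assuming the installer-managed gce.conf; any other [Global] keys in the deployment are omitted here:

[Global]
multizone = false

The master and node services would typically need a restart after the change, since the cloud provider config is read at startup.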

Additional info:

--- Additional comment from Jason DeTiberus on 2016-11-30 13:00:10 EST ---

I believe we have 2 different issues here.

First, I agree we need to expose that setting to users.

Second, I think the behavior you are seeing is a bug. The dynamic provisioner should not be creating disks in zones that do not have any nodes. Looking at vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/gce/gce.go, it appears to use the value of gce.managedZones (which defaults to all zones in the region and is not overridable) instead of the GetAllZones() function, which returns only the zones that contain nodes.
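
To illustrate the difference, here is a minimal sketch, not the actual provisioner code; managedZones() and zonesWithNodes() are hypothetical stand-ins for gce.managedZones and GetAllZones():

package main

import (
	"fmt"
	"hash/fnv"
)

// managedZones stands in for gce.managedZones: with multizone=true this
// ends up being every zone in the region, whether or not nodes run there.
func managedZones() []string {
	return []string{"us-central1-a", "us-central1-b", "us-central1-c"}
}

// zonesWithNodes stands in for GetAllZones(): only zones that currently
// contain nodes.
func zonesWithNodes() []string {
	return []string{"us-central1-a"}
}

// pickZone spreads volumes across the candidate zones by hashing the PV
// name, roughly mimicking how the provisioner chooses a zone.
func pickZone(candidates []string, pvName string) string {
	h := fnv.New32a()
	h.Write([]byte(pvName))
	return candidates[int(h.Sum32())%len(candidates)]
}

func main() {
	pvName := "pvc-1234"
	fmt.Println("current behavior: ", pickZone(managedZones(), pvName))   // may pick b or c, which have no nodes
	fmt.Println("expected behavior:", pickZone(zonesWithNodes(), pvName)) // always a zone that has nodes
}

Having the provisioner consult the node-backed zone list (or at least honoring a disabled multizone setting) would avoid the NoVolumeZoneConflict failures described above.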

I will clone this bug and track the second issue against the cloned bug.

Comment 1 Bradley Childs 2016-12-01 17:47:47 UTC

*** This bug has been marked as a duplicate of bug 1398104 ***