Bug 1400248 - GCE Persistent disks being created in zones without nodes when multizone=True
Summary: GCE Persistent disks being created in zones without nodes when multizone=True
Keywords:
Status: CLOSED DUPLICATE of bug 1398104
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Bradley Childs
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-30 18:01 UTC by Jason DeTiberus
Modified: 2016-12-01 17:47 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1397672
Environment:
Last Closed: 2016-12-01 17:47:47 UTC
Target Upstream Version:



Description Jason DeTiberus 2016-11-30 18:01:48 UTC
+++ This bug was initially created as a clone of Bug #1397672 +++

Description of problem:
Currently, when installing OCP on the GCE platform with the cloud provider enabled, the generated gce.conf has the option multizone set to true by default. If the customer's nodes are all in a single zone (zone A), dynamic provisioning follows that multizone setting and can create the PV in a different zone than the nodes, so a pod that needs the PV fails to schedule due to the NoVolumeZoneConflict scheduler predicate.
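For reference, the option in question lives in the [Global] section of gce.conf; a file written with multizone enabled would contain something like the following sketch (other keys omitted, exact contents depend on the deployment):

[Global]
multizone = true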

Version-Release number of selected component (if applicable):
openshift v3.4.0.28+dfe3a66
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
always

Steps to Reproduce:
1. Prepare a cluster on GCE with all nodes in the same zone, and enable the cloud provider and PV dynamic provisioning.
2. Check the gce.conf.
3. Create a pod that needs a PV.

Actual results:
2. default value:
[Global]
multizone = false
3. Pod fails to schedule due to NoVolumeZoneConflict.

Expected results:
We should have an option to set multizone to false

Additional info:

--- Additional comment from Jason DeTiberus on 2016-11-30 13:00:10 EST ---

I believe we have 2 different issues here.

First, I agree we need to expose that setting to users.

Second, I think the behavior you are seeing is a bug. The dynamic provisioner should not be creating disks in zones that do not have any nodes. Looking at vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/gce/gce.go, it appears to use the value of gce.managedZones (which defaults to all zones in a region and is not overridable) instead of the GetAllZones() function, which returns the list of zones that contain nodes.
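To illustrate the difference between the two zone sources, here is a small, self-contained Go sketch. It is not the upstream Kubernetes code; the pickZone helper and the zone lists are hypothetical and only show why choosing from all managed zones can land a disk in a zone with no nodes, while choosing from the node-bearing zones cannot.

// Illustrative sketch only -- not the upstream provisioner code.
// It contrasts the two zone sources named above: managedZones (every zone
// in the region when multizone=true) versus the zones that actually
// contain nodes (what GetAllZones() reports).
package main

import (
	"fmt"
	"hash/fnv"
)

// pickZone spreads volumes across the candidate zones by hashing the PV
// name, similar in spirit to the provisioner's zone-spreading behavior.
func pickZone(zones []string, pvName string) string {
	h := fnv.New32a()
	h.Write([]byte(pvName))
	return zones[int(h.Sum32())%len(zones)]
}

func main() {
	// With multizone=true, managedZones covers the whole region,
	// including zones where the cluster has no nodes.
	managedZones := []string{"us-central1-a", "us-central1-b", "us-central1-c"}

	// Zones that actually contain nodes (what GetAllZones() would return).
	zonesWithNodes := []string{"us-central1-a"}

	pvName := "pvc-1234"
	fmt.Println("zone from managedZones:   ", pickZone(managedZones, pvName))  // may have no nodes
	fmt.Println("zone from zones with nodes:", pickZone(zonesWithNodes, pvName)) // always schedulable
}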

I will clone this bug and track the second issue against the cloned bug.

Comment 1 Bradley Childs 2016-12-01 17:47:47 UTC

*** This bug has been marked as a duplicate of bug 1398104 ***

