+++ This bug was initially created as a clone of Bug #1397672 +++

Description of problem:
Currently, when installing OCP on the GCE platform with the cloud provider enabled, gce.conf defaults to the option "multizone = true". If the customer prepares all of their nodes in a single zone A, dynamic persistent volume provisioning follows that multizone setting and can create a PV in a different zone than the nodes, which causes any pod that needs that PV to fail scheduling due to the NoVolumeZoneConflict scheduler predicate.

Version-Release number of selected component (if applicable):
openshift v3.4.0.28+dfe3a66
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
always

Steps to Reproduce:
1. Prepare a cluster on GCE with all nodes in the same zone, and enable the cloud provider and dynamic PV provisioning.
2. Check gce.conf.
3. Create a pod that needs a PV.

Actual results:
2. default value:
[Global]
multizone = false
3. The pod fails to schedule due to NoVolumeZoneConflict.

Expected results:
We should have an option to set multizone to false.

Additional info:

--- Additional comment from Jason DeTiberus on 2016-11-30 13:00:10 EST ---

I believe we have 2 different issues here.

First, I agree we need to expose that setting to users.

Second, I think the behavior you are seeing is a bug. The dynamic provisioner should not be creating disks in zones that do not have any nodes. Looking in vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/gce/gce.go, it looks like it is using the value of gce.managedZones (which defaults to all zones in a region and is not overridable) instead of using the GetAllZones() function, which returns a list of zones that contain nodes.

I will clone this bug and track the second issue against the cloned bug.
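
The distinction described above can be illustrated with a small, self-contained Go sketch. This is not the actual code from vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/gce/gce.go; the types and helpers below (node, chooseZoneForVolume, zonesWithNodes) are hypothetical stand-ins for the provider's managedZones list and its GetAllZones() behavior, showing why picking a provisioning zone from all managed zones can strand a PV in a zone that has no nodes:

package main

import (
	"fmt"
	"hash/fnv"
)

// node is a minimal stand-in for a cluster node and the GCE zone it runs in.
// In the real provider the zone comes from the node's failure-domain label.
type node struct {
	name string
	zone string
}

// chooseZoneForVolume imitates hash-based zone spreading: it hashes the PVC
// name and picks one of the candidate zones. Which candidate set is passed
// in is the whole point of the bug.
func chooseZoneForVolume(zones []string, pvcName string) string {
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}

// zonesWithNodes plays the role of GetAllZones(): only zones that actually
// contain at least one node.
func zonesWithNodes(nodes []node) []string {
	seen := map[string]bool{}
	var zones []string
	for _, n := range nodes {
		if !seen[n.zone] {
			seen[n.zone] = true
			zones = append(zones, n.zone)
		}
	}
	return zones
}

func main() {
	// All nodes live in a single zone, as in the reported setup.
	nodes := []node{
		{name: "node-1", zone: "us-central1-a"},
		{name: "node-2", zone: "us-central1-a"},
	}

	// The managed-zone list covers every zone in the region when multizone is on.
	managedZones := []string{"us-central1-a", "us-central1-b", "us-central1-c"}

	pvc := "pvc-registry-claim"

	// Problematic behavior: the candidate set is all managed zones, so the disk
	// can land in a zone with no nodes and the pod then fails NoVolumeZoneConflict.
	fmt.Println("zone from managed zones:", chooseZoneForVolume(managedZones, pvc))

	// Suggested behavior: restrict candidates to zones that contain nodes.
	fmt.Println("zone from node zones:   ", chooseZoneForVolume(zonesWithNodes(nodes), pvc))
}

Running the sketch, the first line may print a zone with no nodes while the second always prints us-central1-a, which is the behavior the second issue asks for.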
*** This bug has been marked as a duplicate of bug 1398104 ***