Bug 1490477
| Summary: | dynamic provisioning doesn't work well in multizone deployments | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Peter Schiffer <pschiffe> |
| Component: | Storage | Assignee: | Pavel Pospisil <ppospisi> |
| Status: | CLOSED DUPLICATE | QA Contact: | Liang Xia <lxia> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 3.6.0 | CC: | aos-bugs, aos-storage-staff, bchilds, eparis, jsafrane, pschiffe, wmeng |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-11-03 19:53:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Peter Schiffer
2017-09-11 17:20:59 UTC
Ah, now I see it. The PV is in zone europe-west3-c, but the app nodes are only in europe-west3-a and europe-west3-b. So the question now is: how are the zones selected? According to the master zones? In that case, the number of infra and app nodes in a multizone deployment should be at least the number of masters, I guess.

I thought PVs could be created in any zone that had GCE instances with the same project ID as set in the master config. So if you have non-OpenShift instances in the same project ID, you could run into this trouble. Please, would you add the output of the command below?

```
$ oc get nodes
```

Note: there is a Storage Class configuration parameter "zone" [1] that can be used to specify the zone in which the PV is provisioned.

Note: OpenShift 3.7 will have a Storage Class configuration parameter "zones" [2]. If the "zone" parameter is not specified in the Storage Class, the PV is provisioned in an arbitrary zone in the cluster. As far as I remember, there was a bug that allowed a PV to be provisioned in a zone where only masters resided. This bug was fixed by — I can't find whether the fix was for K8s 1.6 or 1.7.

[1] https://docs.openshift.org/latest/install_config/persistent_storage/dynamically_provisioning_pvs.html#gce-persistentdisk-gcePd
[2] https://trello.com/c/hIoJFosv/506-8-admins-can-configure-zones-in-storage-class

```
$ oc get nodes
NAME                  STATUS                     AGE   VERSION
ocp-infra-node-1227   Ready                      3d    v1.6.1+5115d708d7
ocp-infra-node-g4d5   Ready                      3d    v1.6.1+5115d708d7
ocp-infra-node-lg0s   Ready                      3d    v1.6.1+5115d708d7
ocp-master-k082       Ready,SchedulingDisabled   3d    v1.6.1+5115d708d7
ocp-master-l9lx       Ready,SchedulingDisabled   3d    v1.6.1+5115d708d7
ocp-master-tdv1       Ready,SchedulingDisabled   3d    v1.6.1+5115d708d7
ocp-node-2lrj         Ready                      3d    v1.6.1+5115d708d7
ocp-node-dgpl         Ready                      3d    v1.6.1+5115d708d7
```

But I'm going to increase the number of app nodes and try again.
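For reference, a Storage Class using the "zone" parameter could look roughly like the sketch below. The class name and zone value here are illustrative examples, not taken from the reported cluster:

```yaml
# Sketch: a GCE PD Storage Class that pins dynamic provisioning to one zone,
# so PVs are never created in a zone without schedulable nodes.
# "pd-in-app-zone" and the zone value are hypothetical examples.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: pd-in-app-zone
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: europe-west3-a
```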
This bug is discussed upstream here: https://github.com/kubernetes/kubernetes/issues/50115

There is no solution so far. As a workaround, you should always have at least one node in every zone that has a master.

Thanks. I can confirm that adding an app node in my case solved the problem.

*** This bug has been marked as a duplicate of bug 1509028 ***
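The workaround above amounts to checking that no zone contains only unschedulable masters. A minimal sketch of that check, using illustrative node names and zone assignments (not the actual topology of the reported cluster), could be:

```python
# Sketch: find zones that contain a master but no schedulable (app/infra)
# node. A dynamically provisioned PV placed in such a zone could not be
# mounted by any pod, which is the failure mode described in this bug.
# The node names and zones below are hypothetical examples.

def master_only_zones(nodes):
    """nodes: list of (name, zone, schedulable) tuples."""
    master_zones = {zone for _, zone, schedulable in nodes if not schedulable}
    app_zones = {zone for _, zone, schedulable in nodes if schedulable}
    # Zones with masters but no schedulable node are the risky ones.
    return sorted(master_zones - app_zones)

cluster = [
    ("ocp-master-k082", "europe-west3-a", False),
    ("ocp-master-l9lx", "europe-west3-b", False),
    ("ocp-master-tdv1", "europe-west3-c", False),  # master-only zone
    ("ocp-node-2lrj",   "europe-west3-a", True),
    ("ocp-node-dgpl",   "europe-west3-b", True),
]

print(master_only_zones(cluster))  # → ['europe-west3-c']
```

With the workaround applied (an app node added in europe-west3-c), the function would return an empty list.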