| Summary: | OpenShift Ansible installer needs an option to control the multizone value in /etc/origin/cloudprovider/gce.conf | |||
|---|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Wang Haoran <haowang> | |
| Component: | Installer | Assignee: | Scott Dodson <sdodson> | |
| Status: | CLOSED WONTFIX | QA Contact: | Johnny Liu <jialiu> | |
| Severity: | high | Docs Contact: | ||
| Priority: | medium | |||
| Version: | 3.4.0 | CC: | aos-bugs, jiajliu, jokerman, lxia, mmccomas, wmeng | |
| Target Milestone: | --- | |||
| Target Release: | 3.10.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | Doc Type: | If docs needed, set a value | ||
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1400248 (view as bug list) | Environment: | ||
| Last Closed: | 2018-05-02 17:42:30 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
Description

Wang Haoran 2016-11-23 06:53:39 UTC
I believe we have two different issues here. First, I agree we need to expose that setting to users. Second, I think the behavior you are seeing is a bug: the dynamic provisioner should not be creating disks in zones that do not have any nodes. Looking in vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/gce/gce.go, it looks like it is using the value of gce.managedZones (which defaults to all zones in a region and is not overridable) instead of using the GetAllZones() function, which returns the list of zones that actually contain nodes. I will clone this bug and track the second issue against the cloned bug.

Tracking the GCE provisioning bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1400248. Since the remaining issue is to expose the multizone setting in the GCE cloud config, I'm setting this to UpcomingRelease.

Seth, you changed the default to be multizone here: https://github.com/openshift/openshift-ansible/pull/2728. Does this BZ provide a valid use case for it being configurable, or is something else amiss?

We switched to "multizone = true" by default because if nodes are started in a zone other than the master's zone, those nodes are removed from the cluster: the master doesn't see them in the inventory of instances from the cloud provider, since the query is limited to the master's zone. https://bugzilla.redhat.com/show_bug.cgi?id=1390160#c8

multizone is kinda weird now that we have cluster federation. I think the recommended way forward is to use cluster federation for clusters in different zones. I can see where the dynamic provisioner might have a chicken-and-egg problem, since it doesn't know which zone to allocate the storage from until it knows which node, and therefore which zone, the pod requiring the storage will be scheduled to. Even if you allocate storage from a zone that has nodes, there is no guarantee that the node the pod lands on is in the same zone as the storage. From the point of view that cluster federation is the proper way to do cross-zone cluster management, "multizone = false" should be set. I think there is merit in making this settable in the installer: if a user is doing single-zone clusters with federation, they should set it to false; if they are doing cross-zone in a single cluster, they should set it to true.

Clayton says that we shouldn't be setting this by default and we should allow people to opt in, and that it's somewhat critical, so moving to 3.7.0.
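For reference, a minimal sketch of the file this bug asks the installer to parameterize, assuming the gcfg format parsed by the Kubernetes GCE cloud provider of that era; the project-id, network-name, and node-tags values are illustrative placeholders, not values taken from this bug:

```ini
# /etc/origin/cloudprovider/gce.conf -- minimal sketch, not a verbatim
# rendering of what openshift-ansible writes.
[Global]
project-id = example-project          ; placeholder GCP project
network-name = default                ; placeholder network
node-tags = example-cluster-node     ; placeholder instance tag
# The setting at issue: "true" makes the cloud provider (including the
# dynamic PD provisioner) consider every zone in the region; "false"
# limits instance and disk queries to the master's zone.
multizone = true
```

Since the bug was closed WONTFIX, no installer inventory variable for this is confirmed here; an installer option would in effect just template the final line above.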