Bug 1365398
| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Summary | Dynamic provisioned volume is not in the same AZ with instance | | |
| Product | OpenShift Container Platform | Reporter | Chao Yang <chaoyang> |
| Component | Documentation | Assignee | Vikram Goyal <vigoyal> |
| Status | CLOSED CURRENTRELEASE | QA Contact | Vikram Goyal <vigoyal> |
| Severity | high | Docs Contact | Vikram Goyal <vigoyal> |
| Priority | high | | |
| Version | 3.6.0 | CC | akostadi, aos-bugs, bchilds, dma, dyocum, eparis, ghuang, jgoulding, jokerman, jsafrane, lxia, mmccomas, sdodson, tatanaka, vigoyal, wmeng |
| Target Milestone | --- | Keywords | OpsBlocker, Reopened |
| Target Release | --- | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | | Doc Type | If docs needed, set a value |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2017-09-19 06:05:33 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
Description
Chao Yang 2016-08-09 07:48:52 UTC

I saw it once or twice. Looking at the code, Kubernetes lists all running AWS instances and randomly selects a zone that is used by one of them. It happens only on your shared AWS account; it should work if Kubernetes is installed on a dedicated AWS project where all AWS instances are Kubernetes nodes. Filed https://github.com/kubernetes/kubernetes/issues/30265 about it. Current workaround: add the tag "Name=KubernetesCluster,Value=<clusterid>" to all instances of the same cluster (a concrete command sketch appears at the end of this report).

Removed the 'testblocker' keyword since the workaround works for us.

I think the upstream issue is saying that the tagging is not a 'workaround' but is the 'design'. I think this is 'working as expected', and I do not believe there is anything left to fix in this BZ.

Closing as working as designed per the upstream comment (use the tagging to influence the PV zone).

We need to document this in case customers run into the same problem. Tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1367617

openshift-ansible doesn't currently perform any instance manipulation, but that work starts as part of 3.7. Is there another suggested short-term fix that you'd like?

*** Bug 1468756 has been marked as a duplicate of this bug. ***

https://github.com/openshift/openshift-ansible/pull/4726 makes it mandatory to specify a cluster ID when you're using the AWS provider, or to explicitly state that you're only running one cluster per account (see the inventory sketch below).

Based on the following from Hemant Kumar, I'm moving this to be a Docs bug. While the Ansible installer could update aws.conf, this seems like a bad idea because it's yet another item that needs to be kept in sync: "Also, there is no need to update the aws.conf file, because if KubernetesClusterTag is not present in aws.conf then the tag value is picked from the master instance tag." (An aws.conf sketch also appears below.)

In fact, the docs already mention this, though only in a section specific to AWS dynamic volumes. It should probably be moved to a more prominent location, and it needs to be updated to reflect the new label. Here's a PR that does the latter: https://github.com/openshift/openshift-docs/pull/4783
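For illustration, a minimal sketch of the tagging workaround using the AWS CLI. The instance IDs and the cluster ID `mycluster` are placeholders, not values from this bug; substitute the IDs of every instance in your cluster and your actual cluster ID.

```sh
# Tag every instance that belongs to the same cluster so the AWS cloud
# provider only considers these instances' zones when provisioning volumes.
# Instance IDs and "mycluster" are placeholder values.
aws ec2 create-tags \
  --resources i-0abc123def4567890 i-0def456abc7890123 \
  --tags Key=KubernetesCluster,Value=mycluster
```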
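For the openshift-ansible change referenced above (PR 4726), here is a hedged sketch of what the corresponding inventory settings might look like. The variable names reflect my reading of that PR and the installer docs of that era; they are assumptions, not confirmed by this bug.

```ini
; [OSEv3:vars] excerpt -- enable the AWS cloud provider and declare a
; cluster ID; variable names assumed, "mycluster" is a placeholder.
[OSEv3:vars]
openshift_cloudprovider_kind=aws
openshift_clusterid=mycluster
```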
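Finally, Hemant Kumar's quoted comment refers to the `KubernetesClusterTag` option in aws.conf. Per that comment it normally does not need to be set, since the value is picked up from the master instance's tag, but for completeness here is a sketch of an aws.conf that sets it explicitly. The file path and zone are assumptions for illustration.

```ini
; /etc/origin/cloudprovider/aws.conf (path assumed) -- explicit cluster
; tag; usually unnecessary because the master instance's tag is used.
[Global]
Zone = us-east-1d
KubernetesClusterTag = mycluster
```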
I saw it once or twice. Looking at the code, Kubernetes lists all running AWS instances and randomly selects a zone that is used by one of them. It happens only on your shared AWS account. It should work if Kubernetes is installed on a dedicated AWS project where all AWS instances are Kubernetes nodes. Filled https://github.com/kubernetes/kubernetes/issues/30265 about it. Current work around: Add tag "Name=KubernetesCluster,Value=<clusterid>" to all instances of a same cluster. Removed 'testblocker' keyword since the work around works for us. I think the upstream issue is saying that the tagging is not a 'work around' but is the 'design'. I think this is 'working as expected'. I do not believe there is anything left to fix in this BZ. Closing as working as designed per upstreams comment (use the tagging to influence PV zone) We need to document this in case customers runs into same problem. Tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1367617 openshift-ansible doesn't currently perform any instance manipulation but we start that work as part of 3.7. Is there another suggested short term fix that you'd like? *** Bug 1468756 has been marked as a duplicate of this bug. *** https://github.com/openshift/openshift-ansible/pull/4726 makes it mandatory to specify a cluster id when you're using the AWS provider or explicitly state that you're only running one cluster per account. Based on the following from Hemant Kumar I'm moving this to be a Docs bug. While the ansible installer could update aws.conf this seems like a bad idea because it's yet another item that needs to be kept in sync. "Also, there is no need to update aws.conf file, because if KubernetesClusterTag is not present in aws.conf then the tag value is picked from master instance tag." Infact, the docs already mention this, however in a section specific to AWS dynamic volumes. It should probably be moved to a more prominent location and it needs to be updated to reflect the new label. Here's a PR that does the latter. https://github.com/openshift/openshift-docs/pull/4783 |