Bug 1505464
Summary: | OpenShift 3.7 installs fail on AWS with incorrect assertion that clusterid not set. | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Mike Fiedler <mifiedle>
Component: | Installer | Assignee: | Kenny Woodson <kwoodson>
Status: | CLOSED DUPLICATE | QA Contact: | Johnny Liu <jialiu>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 3.7.0 | CC: | aos-bugs, jokerman, mifiedle, mmccomas, vlaad
Target Milestone: | --- | |
Target Release: | 3.7.0 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-10-24 14:37:14 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Mike Fiedler
2017-10-23 15:58:06 UTC
Instances tagged with Key=kubernetes.io/cluster/xxxx are failing too. Not related to KubernetesCluster.

AWS console:
kubernetes.io/cluster/aosqe-g9w    svt

ansible output:
Play: Initialize host facts
Task: Ensure clusterid is set along with the cloudprovider
Message: Ensure that the openshift_clusterid is set and that all infrastructure has the required tags. For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/xxxx,Value=clusterid where xxxx and clusterid are unique per cluster. In versions prior to 3.6, this was Key=KubernetesCluster,Value=clusterid.

The problem is that unless the installer is responsible for provisioning the instances, we have no reasonable way to assure that instances are labeled appropriately, other than to ask that openshift_clusterid is set. I assume that if you set this variable the installation works as expected? If so, then this is working as designed. We may need to re-word the error message to make it abundantly clear that that is the nature of the check.

https://github.com/openshift/openshift-docs/issues/4906#issuecomment-338729681

release notes item added

The instances are labeled correctly and the install is failing. This is a new (as of today, Monday 23 Oct 2017) issue.

Right, it's a new check; all you have to do is set the openshift_clusterid variable as the error message indicates. Since we cannot accurately check this, we're relying on that variable as a signal that the admin has done the required steps.

@Scott, so we do NOT plan to gather the tags from the AWS metadata API for all node and master hosts and check whether the openshift_clusterid value equals the tag value described in https://bugzilla.redhat.com/show_bug.cgi?id=1491399#c4 ?

@johnny, correct, all we're doing is asserting that the admin sets the variable.

*** This bug has been marked as a duplicate of bug 1491399 ***
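
For context, a minimal sketch of the inventory setting the new check is asking for, assuming an AWS cluster whose instances already carry the kubernetes.io/cluster/... tag shown above. The clusterid value below is illustrative, not taken from the reporter's actual inventory; it should match whatever identifier was used when tagging the instances.

    # [OSEv3:vars] excerpt -- illustrative sketch only.
    # The 3.7 installer only asserts that openshift_clusterid is set when an
    # AWS cloudprovider is configured; it does not read the EC2 tags back to
    # verify the value, so the admin is responsible for keeping it consistent
    # with the Key=kubernetes.io/cluster/xxxx,Value=clusterid tags (see the
    # error message above) on every master and node instance.
    [OSEv3:vars]
    openshift_cloudprovider_kind=aws
    openshift_clusterid=aosqe-g9w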