Description of problem:
Instances tagged with the deprecated, but still supported, AWS clusterid tag fail to install.

TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ***
Monday 23 October 2017  14:34:56 +0000 (0:00:00.090)       0:00:16.985 ********
fatal: [ec2-34-212-83-246.us-west-2.compute.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "Ensure that the openshift_clusterid is set and that all infrastructure has the required tags.\nFor dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/xxxx,Value=clusterid where xxxx and clusterid are unique per cluster. In versions prior to 3.6, this was Key=KubernetesCluster,Value=clusterid.\nhttps://github.com/openshift/openshift-docs/blob/master/install_config/persistent_storage/dynamically_provisioning_pvs.adoc#available-dynamically-provisioned-plug-ins\n"}
fatal: [ec2-54-244-69-164.us-west-2.compute.amazonaws.com]: FAILED! => (identical message)
fatal: [ec2-54-200-185-224.us-west-2.compute.amazonaws.com]: FAILED! => (identical message)
fatal: [ec2-34-212-24-63.us-west-2.compute.amazonaws.com]: FAILED! => (identical message)

Version-Release number of the following components:
commit a20098d3ae28110dd4d38ed4aa3a89af6cb72a01 (HEAD -> master, origin/master, origin/HEAD)
Merge: 53a54ff4a 27f062260
Author: Scott Dodson <sdodson>
Date:   Mon Oct 23 10:17:54 2017 -0400

    Merge pull request #5844 from mtnbikenc/fix-1504515

    1504515 Correct host group for controller restart

How reproducible:
always, installing from openshift-ansible master HEAD

Steps to Reproduce:
1. Install on AWS with instances tagged with KubernetesCluster=my_cluster

Actual results:
Error above
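For reference, a minimal sketch of how the reproducer instances could be tagged with the AWS CLI (the instance ID and clusterid value here are placeholders):

  # pre-3.6 (deprecated, but supported) tag format that triggers this bug
  aws ec2 create-tags --resources i-0123456789abcdef0 \
      --tags Key=KubernetesCluster,Value=my_cluster

  # current tag format described in the error message
  aws ec2 create-tags --resources i-0123456789abcdef0 \
      --tags Key=kubernetes.io/cluster/my_cluster,Value=my_cluster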
Instances tagged with Key=kubernetes.io/cluster/xxxx are failing too. Not related to KubernetesCluster.

AWS console:
kubernetes.io/cluster/aosqe-g9w    svt

ansible output:
Play: Initialize host facts
Task: Ensure clusterid is set along with the cloudprovider
Message: Ensure that the openshift_clusterid is set and that all infrastructure has the required tags.
For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/xxxx,Value=clusterid where xxxx and clusterid are unique per cluster. In versions prior to 3.6, this was Key=KubernetesCluster,Value=clusterid.
The problem is that unless the installer is responsible for provisioning the instances, we have no reasonable way to ensure that instances are tagged appropriately, other than to ask that openshift_clusterid is set. I assume that if you set this variable the installation works as expected? If so, then this is working as designed.
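For context, the check in the openshift_sanitize_inventory role amounts to a simple assertion on inventory variables rather than an inspection of AWS; roughly like the following sketch (the task shape and the openshift_cloudprovider_kind condition are inferred from the output above, not copied from the upstream source):

  - name: Ensure clusterid is set along with the cloudprovider
    fail:
      msg: Ensure that the openshift_clusterid is set and that all infrastructure has the required tags.
    when:
    # fail only when the AWS cloud provider is enabled but no clusterid was supplied
    - openshift_cloudprovider_kind | default('') == 'aws'
    - openshift_clusterid is not defined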
We may need to re-word the error message to make it abundantly clear that's the nature of the check.
https://github.com/openshift/openshift-docs/issues/4906#issuecomment-338729681 release notes item added
The instances are labelled correctly and the install is failing. This is a new (as of today, Monday 23 Oct 2017) issue
Right, it's a new check; all you have to do is set the openshift_clusterid variable, as the error message indicates. Since we cannot accurately check this, we're relying on that as a signal that the admin has done the required steps.
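Concretely, that means a line like this in the inventory's [OSEv3:vars] section (the clusterid value is illustrative and must match the value used in the instance tags):

  [OSEv3:vars]
  openshift_cloudprovider_kind=aws
  # must match the clusterid in the kubernetes.io/cluster/<clusterid> (or legacy KubernetesCluster) tags
  openshift_clusterid=my_cluster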
@Scott, so we do NOT plan to gather the tags from the AWS metadata API for all node and master hosts and check whether the openshift_clusterid value equals the tag value, as described in https://bugzilla.redhat.com/show_bug.cgi?id=1491399#c4 ?
@johnny, correct, all we're doing is asserting that the admin sets the variable. *** This bug has been marked as a duplicate of bug 1491399 ***