Bug 1468579
Summary: | Missing Kubernetes Cluster ID tag from openshift cluster resources | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Hemant Kumar <hekumar> |
Component: | Master | Assignee: | Robert Rati <rrati> |
Status: | CLOSED ERRATA | QA Contact: | DeShuai Ma <dma> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | |
Version: | 3.6.0 | CC: | aos-bugs, chaoyang, decarr, dyocum, eparis, jgoulding, jliggitt, jokerman, mmccomas, nraghava, sdodson, sjenning, wgordon, xtian |
Target Milestone: | --- | Keywords: | OpsBlocker |
Target Release: | 3.7.0 | |
Hardware: | All | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause:
Running multiple clusters in a single AZ in AWS requires that resources be tagged.
Consequence:
Multiple clusters in a single AZ will not work properly if resources are not tagged.
Fix:
The master controllers process will require a ClusterID on resources in order to run. Existing resources will need to be tagged manually.
Result:
Multiple clusters in one AZ will work properly once tagged.
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2017-11-28 22:00:15 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Hemant Kumar
2017-07-07 12:42:51 UTC
This bug needs to be verified on AWS with the 128 version; the latest version in the mirror repo is still 127, so we need to wait for the mirror repo to sync.

Verified on openshift v3.7.0-0.146.0:

   [root@ip-172-18-8-9 ~]# openshift version
   openshift v3.7.0-0.146.0
   kubernetes v1.7.6+a08f5eeb62
   etcd 3.2.1

Steps to verify:

1. Set up a cluster environment in AWS and enable the cloud provider.

2. Remove the "KubernetesCluster" tag from the instance, then restart atomic-openshift-master-controllers:

   # systemctl restart atomic-openshift-master-controllers.service

3. Check the atomic-openshift-master-controllers logs:

   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal atomic-openshift-master-controllers[69790]: E1010 05:07:30.537355 69790 tags.go:94] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly.
   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal atomic-openshift-master-controllers[69790]: W1010 05:07:30.537373 69790 tags.go:78] AWS cloud - no clusterID filtering applied for shared resources; do not run multiple clusters in this AZ.
   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal atomic-openshift-master-controllers[69790]: F1010 05:07:30.537423 69790 controllermanager.go:179] error building controller context: no ClusterID Found. A ClusterID is required for the cloud
   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal systemd[1]: atomic-openshift-master-controllers.service: main process exited, code=exited, status=255/n/a
   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal systemd[1]: Unit atomic-openshift-master-controllers.service entered failed state.
   Oct 10 05:07:30 ip-172-18-8-9.ec2.internal systemd[1]: atomic-openshift-master-controllers.service failed.

4. Set allow-untagged-cloud=true in /etc/origin/master/master-config.yaml:

   kubernetesMasterConfig:
     controllerArguments:
       allow-untagged-cloud:
       - "true"

5. Check the controller log again; there are warnings, but the controller starts successfully:

   Oct 10 05:12:30 ip-172-18-8-9 atomic-openshift-master-controllers: E1010 05:12:30.541450 70438 tags.go:94] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly.
   Oct 10 05:12:30 ip-172-18-8-9 atomic-openshift-master-controllers: W1010 05:12:30.541471 70438 tags.go:78] AWS cloud - no clusterID filtering applied for shared resources; do not run multiple clusters in this AZ.
   Oct 10 05:12:30 ip-172-18-8-9 atomic-openshift-master-controllers: W1010 05:12:30.541522 70438 controllermanager.go:422] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
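The Doc Text and the controller warning above both say that existing resources must be tagged manually, using either the legacy "KubernetesCluster" tag or the newer "kubernetes.io/cluster/..." tag. A minimal sketch of manual tagging with the AWS CLI follows; the cluster ID "mycluster", the instance and volume IDs, and the tag value "owned" are placeholders for illustration, not values taken from this bug:

   # New-style tag: key "kubernetes.io/cluster/<clusterid>"; IDs and cluster ID are placeholders.
   aws ec2 create-tags --resources i-0123456789abcdef0 vol-0123456789abcdef0 \
       --tags Key=kubernetes.io/cluster/mycluster,Value=owned

   # Legacy tag accepted by the same check: key "KubernetesCluster", value set to the cluster name.
   aws ec2 create-tags --resources i-0123456789abcdef0 \
       --tags Key=KubernetesCluster,Value=mycluster

   # Confirm the tags are present before restarting the controllers service.
   aws ec2 describe-tags --filters "Name=resource-id,Values=i-0123456789abcdef0"

Once the resources are tagged, restarting atomic-openshift-master-controllers.service should bring the controllers up without the allow-untagged-cloud override shown in step 4.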