Bug 1651632
| Summary: | cluster console is not accessible from web console | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Arnab Ghosh <arghosh> |
| Component: | Management Console | Assignee: | Samuel Padgett <spadgett> |
| Status: | CLOSED ERRATA | QA Contact: | Yadan Pei <yapei> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.11.0 | CC: | adeshpan, aos-bugs, arghosh, jokerman, misalunk, mkhan, mmccomas, rekhan, spadgett, tkimura, xiaocwan |
| Target Milestone: | --- | | |
| Target Release: | 3.11.z | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-02-20 14:11:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text:

Previously, running the install playbook multiple times with no changes to the cluster console configuration could cause the cluster console login to stop working. The underlying problem has been fixed, and running the playbook more than once now correctly rolls out a new console deployment.

Without the installer fix, this problem can be worked around by manually deleting the console pods using the command:

$ oc delete --all pods -n openshift-console
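Whether the replacement pods actually come up after deleting them (or after a playbook-triggered rollout) can be checked with something like the following; the deployment name `console` is what the 3.11 installer creates, but verify it in your cluster:

$ oc rollout status deployment console -n openshift-console
$ oc get pods -n openshift-console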
I'd check the following:
1. Check that the openshift-console OAuth client exists:
$ oc describe oauthclient openshift-console
2. Check that the redirect URI in the OAuth client matches the public hostname of the cluster console.
3. Check that the OAuth secret from the OAuth client matches what you see when you run the command:
$ oc get secret console-oauth-config -o=jsonpath='{.data.clientSecret}' -n openshift-console | base64 --decode
4. Check that the client ID in the console-config config map matches the OAuth client name. console-config.yaml should have a key `clientID` with the value "openshift-console" (a quick way to compare steps 3 and 4 is shown after this list):
$ oc describe configmap console-config -n openshift-console
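For steps 3 and 4, the values can also be pulled out directly and compared by eye. This is just a sketch; it relies on the OAuthClient object storing its secret in the top-level `secret` field:

$ oc get oauthclient openshift-console -o jsonpath='{.secret}'; echo
$ oc get secret console-oauth-config -n openshift-console -o jsonpath='{.data.clientSecret}' | base64 --decode; echo
$ oc get configmap console-config -n openshift-console -o yaml | grep clientID

The first two outputs should be identical, and the last should report clientID: openshift-console.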
If everything matches, try deleting the pods to make sure they have loaded the latest secret and config map values. The pods will be recreated.
$ oc delete --all pods -n openshift-console
If things still don't work, can you attach the YAML for those resources?
$ oc get -o yaml oauthclient openshift-console
$ oc get -o yaml secret console-oauth-config -n openshift-console
$ oc get -o yaml configmap console-config -n openshift-console
Master logs taken at log level 4+ may be helpful; search for these strings:
1. OAuth authentication error
2. osin

This problem can occur when the install playbook is run more than once with no changes to the console deployment. A new rollout isn't triggered, and the console pods don't pick up the new OAuth secret value. 3.11 fix in PR: https://github.com/openshift/openshift-ansible/pull/11089

This does not reproduce on openshift/oc v3.11.82. Console switcher is working. Moving it to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0326
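Regarding the log-level-4 search suggested above: one way to grep the master API logs for those strings on a 3.11 control plane is sketched below. It assumes the static-pod setup and the master-logs helper that the installer places on each master host:

$ /usr/local/bin/master-logs api api 2>&1 | grep -iE 'oauth authentication error|osin'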
Description of problem: After installation completed, the customer is able to access the web console, but when he tries to access the cluster console he is redirected to the login page and, after providing credentials, gets the login screen back.

Please find below logs from the console pod:

2018/11/20 05:05:38 auth: unable to verify auth code with issuer: oauth2: cannot fetch token: 400 Bad Request Response: {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated

We received a similar case earlier, and at that time adding the lines below to master-config.yml solved the issue, but this time it is not working:

corsAllowedOrigins:
- (?i)//console\.apps.\.subdomain\.domain\.com(:|\z)

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11

How reproducible:
Not sure

Steps to Reproduce:
1.
2.
3.

Actual results:
Cluster console is not accessible from the web console

Expected results:
Cluster console should be accessible

Additional info:
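For reference, a minimal sketch of how that stanza typically sits in /etc/origin/master/master-config.yaml on 3.11; the loopback entries and the console hostname shown here are illustrative assumptions and must be adjusted to match this cluster's console route:

corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//console\.apps\.subdomain\.domain\.com(:|\z)

After editing master-config.yaml, the master API has to be restarted for the change to take effect; on a 3.11 static-pod control plane this is typically done on each master host (an assumption about this environment) with:

$ master-restart api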