Description of problem:

After installation completes, the customer can access the web console, but when they try to access the cluster console they are redirected to the login page, and after providing credentials they are returned to the login screen again.

Logs from the console pod:

2018/11/20 05:05:38 auth: unable to verify auth code with issuer: oauth2: cannot fetch token: 400 Bad Request Response: {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated

We received a similar case earlier, and at that time adding the lines below to master-config.yaml resolved the issue, but this time it does not work:

corsAllowedOrigins:
- (?i)//console\.apps.\.subdomain\.domain\.com(:|\z)

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11

How reproducible:
Not sure

Steps to Reproduce:
1.
2.
3.

Actual results:
The cluster console is not accessible from the web console.

Expected results:
The cluster console should be accessible.

Additional info:
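For reference, the corsAllowedOrigins stanza in /etc/origin/master/master-config.yaml usually looks roughly like the sketch below. The console hostname here is a placeholder and must be replaced with the cluster console's actual public route; the first two entries are part of the default configuration:

corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//console\.apps\.example\.com(:|\z)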
I'd check the following:

1. Check that the openshift-console OAuth client exists:

   $ oc describe oauthclient openshift-console

2. Check that the redirect URI in the OAuth client matches the public hostname of the cluster console.

3. Check that the OAuth secret from the OAuth client matches what you see when you run:

   $ oc get secret console-oauth-config -o=jsonpath='{.data.clientSecret}' -n openshift-console | base64 --decode

4. Check that the client ID in the console-config config map matches the OAuth client name. console-config.yaml should have a clientID key with the value "openshift-console":

   $ oc describe configmap console-config -n openshift-console

A sketch of what these resources typically look like follows below.

If everything matches, try deleting the pods to make sure they have loaded the latest secret and config map values. The pods will be recreated:

$ oc delete --all pods -n openshift-console

If things still don't work, can you attach the YAML for those resources?

$ oc get -o yaml oauthclient openshift-console
$ oc get -o yaml secret console-oauth-config -n openshift-console
$ oc get -o yaml configmap console-config -n openshift-console
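For reference, here is a rough sketch of what the relevant pieces tend to look like on a working 3.11 cluster. The hostname and secret value are placeholders, and field names other than redirectURIs, secret, and clientID may vary slightly between versions:

$ oc get oauthclient openshift-console -o yaml
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: openshift-console
redirectURIs:
- https://console.apps.example.com/auth/callback   # must match the cluster console's public URL
secret: <same value as the decoded clientSecret in the console-oauth-config secret>

And the auth stanza inside the console-config.yaml key of the console-config config map:

auth:
  clientID: openshift-console                       # must match the OAuthClient name
  clientSecretFile: /var/oauth-config/clientSecret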
Master logs taken at log level 4+ may be helpful. Search for these strings:

1. OAuth authentication error
2. osin
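One way to run that search on a master host, assuming the default 3.11 static-pod setup where the master-logs helper is available (and with the API log level already raised to 4 or higher):

$ /usr/local/bin/master-logs api api 2>&1 | grep -iE 'OAuth authentication error|osin'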
This problem can occur when the install playbook is run more than once with no changes to the console deployment. A new rollout isn't triggered, and the console pods don't pick up the new OAuth secret value. 3.11 fix in PR: https://github.com/openshift/openshift-ansible/pull/11089
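Until that fix is available, a possible manual workaround (besides deleting the pods as suggested above) is to force a new rollout by patching the console deployment's pod template with a throwaway annotation. The annotation key below is arbitrary, and the deployment name assumes the default created by the 3.11 console playbook:

$ oc patch deployment console -n openshift-console \
    -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-redeploy\":\"$(date -u +%Y%m%d%H%M%S)\"}}}}}"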
This does not reproduce with openshift/oc v3.11.82. The console switcher is working. Moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0326