Bug 1651632 - cluster console is not accessible from web console
Summary: cluster console is not accessible from web console
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 3.11.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Samuel Padgett
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-20 12:45 UTC by Arnab Ghosh
Modified: 2022-03-13 16:08 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, running the install playbook multiple times with no changes to the cluster console configuration could cause the cluster console login to stop working. The underlying problem has been fixed, and running the playbook more than once now correctly rolls out a new console deployment. Without the installer fix, the problem can be worked around by manually deleting the console pods: $ oc delete --all pods -n openshift-console
Clone Of:
Environment:
Last Closed: 2019-02-20 14:11:02 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2019:0326 (last updated 2019-02-20 14:11:09 UTC)

Description Arnab Ghosh 2018-11-20 12:45:10 UTC
Description of problem:
After installation completes, the customer can access the web console, but when they try to open the cluster console they are redirected to the login page, and after entering their credentials they are shown the login screen again.

Please find below the logs from the console pod:

2018/11/20 05:05:38 auth: unable to verify auth code with issuer: oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated
2018/11/20 05:05:39 server: authentication failed: unauthenticated

We received a similar case earlier, and at that time adding the lines below to master-config.yaml solved the issue, but this time it is not working.

corsAllowedOrigins:
-  (?i)//console\.apps.\.subdomain\.domain\.com(:|\z)
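
For reference, a sketch of how the earlier workaround was applied, with a placeholder hostname in place of the real console route: the entry goes into /etc/origin/master/master-config.yaml on each master, and the static control-plane pods are restarted afterwards (master-restart is assumed to be present, as on a standard 3.11 install):

corsAllowedOrigins:
- (?i)//console\.apps\.example\.com(:|\z)

  $ master-restart api
  $ master-restart controllers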

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11

How reproducible:
Not Sure

Steps to Reproduce:
1.
2.
3.

Actual results:
Cluster console is not accessible from web console

Expected results:
Cluster console should be accessible 

Additional info:

Comment 2 Samuel Padgett 2018-11-20 13:29:12 UTC
I'd check the following:


1. Check that the openshift-console OAuth client exists:

  $ oc describe oauthclient openshift-console


2. Check that the redirect URI in the OAuth client matches the public hostname of the cluster console.


3. Check that the OAuth secret from the OAuth client matches what you see when you run the command:

  $ oc get secret console-oauth-config -o=jsonpath='{.data.clientSecret}' -n openshift-console | base64 --decode


4. Check that the client ID in the console-config config map matches the OAuth client name. console-config.yaml should have a `clientID` key with the value "openshift-console" (a combined check for steps 2-4 is sketched after this list):

  $ oc describe configmap console-config -n openshift-console
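
For steps 2 through 4, here is a sketch that pulls the relevant values so they can be compared side by side; the jsonpath expressions assume the standard 3.11 resource layout (oauthclient .redirectURIs/.secret, config map data key console-config.yaml), so verify the field names against your cluster:

  # redirect URIs and secret recorded on the OAuth client
  $ oc get oauthclient openshift-console -o jsonpath='{.redirectURIs}{"\n"}{.secret}{"\n"}'

  # secret the console pods actually mount
  $ oc get secret console-oauth-config -n openshift-console -o jsonpath='{.data.clientSecret}' | base64 --decode; echo

  # client ID in the console configuration
  $ oc get configmap console-config -n openshift-console -o jsonpath='{.data.console-config\.yaml}' | grep clientID

The OAuth client name, the clientID in console-config.yaml, and the two copies of the secret must all line up.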


If everything matches, try deleting the pods to make sure they have loaded the latest secret and config map values. The pods will be recreated.

  $ oc delete --all pods -n openshift-console


If things still don't work, can you attach the YAML for those resources?

  $ oc get -o yaml oauthclient openshift-console
  $ oc get -o yaml secret console-oauth-config -n openshift-console
  $ oc get -o yaml configmap console-config -n openshift-console

Comment 3 Mo 2018-11-20 14:41:32 UTC
Master logs taken at log level 4 or higher may be helpful; search for these strings (a grep sketch follows the list):

1. OAuth authentication error
2. osin
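
A sketch of one way to search for those strings, assuming the master-logs helper from the 3.11 static-pod install is available on the master (if needed, raise the log level first via DEBUG_LOGLEVEL in /etc/origin/master/master.env; that path is the 3.11 default and may differ on your hosts):

  $ master-logs api api 2>&1 | grep -iE 'OAuth authentication error|osin'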

Comment 13 Samuel Padgett 2019-01-28 17:56:24 UTC
This problem can occur when the install playbook is run more than once with no changes to the console deployment. A new rollout isn't triggered, and the console pods don't pick up the new OAuth secret value. 3.11 fix in PR:

https://github.com/openshift/openshift-ansible/pull/11089
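
Until a fixed playbook is used, a sketch of forcing the console pods to pick up the new secret; the deployment name "console" is what the 3.11 playbooks create, and the annotation key/value below are arbitrary:

  # simplest workaround: recreate the pods
  $ oc delete --all pods -n openshift-console

  # or touch the pod template so the Deployment rolls out fresh pods
  $ oc patch deployment console -n openshift-console \
      --patch '{"spec":{"template":{"metadata":{"annotations":{"redeploy-timestamp":"manual-1"}}}}}'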

Comment 15 XiaochuanWang 2019-02-12 02:25:29 UTC
This does not reproduce on openshift/oc v3.11.82. Console switcher is working.
Moving it to Verified.

Comment 17 errata-xmlrpc 2019-02-20 14:11:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0326

