Bug 1679272 - 'Oh no! Something went wrong.' message after first login to the web console
Summary: 'Oh no! Something went wrong.' message after first login to the web console
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Samuel Padgett
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-20 18:53 UTC by Simon
Modified: 2023-03-24 14:35 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-25 19:08:46 UTC
Target Upstream Version:
Embargoed:



Description Simon 2019-02-20 18:53:44 UTC
Description of problem:
After the first attempt to log in to the web console, the user gets the error message:
Oh no! Something went wrong. There was an error logging you in. Please log out and log in again.


Version-Release number of selected component (if applicable): 4.0


How reproducible:


Steps to Reproduce:
1. Use the Next-Gen installer to provision a fresh cluster.
2. Go to the web console (link, username, and password are provided by the Next-Gen installer).
3. Try to log in to the web console.

Actual results:
* After login, the browser shows the error message: Oh no!...
* The browser keeps attempting to log in again, stuck in a loop and getting a 401 status code back.
* oc logs <pods in openshift-console project>
```
auth: unable to verify auth code with issuer: Post https://openshift-authentication-openshift-authentication.apps.mffiedler-68.qe.devcluster.openshift.com/oauth/token: x509: certificate signed by unknown authority
server: authentication failed: unauthenticated
```
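The x509 line is Go's standard verification failure: the trust pool the console is using does not contain the CA that signed the OAuth route's serving certificate. For illustration, here is a self-contained sketch of that failure mode (hypothetical names and a generated CA; not console code):

```
// Sketch: Go reports "x509: certificate signed by unknown authority"
// whenever a certificate chain cannot be anchored in the verifier's
// root pool -- e.g. when the pool was loaded before the router CA existed.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func mustCert(tmpl, parent *x509.Certificate, pub, priv any) *x509.Certificate {
	der, err := x509.CreateCertificate(rand.Reader, tmpl, parent, pub, priv)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	now := time.Now()

	// Stand-in for the CA that signs the OAuth route's serving certificate.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "router-ca"},
		NotBefore:             now,
		NotAfter:              now.Add(time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	ca := mustCert(caTmpl, caTmpl, &caKey.PublicKey, caKey)

	// Serving certificate for the OAuth route, signed by that CA.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		DNSNames:     []string{"oauth-openshift.apps.example.com"},
		NotBefore:    now,
		NotAfter:     now.Add(time.Hour),
	}
	leaf := mustCert(leafTmpl, ca, &leafKey.PublicKey, caKey)

	// Stale pool (the CA was never loaded): same error as in the console log.
	_, err := leaf.Verify(x509.VerifyOptions{Roots: x509.NewCertPool()})
	fmt.Println(err) // x509: certificate signed by unknown authority

	// Pool that includes the CA: verification succeeds.
	pool := x509.NewCertPool()
	pool.AddCert(ca)
	_, err = leaf.Verify(x509.VerifyOptions{Roots: pool})
	fmt.Println(err) // <nil>
}
```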

Expected results:
The user can log in to the web console without problems.

Additional info:
After deleting the pod where the console is running, the user can log in to the web console without problems.
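For reference, the workaround amounts to deleting the console pods so their replacements re-read the CA, for example (the label selector is an assumption; adjust it to match the console pods):
```
oc delete pods -n openshift-console -l app=console
```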

Comment 1 Mike Fiedler 2019-02-20 18:58:48 UTC
This is on 4.0.0-0.nightly-2019-02-19-195128 and was seen on a few previous builds. It seems to happen 75%-90% of the time after install. As mentioned, restarting the console pods fixes it.

Comment 2 Samuel Padgett 2019-02-20 20:29:36 UTC
It looks like the console reads the serviceaccount/ca.crt file too early and doesn't detect when it changes on the filesystem. We're seeing this now because the OAuth server recently moved behind a route.
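For illustration, the problematic pattern is roughly the following (a minimal sketch with hypothetical names, not the console's actual code): the CA bundle is read once at startup, so the resulting trust pool never reflects later changes to the mounted file.

```
// Sketch of the read-once pattern: the CA bundle is loaded a single time
// at startup, so an http.Client built from it keeps trusting a stale pool
// even after the mounted ca.crt is updated on disk.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
)

// newOAuthClient is hypothetical; it mirrors reading the service account CA
// once when the process starts.
func newOAuthClient(caFile string) (*http.Client, error) {
	pem, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(pem)

	// The pool is captured here for the life of the client. If ca.crt is
	// rewritten after startup (e.g. once the router CA is published),
	// requests to the OAuth route keep failing with
	// "x509: certificate signed by unknown authority".
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	client, err := newOAuthClient("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
	if err != nil {
		panic(err)
	}
	_ = client // used for the token exchange against the OAuth route
}
```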

Comment 3 Samuel Padgett 2019-02-20 20:39:31 UTC
I was able to reproduce this.

Comment 4 Samuel Padgett 2019-02-20 21:41:02 UTC
https://github.com/openshift/console/pull/1206
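For context, one general way to handle this kind of staleness is to re-read the CA file and build each TLS connection against the current pool. A sketch only, not necessarily what the PR above does:

```
// Sketch: poll the mounted CA file and swap in a fresh CertPool when it
// changes, so handshakes after a rotation (or late arrival) of the CA
// succeed without a pod restart.
package main

import (
	"bytes"
	"context"
	"crypto/tls"
	"crypto/x509"
	"net"
	"net/http"
	"os"
	"sync"
	"time"
)

type reloadingCA struct {
	mu   sync.RWMutex
	raw  []byte
	pool *x509.CertPool
}

// watch polls the file and rebuilds the pool whenever the bytes change.
func (r *reloadingCA) watch(path string, interval time.Duration) {
	for {
		if pem, err := os.ReadFile(path); err == nil {
			r.mu.Lock()
			if !bytes.Equal(pem, r.raw) {
				pool := x509.NewCertPool()
				pool.AppendCertsFromPEM(pem)
				r.raw, r.pool = pem, pool
			}
			r.mu.Unlock()
		}
		time.Sleep(interval)
	}
}

func (r *reloadingCA) current() *x509.CertPool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.pool
}

func main() {
	ca := &reloadingCA{pool: x509.NewCertPool()}
	go ca.watch("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", 10*time.Second)

	client := &http.Client{Transport: &http.Transport{
		// Build each TLS connection against the *current* pool rather than
		// one captured at startup.
		DialTLSContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			host, _, err := net.SplitHostPort(addr)
			if err != nil {
				return nil, err
			}
			d := tls.Dialer{Config: &tls.Config{RootCAs: ca.current(), ServerName: host}}
			return d.DialContext(ctx, network, addr)
		},
	}}
	_ = client // used for requests to the OAuth token endpoint
}
```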

Comment 5 Simon 2019-02-25 19:05:10 UTC
Retested successfully!
Build: 4.0.0-0.nightly-2019-02-24-045124

Steps:
- Create a new cluster using the Next-Gen installer.
- Perform a first login to the web console with the username and password delivered by the installer.
- Login succeeded, as expected!

Log from pod:
2019/02/25 18:50:29 auth: oauth success, redirecting to: "https://console-openshift-console.apps.skordas-qe-25.qe.devcluster.openshift.com/"

Comment 6 Samuel Padgett 2019-02-25 19:08:46 UTC
Thanks for validating!

Comment 8 Zhigang Wang 2020-01-17 15:12:53 UTC
Is there a plan to backport the fix to OCP 3.11?

Comment 9 kirby.shabaga 2020-01-23 21:09:12 UTC
I'm just seeing this now on one of our 3.11 clusters. 

(In reply to Zhigang Wang from comment #8)
> Is there a plan to backport the fix to OCP 3.11?

