Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1679272

Summary: 'Oh no! Something went wrong.' Message after first login into web console
Product: OpenShift Container Platform
Component: Management Console
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Status: CLOSED CURRENTRELEASE
Reporter: Simon <skordas>
Assignee: Samuel Padgett <spadgett>
QA Contact: Yadan Pei <yapei>
Docs Contact:
CC: aos-bugs, erich, jokerman, kirby.shabaga, mifiedle, mmccomas, spadgett, zhigwang
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-02-25 19:08:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Simon 2019-02-20 18:53:44 UTC
Description of problem:
After the first attempt to log in to the web console, the user gets the error message:
Oh no! Something went wrong. There was an error logging you in. Please log out and log in again.


Version-Release number of selected component (if applicable): 4.0


How reproducible:


Steps to Reproduce:
1. Use the Next-Gen installer to provision a fresh cluster.
2. Go to the web console (the link, username, and password are provided by the Next-Gen installer).
3. Try to log in to the web console.

Actual results:
* After login, the browser shows the error message: Oh no!...
* The browser then attempts to log in again and gets stuck in a loop, receiving status code 401.
* `oc logs` on the pods in the openshift-console project shows:
```
auth: unable to verify auth code with issuer: Post https://openshift-authentication-openshift-authentication.apps.mffiedler-68.qe.devcluster.openshift.com/oauth/token: x509: certificate signed by unknown authority
server: authentication failed: unauthenticated
```

Expected results:
The user can log in to the web console without problems.

Additional info:
After deleting the pod where the console is running, the user can log in to the web console without problems.

Comment 1 Mike Fiedler 2019-02-20 18:58:48 UTC
This is on 4.0.0-0.nightly-2019-02-19-195128 and was seen on a few previous builds. It seems to happen 75%-90% of the time after install. As mentioned, restarting the console pods fixes it.

Comment 2 Samuel Padgett 2019-02-20 20:29:36 UTC
It looks like the console reads the serviceaccount/ca.crt file too early and doesn't detect when it changes on the filesystem. We're seeing this now because the OAuth server recently moved behind a route.

Comment 3 Samuel Padgett 2019-02-20 20:39:31 UTC
I was able to reproduce this.

Comment 4 Samuel Padgett 2019-02-20 21:41:02 UTC
https://github.com/openshift/console/pull/1206

Comment 5 Simon 2019-02-25 19:05:10 UTC
Retested successfully!
Build: 4.0.0-0.nightly-2019-02-24-045124

Steps:
- Create a new cluster using the Next-Gen installer.
- Log in to the web console for the first time with the username and password provided by the installer.
- Login succeeded, as expected!

Log from pod:
2019/02/25 18:50:29 auth: oauth success, redirecting to: "https://console-openshift-console.apps.skordas-qe-25.qe.devcluster.openshift.com/"

Comment 6 Samuel Padgett 2019-02-25 19:08:46 UTC
Thanks for validating!

Comment 8 Zhigang Wang 2020-01-17 15:12:53 UTC
is there a plan to backport the fix to OCP 3.11?

Comment 9 kirby.shabaga 2020-01-23 21:09:12 UTC
I'm just seeing this now on one of our 3.11 clusters. 

(In reply to Zhigang Wang from comment #8)
> is there a plan to backport the fix to OCP 3.11?