Matt, PTAL.
Same issue with logging on the ded-int-aws environment.
Hi all. I'm seeing this same issue on 3 different clusters, on versions 3.11.88 and 3.11.82. I'm not getting any errors in Chrome DevTools, but I hit the same OAuth login loop after logging out of the Kibana console. If I log out and try to log in again, the login loop occurs. However, when I open another (incognito) browser window, the Kibana console login works just fine. Is there any known workaround for this behavior?
Any chance these fixes [1] may resolve this issue?

[1] https://github.com/openshift/cluster-logging-operator/pull/127/files#diff-5ff1cbe659b99e0e73d8ba484249c27cR463
(In reply to Jeff Cantrill from comment #6)
> Any chance these fixes [1] may resolve this issue?
>
> [1] https://github.com/openshift/cluster-logging-operator/pull/127/files#diff-5ff1cbe659b99e0e73d8ba484249c27cR463

Hi Jeff. I've tested the login in 4.1.7, but when I log in, log out, and try to log in again, I get an ERR_TOO_MANY_REDIRECTS error in Chrome. I'll attach the .har files with the HTTP requests made for both the login and the logout.
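For anyone trying to confirm the redirect chain outside Chrome, here is a minimal Go sketch that follows redirects and prints each hop. It is only a sketch: the Kibana hostname below is a placeholder (use the real route from `oc get route -n openshift-logging`), and without the browser's existing session cookies the loop may not reproduce exactly, so the attached .har files remain the authoritative capture.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"net/http/cookiejar"
    )

    func main() {
    	// Placeholder hostname; replace with the cluster's real Kibana route.
    	kibanaURL := "https://kibana.apps.example.com/"

    	jar, err := cookiejar.New(nil)
    	if err != nil {
    		log.Fatal(err)
    	}

    	client := &http.Client{
    		Jar: jar,
    		Transport: &http.Transport{
    			// Only needed for clusters using self-signed certificates.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		CheckRedirect: func(req *http.Request, via []*http.Request) error {
    			// Print each hop; a login loop shows up as the same URLs repeating.
    			fmt.Printf("hop %d -> %s\n", len(via), req.URL)
    			if len(via) >= 10 {
    				return fmt.Errorf("stopped after 10 redirects (looks like a loop)")
    			}
    			return nil
    		},
    	}

    	resp, err := client.Get(kibanaURL)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println("final status:", resp.Status)
    }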
Pushing this off to 4.3 as this is not a blocker. It is possible to work around it by pasting the route back into the browser, which loads the page correctly.
@Ben, there is something wrong with the logout workflow.

* The link from Kibana uses the 'sign_in' endpoint, which is documented as also being valid for sign out: [1]

  /oauth/sign_in - the login page, which also doubles as a sign out page (it clears cookies)

* The code actually has a separate 'sign_out' endpoint that clears the cookies and then redirects you to '/':

  func (p *OAuthProxy) SignOut(rw http.ResponseWriter, req *http.Request) {
      p.ClearSessionCookie(rw, req)
      http.Redirect(rw, req, "/", 302)
  }

The result of this action is to take you back into the application, which loads successfully, but you are still authenticated, negating the 'sign out'.

[1] https://github.com/openshift/oauth-proxy#endpoint-documentation

Shouldn't the oauth-proxy component owner be responsible for fixing this? It is not isolated to cluster logging.
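For illustration only, here is a minimal Go sketch of the behavior described above, not the oauth-proxy source: the cookie name and the OAuth server logout URL are assumptions. It shows why clearing only the proxy cookie and redirecting to '/' leaves the user effectively signed in, and what redirecting to a post-logout page on the OAuth server itself would look like instead.

    package main

    import "net/http"

    // oauthLogoutURL is an assumption for this sketch: a post-logout page on the
    // OAuth server itself. Leave it empty to get the current behaviour (redirect
    // to "/", which drops the browser back into the app already authenticated).
    var oauthLogoutURL = "" // e.g. "https://oauth-openshift.apps.example.com/logout"

    // signOut mirrors the SignOut handler quoted above: it clears the proxy's
    // session cookie and then redirects. Because the OpenShift OAuth server still
    // holds its own session, redirecting to "/" lets the proxy log the user
    // straight back in, which is the behaviour described in this comment.
    func signOut(rw http.ResponseWriter, req *http.Request) {
    	// "_oauth_proxy" is oauth-proxy's default cookie name (assumed here).
    	http.SetCookie(rw, &http.Cookie{Name: "_oauth_proxy", Value: "", MaxAge: -1, Path: "/"})

    	target := "/" // current behaviour: straight back into the application
    	if oauthLogoutURL != "" {
    		target = oauthLogoutURL // idea: end the upstream OAuth session as well
    	}
    	http.Redirect(rw, req, target, http.StatusFound)
    }

    func main() {
    	// Toy server to exercise the handler; the real endpoint is served by
    	// oauth-proxy, not by this sketch.
    	http.HandleFunc("/oauth/sign_out", signOut)
    	http.ListenAndServe(":8080", nil)
    }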
Verified with ose-cluster-logging-operator-v4.3.0-201911081316
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062