test: Auth test.Login test.logs out kubeadmin user is failing frequently in CI; see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=Auth+test%5C.Login+test%5C.logs+out+kubeadmin+user
https://search.ci.openshift.org/?search=logs+out+kubeadmin+user&maxAge=168h&context=1&type=bug%2Bjunit&name=4.6&maxMatches=5&maxBytes=20971520&groupBy=job

@sttts I think I remember seeing some logout chatter go by today. This seems worth looking at.
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-console-aws-4.6/1298326620267876352 as an example job
Tested the steps manually and haven't found any issue. There could be an issue with the new Cypress test framework that we integrated. @Dave, could you check the failed test case, please?
I think this might be due to https://bugzilla.redhat.com/show_bug.cgi?id=1869966; the PR fix was merged 22 hours ago: https://github.com/openshift/console/pull/6431. The auth failure is a Protractor test failure. I do see three flakes of this sort within the last 22 hours, but also merged PRs in the same period.
I believe this is a dup of 1869966 *** This bug has been marked as a duplicate of bug 1869966 ***
Reopening. We're still seeing this fairly often even after the fix for bug 1869966 merged. https://search.ci.openshift.org/?search=logs+out+kubeadmin&maxAge=48h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job
Created attachment 1712961 [details]: Failed to log out kubeadmin user. Seems like a race condition with dashboard loading.
Was able to reproduce this locally; please see attachment 1712961 [details].
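If this is a race with dashboard loading, the usual cause is the test clicking the logout control before the dashboard has finished rendering. The console tests rely on Cypress/Protractor built-in retries; the sketch below is only a plain-JavaScript illustration of the poll-until-ready idea, with hypothetical names, not the actual console test code:

```javascript
// Hypothetical helper: repeatedly evaluate an async condition until it
// returns true, or fail once the timeout expires. This mirrors what
// Cypress retry-ability does for you automatically.
async function waitFor(condition, { timeoutMs = 10000, intervalMs = 250 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

A logout test would then do something like `await waitFor(dashboardIsLoaded)` before clicking the user menu, instead of assuming the page is ready as soon as navigation completes.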
I was able to reproduce this on a 4.6.0-0.nightly-2020-08-02-134243 cluster. The fix (https://github.com/openshift/console/pull/6431) went in on 8/25. I am UNABLE to reproduce this on a 4.6.0-0.nightly-2020-08-27 cluster; I ran it 10+ times. However, ci/prow/e2e-gcp-console is still showing this failure. I did notice that finished.json for these failed jobs records a FAILURE result:

{"timestamp":1598631020,"passed":false,"metadata":{"infra-commit":"","job-version":"","metadata":null,"pod":"1d42b764-e940-11ea-b553-0a580a800cb9","repo":"operator-framework/operator-marketplace","repo-commit":"","repos":{"operator-framework/operator-marketplace":"master:626a2b965d00865f23aca40081fcb97ad8a22488,336:b5be12c54be8c932b1d8f98f6b0979c39e5dcef8"},"revision":"1","work-namespace":"ci-op-sbdb8s3t"},"result":"FAILURE","revision":"b5be12c54be8c932b1d8f98f6b0979c39e5dcef8"}
Increasing the severity as this is failing nearly 50% of jobs.
According to Sippy, the 4.6 pass rate dropped from 65.45% to 36.67%.
The pass rate right now is 45.71%. Any updates regarding the fix?
No similar errors have appeared since the bug-fix PR merged 8 hours ago, so this can be moved to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196