test: "operator install authentication" — the authentication cluster operator is failing repeatedly, citing multiple reasons. periodic-ci-openshift-release-master-ocp-4.6-e2e-aws-proxy is failing frequently in CI; see the search results: https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=operator+install+authentication

Attaching a few logs:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ocp-4.6-e2e-aws-proxy/1306202351639465984
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ocp-4.6-e2e-aws-proxy/1306237263985774592
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ocp-4.6-e2e-aws-proxy/1306246450585276416
Moving to the auth team to pinpoint why the proxy is affecting their operator.
Leaving this open to discourage further bug filing by build watchers, but AFAICT this is non-actionable (the failures are effectively exploded clusters with 11+ failed operator installs) and not caused by auth. It should likely be closed when groomed for the next sprint.
Actually, all of the clusters from the logs in the first comment share the same symptoms: there are no worker nodes, therefore the default router does not get scheduled, and therefore the authentication operator (along with the others) goes Degraded. Not sure whether the MCO or the installer team would be the right one to move this to; choosing the former.
Missing worker nodes isn't an MCO issue (and the MCO logs look fine). I'll pass this to the Machine API Operator team to take a look; they can decide whether to keep it or pass it to the installer team.
*** This bug has been marked as a duplicate of bug 1875773 ***