The test "[Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]" has been failing about once a day in the e2e-azure-serial-4.5 job for the past week:

- https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.5/996
- https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.5/1014
- https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.5/1017
- https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.5/1031
- https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.5/1040

This may be a duplicate of an issue reported and closed last year (https://bugzilla.redhat.com/show_bug.cgi?id=1764414), but the correspondence seems to indicate that the flake was never resolved.

---

started: (0/147/271) "[Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]"
I0501 20:41:00.472273 4677 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
May 1 20:41:00.519: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
May 1 20:41:00.733: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 1 20:41:00.878: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 1 20:41:00.878: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
May 1 20:41:00.878: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 1 20:41:00.925: INFO: e2e test version: v1.18.0-rc.1
May 1 20:41:00.966: INFO: kube-apiserver version: v1.18.0-rc.1
May 1 20:41:01.010: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/client.go:135
STEP: Creating a kubernetes client
[BeforeEach] [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/client.go:111
May 1 20:41:01.438: INFO: configPath is now "/tmp/configfile518729969"
May 1 20:41:01.438: INFO: The user is now "e2e-test-request-headers-dgtl9-user"
May 1 20:41:01.438: INFO: Creating project "e2e-test-request-headers-dgtl9"
May 1 20:41:01.617: INFO: Waiting on permissions in project "e2e-test-request-headers-dgtl9" ...
May 1 20:41:01.665: INFO: Waiting for ServiceAccount "default" to be provisioned...
May 1 20:41:01.814: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
May 1 20:41:01.963: INFO: Waiting for ServiceAccount "builder" to be provisioned...
May 1 20:41:02.112: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
May 1 20:41:02.206: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
May 1 20:41:02.299: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
May 1 20:41:17.473: INFO: Project "e2e-test-request-headers-dgtl9" has been fully provisioned.
[It] test RequestHeaders IdP [Suite:openshift/conformance/serial]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/oauth/requestheaders.go:65
May 1 20:41:17.791: INFO: Running 'oc --namespace=e2e-test-request-headers-dgtl9 --kubeconfig=/tmp/configfile518729969 get --raw /.well-known/oauth-authorization-server'
[AfterEach] [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/client.go:133
STEP: Collecting events from namespace "e2e-test-request-headers-dgtl9".
STEP: Found 0 events.
May 1 20:43:19.902: INFO: POD NODE PHASE GRACE CONDITIONS
May 1 20:43:19.902: INFO:
May 1 20:43:19.947: INFO: skipping dumping cluster info - cluster too large
May 1 20:43:20.003: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-request-headers-dgtl9-user}, err: <nil>
May 1 20:43:20.056: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-request-headers-dgtl9}, err: <nil>
May 1 20:43:20.110: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens kLEJ0MqGRqCcgZDrkKRw8AAAAAAAAAAA}, err: <nil>
[AfterEach] [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/client.go:134
May 1 20:43:20.110: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-request-headers-dgtl9" for this suite.
May 1 20:43:20.222: INFO: Running AfterSuite actions on all nodes
May 1 20:43:20.223: INFO: Running AfterSuite actions on node 1

fail [github.com/openshift/origin/test/extended/oauth/requestheaders.go:370]: Unexpected error:
    <*errors.errorString | 0xc000210990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409