Description of problem:

Bootstrap logs for this cluster: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/591/pull-ci-openshift-machine-config-operator-master-e2e-aws-op/1678/artifacts/e2e-aws-op/installer/bootstrap-logs.tar.gz

"Note the error in our operator pod. We need to wait for all the required, non-revision configmaps and secrets from starter.go in each operator. It's racy and sometimes we lose."

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Failed to bootstrap the cluster: https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/591/pull-ci-openshift-machine-config-operator-master-e2e-aws-op/1678/artifacts/e2e-aws-op/
https://github.com/openshift/cluster-kube-apiserver-operator/pull/434
Hi Erica von Buelow, could you please give more info about the failure? I can't reproduce the issue with any of the payloads after 4.1.0-0.nightly-2019-04-25-121505. Thanks a lot.
First, checked that the above PR landed in payloads whose build is >= 4.1.0-0.nightly-2019-05-02-004418.

(In reply to Erica von Buelow from comment #0)
> It's racy and sometimes we lose.

Per this, checked all 4.1.0-0.nightly payloads >= 4.1.0-0.nightly-2019-05-02-004418 in https://openshift-release.svc.ci.openshift.org/ as of now. Most are Accepted, meaning they did not hit this bug. There are only a few "Rejected" ones. Checked each of those "Rejected" payloads' artifacts/e2e-aws/bootstrap/bootkube.service and did not see an error like the one in comment 1:

...
Error: error while checking pod status: timed out waiting for the condition
Apr 22 21:31:43 ip-10-0-1-229 bootkube.sh[1523]: Tearing down temporary bootstrap control plane
...

The above error has not been hit in our many daily cluster creations either, so moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758