Description of problem:
bootstrap logs for this cluster
"Note the error in our operator pod. We need to wait for all the required, non-revision configmaps and secrets from starter.go in each operator. It's racy and sometimes we lose."
Version-Release number of selected component (if applicable):
Steps to Reproduce:
The cluster failed to bootstrap.
Hi Erica von Buelow:
Could you please give more info about the failure? I can't reproduce the issue with any of the payloads after 4.1.0-0.nightly-2019-04-25-121505. Thanks a lot.
First, verified that the above PR landed in payloads whose build is >= 4.1.0-0.nightly-2019-05-02-004418.
(In reply to Erica von Buelow from comment #0)
> It's racy and sometimes we lose.
Per this, I checked all 4.1.0-0.nightly payloads >= 4.1.0-0.nightly-2019-05-02-004418 in https://openshift-release.svc.ci.openshift.org/ as of now. Most are Accepted, meaning they did not hit this bug; only a few are "Rejected". I checked each "Rejected" payload's artifacts/e2e-aws/bootstrap/bootkube.service and did not see an error like the one in comment 1:
Error: error while checking pod status: timed out waiting for the condition
Apr 22 21:31:43 ip-10-0-1-229 bootkube.sh: Tearing down temporary bootstrap control plane
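Checking the rejected payloads' logs amounts to scanning each saved bootkube.service artifact for that failure signature. A rough sketch of that check is below; the sample log it creates is illustrative, and in practice you would point it at the downloaded artifacts/e2e-aws/bootstrap/bootkube.service file instead.

```shell
# Illustrative check for the bootstrap-timeout signature. Here we
# create a sample excerpt; in practice set LOG to the downloaded
# artifacts/e2e-aws/bootstrap/bootkube.service log.
LOG=bootkube-sample.log
printf 'Error: error while checking pod status: timed out waiting for the condition\n' > "$LOG"

if grep -q "timed out waiting for the condition" "$LOG"; then
    echo "payload hit the bug"
else
    echo "no bootstrap timeout in this log"
fi
```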
In our many daily cluster creations, the above error has not been hit either.
So moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.