When the local kube-apiserver becomes ready, two things happen: (1) the GCP LBs pick up the endpoint and start routing traffic to the IP, and (2) the gcp-routes service notices the local, green readyz and stops sending local traffic to that LB. The second step must happen *BEFORE* the first. Otherwise, local requests still go to the GCP LB that has already picked up the endpoint, and we risk blackholing 1/3 of the requests (because GCP has no hairpinning support). The reason is that the gcp-routes script polls every 5s (without inotify, which isn't installed in the image as we know), so in the worst case we end up with 1*2s + 1.9999s + 5s ≈ 9s until the gcp-routes script updates iptables (the 1.9999s because the readyz polling can happen at an unfortunate time, and the 5s for the poll loop to notice). Hence 9s > 6s for the LB. Hence, for roughly 3s we might lose 1/3 of the requests originating from the local host. Compare: https://github.com/openshift/installer/pull/3512/files#diff-3aaac4ae7d381237a540f05371931b76R10
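The worst-case window above can be sketched as back-of-the-envelope arithmetic (a hypothetical illustration of the numbers in this comment, not code from the actual gcp-routes script; the interval values are the ones assumed above):

```python
# Hypothetical worst-case timing for the race described above.
readyz_poll = 2.0        # one readyz poll interval (assumed ~2s)
unlucky_phase = 1.9999   # readyz poll lands just after the state change
gcp_routes_poll = 5.0    # gcp-routes script polls every 5s (no inotify)

# Total time until the gcp-routes script updates iptables:
worst_case = readyz_poll + unlucky_phase + gcp_routes_poll  # ~9s

lb_pickup = 6.0          # GCP LB picks up the endpoint after ~6s

# Window during which local requests can be blackholed:
blackhole_window = worst_case - lb_pickup                   # ~3s
print(round(worst_case), round(blackhole_window))
```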
Do we have a clone of this bug that is targeted for 4.5.0?
@Lala see: https://github.com/openshift/machine-config-operator/pull/1821 It's been merged into 4.5
@Stefan are the acceptance criteria for this the same as https://bugzilla.redhat.com/show_bug.cgi?id=1845416#c13 ?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196