We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context. The UpgradeBlocker flag has been added to this bug; it will be removed if the assessment indicates that this should not block upgrade edges.

Who is impacted?
- Customers upgrading from 4.2.99 to 4.3.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
- All customers upgrading from 4.2.z to 4.3.z fail approximately 10% of the time

What is the impact?
- Up to 2 minute disruption in edge routing
- Up to 90 seconds of API downtime
- etcd loses quorum and you have to restore from backup

How involved is remediation?
- Issue resolves itself after five minutes
- Admin uses oc to fix things
- Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities

Is this a regression?
- No, it's always been like this, we just never noticed
- Yes, from 4.2.z and 4.3.1

Depending on the answers to the above questions, we can remove the UpgradeBlocker keyword.
Who is impacted?
All customers that upgrade to 4.3.5.

What is the impact?
The service CA will be rotated on upgrade, which is intended to guard against CA expiry. Without the fix for this BZ, though, oauth-proxy will not automatically refresh to pick up the new key material. If not restarted before expiry of the pre-rotation CA, any attempt to communicate via oauth-proxy will result in TLS validation errors, which will break many of the monitoring components (see [1]). For 4.1 clusters upgraded to 4.3.5, this could occur as soon as May 14th, 2020.

How involved is remediation?
Manual restart of the monitoring components that use oauth-proxy.

Is this a regression?
No. Without automated rotation, manual rotation (including pod restarts) would be required anyway.

[1]: https://docs.google.com/document/d/1NB2wUf9e8XScfVM6jFBl8VuLYG6-3uV63eUpqmYE8Ts/edit
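As a hedged sketch of the manual remediation above: since the affected components are recreated by the cluster-monitoring operator, deleting their pods forces a restart that picks up the rotated service CA. The namespace default and the helper name below are illustrative assumptions, not part of this BZ; adjust for your cluster and verify you are logged in with cluster-admin privileges before running anything like this.

```shell
#!/usr/bin/env bash
# Sketch: force-restart the monitoring pods so their oauth-proxy
# sidecars reload the rotated service CA. The pods are managed by the
# cluster-monitoring operator and are recreated automatically.
set -euo pipefail

# Delete all pods in the given namespace (default: openshift-monitoring,
# an assumption based on the stock monitoring stack location).
restart_monitoring_pods() {
  local namespace="${1:-openshift-monitoring}"
  oc -n "$namespace" delete pods --all
}
```

Usage would be simply `restart_monitoring_pods`, then waiting for the operator to bring the pods back to Ready.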
Tested with 4.3.0-0.nightly-2020-03-19-052824, following test case OCP-27992; the issue does not occur.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0858
Removing UpgradeBlocker from this older bug, to remove it from the suspect queue described in [1]. If you feel that this bug still needs to be a suspect, please add the keyword again. [1]: https://github.com/openshift/enhancements/pull/475