Pod readiness is a leading indicator for the vast majority of Kubernetes-related failures that can impact a workload: readiness is a prerequisite for serving network traffic, encodes startup latency, and degrades when aggregate behavior regresses. Since we want to better measure disruption and readiness for both our own workloads (the OpenShift core) and customer workloads during upgrades, the before, during, and after state of readiness can provide a signal for total aggregate health that should, in general, remain high throughout an upgrade.

The proposal: via a recording rule, calculate a metric with one series per pending or running pod that reports 0 if the pod is unready and 1 if it is ready; terminal pods are excluded. Average that pod readiness separately for openshift-* and non-openshift-* workloads and report both averages to telemetry.

If this metric proves a sufficiently useful indicator for openshift-* workloads, we may decide to investigate clusters whose readiness dips immediately after an upgrade. Large clusters may report a lower average readiness, but they would also tend to surface widespread node issues faster. Once we have quantified baselines, we may also be able to institute a per-component error burn rate (unready pods are, as a rule, bad).
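A minimal sketch of what such recording rules could look like, assuming the standard kube-state-metrics series (kube_pod_status_ready, kube_pod_status_phase); the rule names here are hypothetical, not the shipped OpenShift rules:

```yaml
groups:
- name: pod-readiness.rules
  rules:
  # One series per pending or running pod: 1 if Ready, 0 otherwise.
  # Terminal (Succeeded/Failed) pods drop out of the phase filter,
  # so they are excluded from the average.
  - record: cluster:pod_ready
    expr: |
      max by (namespace, pod) (kube_pod_status_ready{condition="true"})
      * on (namespace, pod)
      max by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Running"} == 1)
  # Average readiness for openshift-* workloads, suitable for telemetry.
  - record: cluster:openshift_pod_ready:avg
    expr: avg(cluster:pod_ready{namespace=~"openshift-.*"})
  # Average readiness for everything else (customer workloads).
  - record: cluster:workload_pod_ready:avg
    expr: avg(cluster:pod_ready{namespace!~"openshift-.*"})
```

The `== 1` filter keeps only the series for the pod's current phase, and the `on (namespace, pod)` vector match restricts the readiness value to pending or running pods before averaging.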
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633