This leads to failures like [1]:

  [sig-arch] Managed cluster should ensure platform components have system-* priority class associated [Suite:openshift/conformance/parallel]

complaining about the version pods.

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.8-e2e-aws-upgrade-rollback/1395138552144072704
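For anyone poking at this locally, a rough approximation of what that test checks is to list every pod in an openshift-* namespace whose priority class is not system-* (a quick sketch, not the test's exact logic):

  $ oc get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.priorityClassName}{"\n"}{end}' \
      | awk -F'\t' '$1 ~ /^openshift-/ && $3 !~ /^system-/'

Pods with an empty third column carry no priority class at all; those are the ones the test complains about.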
No docs needed: if we've ever had a cluster so resource-constrained that the version pod was evicted, I haven't heard about it, and you'd have plenty of other alarm bells going off by that point anyway.
Verified this bug with 4.8.0-0.nightly-2021-05-21-233425, and it PASSES.

Steps: install a cluster using 4.8.0-0.nightly-2021-05-21-233425 (the source cluster includes the bug fix), then trigger an upgrade (or downgrade) to another payload; in my testing, the target payload was 4.8.0-0.nightly-2021-05-21-200728.

  [root@preserve-jialiu-ansible ~]# oc get po -n openshift-cluster-version
  NAME                                        READY   STATUS        RESTARTS   AGE
  cluster-version-operator-6796748df6-w5js6   1/1     Terminating   0          14m
  version--p7k2v-cpmhx                        0/1     Completed     0          7s

  [root@preserve-jialiu-ansible ~]# oc get po -n openshift-cluster-version version--p7k2v-cpmhx -o yaml | grep -i priorityClassName
    priorityClassName: openshift-user-critical
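The same field can also be read directly with jsonpath instead of grepping the full YAML (assuming the same pod name from the run above):

  [root@preserve-jialiu-ansible ~]# oc get po -n openshift-cluster-version version--p7k2v-cpmhx -o jsonpath='{.spec.priorityClassName}{"\n"}'
  openshift-user-critical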
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438