+++ This bug was initially created as a clone of Bug #2028217 +++

Tomas and Vadim noticed that, when a Deployment manifest leaves 'replicas' unset, the CVO ignores the property. This means that cluster admins can scale those Deployments up or, worse, down to 0, and the CVO will happily continue on without stomping them.

Auditing 4.9.10:

  $ oc adm release extract --to manifests quay.io/openshift-release-dev/ocp-release:4.9.10-x86_64
  Extracted release payload from digest sha256:e1853d68d8ff093ec353ca7078b6b6df1533729688bb016b8208263ee7423f66 created at 2021-12-01T09:19:24Z
  $ for F in $(grep -rl 'kind: Deployment' manifests); do yaml2json < "${F}" | jq -r '.[] | select(.kind == "Deployment" and .spec.replicas == null).metadata | .namespace + " " + .name'; done | sort | uniq
  openshift-cluster-machine-approver machine-approver
  openshift-insights insights-operator
  openshift-network-operator network-operator

Those are all important operators, and I'm fairly confident that none of their maintainers expect "cluster admin scales them down to 0" to be a supported UX. We should have the CVO default Deployment replicas to 1 (the type's default [1]), so admins who decide they don't want a network operator pod, etc., have to use some more explicit, alarming API to remove those pods (e.g. setting spec.overrides in the ClusterVersion object to assume control of the resource themselves).

[1]: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec
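To illustrate the proposed behavior, here is a minimal sketch (not CVO code; the function names are hypothetical) of reconciling replicas so that an unset spec.replicas is treated as the DeploymentSpec default of 1 rather than "leave whatever is live on the cluster":

```python
# Hypothetical sketch of the proposed reconcile logic. Manifests are
# represented as plain dicts, as if parsed from the release payload YAML.

def desired_replicas(manifest: dict) -> int:
    """Replica count the CVO should enforce for a Deployment manifest."""
    replicas = manifest.get("spec", {}).get("replicas")
    # An unset replicas means "the API type's default of 1", not
    # "accept whatever value is currently on the cluster".
    return 1 if replicas is None else replicas

def needs_stomp(manifest: dict, live: dict) -> bool:
    """True when the live Deployment has drifted from the manifest's intent."""
    return live.get("spec", {}).get("replicas") != desired_replicas(manifest)
```

With this logic, an admin scaling network-operator down to 0 would be detected as drift (needs_stomp returns True) and reverted on the next sync, forcing them to use spec.overrides in ClusterVersion if they really want to take control of the resource.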
We'd fixed this in 4.9.12 [1], and nobody seems bothered by the lack of a 4.8 fix. 4.8 goes end-of-life on 2023-01-27 [2], so closing this DEFERRED.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2028602#c8
[2]: https://access.redhat.com/support/policy/updates/openshift#dates