Bug 2028599 - Cluster-version operator does not default Deployment replicas to one
Summary: Cluster-version operator does not default Deployment replicas to one
Status: CLOSED DUPLICATE of bug 2028217
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 4.1.z
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.10.0
Assignee: W. Trevor King
QA Contact: Yang Yang
Depends On:
Reported: 2021-12-02 18:05 UTC by OpenShift BugZilla Robot
Modified: 2021-12-02 18:37 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2021-12-02 18:07:42 UTC
Target Upstream Version:


Description OpenShift BugZilla Robot 2021-12-02 18:05:38 UTC
+++ This bug was initially created as a clone of Bug #2028217 +++

Tomas and Vadim noticed that, when a Deployment manifest leaves 'replicas' unset, the CVO ignores the property.  This means that cluster admins can scale those Deployments up or, worse, down to 0, and the CVO will happily continue on without stomping them.  Auditing 4.9.10:

  $ oc adm release extract --to manifests quay.io/openshift-release-dev/ocp-release:4.9.10-x86_64
  Extracted release payload from digest sha256:e1853d68d8ff093ec353ca7078b6b6df1533729688bb016b8208263ee7423f66 created at 2021-12-01T09:19:24Z
  $ for F in $(grep -rl 'kind: Deployment' manifests); do yaml2json < "${F}" | jq -r '.[] | select(.kind == "Deployment" and .spec.replicas == null).metadata | .namespace + " " + .name'; done | sort | uniq
  openshift-cluster-machine-approver machine-approver
  openshift-insights insights-operator
  openshift-network-operator network-operator

Those are all important operators, and I'm fairly confident that none of their maintainers expect "cluster admin scales them down to 0" to be a supported UX.  We should have the CVO default Deployment replicas to 1 (the type's default [1]), so admins who decide they don't want a network operator pod, etc., have to use some more explicit, alarming API to remove those pods (e.g. setting spec.overrides in the ClusterVersion object to assume control of the resource themselves).
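For reference, the explicit opt-out mentioned above is the ClusterVersion overrides mechanism; taking ownership of the network operator Deployment, for example, would look roughly like this (a sketch, not a recommendation):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment
    group: apps
    namespace: openshift-network-operator
    name: network-operator
    unmanaged: true   # CVO stops reconciling this resource; admin owns it
```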

[1]: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec
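The proposed CVO behavior could be sketched as follows. This is a minimal illustration, not the actual CVO code, and it uses hypothetical stand-in structs rather than the real k8s.io/api/apps/v1 types; the point is just that an unset (nil) replicas field gets pinned to 1 before the manifest is reconciled, so admin-side scaling gets stomped:

```go
package main

import "fmt"

// Hypothetical stand-ins for the relevant fields of the real
// k8s.io/api/apps/v1 Deployment types.
type DeploymentSpec struct {
	Replicas *int32 // nil when the manifest leaves 'replicas' unset
}

type Deployment struct {
	Name string
	Spec DeploymentSpec
}

// defaultReplicas pins an unset replicas field to 1, the
// DeploymentSpec default, so the CVO would reconcile the Deployment
// back to one replica instead of ignoring the property.
func defaultReplicas(d *Deployment) {
	if d.Spec.Replicas == nil {
		one := int32(1)
		d.Spec.Replicas = &one
	}
}

func main() {
	// Manifest with 'replicas' unset, as in the audited operators.
	d := Deployment{Name: "network-operator"}
	defaultReplicas(&d)
	fmt.Println(*d.Spec.Replicas) // prints 1
}
```

An explicitly set replicas value would pass through unchanged; only the nil case is defaulted.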

Comment 1 W. Trevor King 2021-12-02 18:07:42 UTC
Bug 2028217 already targeted 4.10.0.

*** This bug has been marked as a duplicate of bug 2028217 ***
