Bug 1938467

Summary: The default cluster-autoscaler should get default CPU and memory requests if the user omits them
Product: OpenShift Container Platform
Component: Cloud Compute
Sub component: Cluster Autoscaler
Version: 4.8
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED ERRATA
Type: Bug
Reporter: Clayton Coleman <ccoleman>
Assignee: Danil Grigorev <dgrigore>
QA Contact: sunzhaohua <zhsun>
CC: aos-bugs, dgrigore, nelluri, wking
Last Closed: 2021-07-27 22:53:17 UTC

Description Clayton Coleman 2021-03-13 17:39:43 UTC
All payload components should declare resource requests reflecting a reasonable minimum CPU and p90 memory usage:

https://github.com/openshift/enhancements/blob/master/CONVENTIONS.md#resources-and-limits

The cluster-autoscaler-default deployment currently ships without any CPU or memory requests, which leaves the autoscaler pod in the BestEffort QoS class. The operator must apply reasonable defaults when the autoscaler is created, if the user does not provide values.

This was flagged by the new e2e test that gates on components lacking resource requests and enforces the resource conventions.
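
For reference, a minimal sketch of the requests stanza the deployment should end up with, assuming the 10m CPU / 20Mi memory defaults that were eventually shipped (see comment 7); the container name here is only illustrative:

containers:
- name: cluster-autoscaler   # container name assumed for illustration
  resources:
    requests:
      cpu: 10m      # reasonable minimum CPU per the conventions
      memory: 20Mi  # approximate p90 memory usage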

Comment 2 sunzhaohua 2021-04-16 03:41:50 UTC
Failed to verify; the autoscaler pod is stuck in Pending status.

clusterversion: 4.8.0-0.nightly-2021-04-15-202330

Steps:
1. Create a ClusterAutoscaler:
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
  name: "default"
spec:
  resourceLimits:
    maxNodesTotal: 10
  scaleDown:
    enabled: true
    delayAfterAdd: 10s
    delayAfterDelete: 10s
    delayAfterFailure: 10s
    unneededTime: 10s
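
Apply the manifest (assuming it is saved as clusterautoscaler.yaml):
$ oc apply -f clusterautoscaler.yaml
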
2. Check the autoscaler pods; the newly rolled-out pod is stuck in Pending while the existing one keeps running:
$ oc get po
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-autoscaler-default-56659849fc-b49rk    0/1     Pending   0          5m46s
cluster-autoscaler-default-74f84d9957-dp8xk    1/1     Running   0          5m37s
cluster-autoscaler-operator-844d8f7b96-srq2v   2/2     Running   0          8m28s
cluster-baremetal-operator-84f7c56bbc-j9v52    2/2     Running   0          79m
machine-api-controllers-685988fb5d-2k5tf       7/7     Running   0          85m
machine-api-operator-6cbcdcd4cd-l27jd          2/2     Running   0          85m

$ oc describe po cluster-autoscaler-default-56659849fc-b49rk
    Requests:
      cpu:     20Mi
      memory:  10m


The units appear to be swapped; presumably this should be:
        cpu: 10m
        memory: 20Mi
A CPU request of 20Mi parses as roughly 21 million cores, which no node can satisfy, hence the Pending pod above.
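
To see the requests the operator actually rendered, one option (assuming the deployment lives in the openshift-machine-api namespace alongside the other machine-api pods listed above):
$ oc -n openshift-machine-api get deployment cluster-autoscaler-default -o jsonpath='{.spec.template.spec.containers[0].resources.requests}'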

Comment 3 sunzhaohua 2021-04-16 03:42:53 UTC
$ oc describe po cluster-autoscaler-default-56659849fc-b49rk
Events:
  Type     Reason             Age                   From                Message
  ----     ------             ----                  ----                -------
  Warning  FailedScheduling   6m12s                 default-scheduler   0/6 nodes are available: 3 Insufficient cpu, 3 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling   6m11s                 default-scheduler   0/6 nodes are available: 3 Insufficient cpu, 3 node(s) didn't match Pod's node affinity/selector.

Comment 5 W. Trevor King 2021-04-17 02:40:36 UTC
Moving back to POST so I can attach the PR removing the test-suite exception too.

Comment 7 sunzhaohua 2021-04-21 09:27:49 UTC
Verified
clusterversion: 4.8.0-0.nightly-2021-04-20-195442

# oc get po
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-autoscaler-default-56fc5bc88c-zqqtf    1/1     Running   0          2m41s

    resources:
      requests:
        cpu: 10m
        memory: 20Mi
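
With requests set and no limits, the pod should now be in the Burstable QoS class rather than BestEffort; a quick check (pod name from the listing above), which should print Burstable:
# oc get po cluster-autoscaler-default-56fc5bc88c-zqqtf -o jsonpath='{.status.qosClass}'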

Comment 10 errata-xmlrpc 2021-07-27 22:53:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438