Bug 1779743 - Cluster-autoscaler stuck on update, doesn't report status
Summary: Cluster-autoscaler stuck on update, doesn't report status
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.2.z
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.2.z
Assignee: Brad Ison
QA Contact: Jianwei Hou
Depends On: 1779741
Blocks: 1779745
Reported: 2019-12-04 15:52 UTC by Brad Ison
Modified: 2020-01-07 17:55 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1779640
Last Closed: 2020-01-07 17:55:11 UTC
Target Upstream Version:


Links:
Github openshift cluster-autoscaler-operator pull 126 (closed): Bug 1779743: Don't suppress errors when reporting operator status (last updated 2020-01-23 12:28:05 UTC)
Red Hat Product Errata RHBA-2020:0014 (last updated 2020-01-07 17:55:25 UTC)

Description Brad Ison 2019-12-04 15:52:18 UTC
+++ This bug was initially created as a clone of Bug #1779640 +++

Description of problem:

4.3 nightly -> 4.3 nightly update failed:
`failed to initialize the cluster: Cluster operator cluster-autoscaler is still updating`

The clusteroperators list (https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/11999/artifacts/e2e-aws-upgrade/clusteroperators.json) shows that its status is empty:

            "apiVersion": "config.openshift.io/v1",
            "kind": "ClusterOperator",
            "metadata": {
                "creationTimestamp": "2019-12-04T00:20:33Z",
                "generation": 1,
                "name": "cluster-autoscaler",
                "resourceVersion": "11333",
                "selfLink": "/apis/config.openshift.io/v1/clusteroperators/cluster-autoscaler",
                "uid": "ed891617-6cf2-4c78-9c0e-54d2e86af724"
            },
            "spec": {}

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-12-03-211441 -> 4.3.0-0.nightly-2019-12-03-234445

How reproducible:

Additional info:

--- Additional comment from Brad Ison on 2019-12-04 15:49:10 UTC ---

The underlying issue here is that etcd was under load and taking multiple seconds to sync its log, which was causing leader elections and, I believe, some API writes to fail.

In addition, the cluster-autoscaler-operator was not reporting failures to apply updates to its ClusterOperator resource and, worse, was not retrying when it failed to apply an "Available" status, so the CVO was unaware of its success. The linked PR fixes that, and I'll make sure it's backported to previous releases.

Comment 2 Jianwei Hou 2019-12-20 04:22:18 UTC
Verified in 4.2.0-0.nightly-2019-12-20-002556

oc get co cluster-autoscaler -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-12-20T04:04:13Z"
  generation: 1
  name: cluster-autoscaler
  resourceVersion: "11305"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/cluster-autoscaler
  uid: c8a162ad-22dd-11ea-bfe2-02be5cf252dc
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-12-20T04:04:13Z"
    message: at version 4.2.0-0.nightly-2019-12-20-002556
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-20T04:04:13Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-12-20T04:04:13Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-12-20T04:04:13Z"
    status: "True"
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: machine.openshift.io
    name: ""
    namespace: openshift-machine-api
    resource: machineautoscalers
  - group: machine.openshift.io
    name: ""
    namespace: openshift-machine-api
    resource: clusterautoscalers
  - group: ""
    name: openshift-machine-api
    resource: namespaces
  versions:
  - name: operator
    version: 4.2.0-0.nightly-2019-12-20-002556

Comment 4 errata-xmlrpc 2020-01-07 17:55:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

