Bug 1779745 - Cluster-autoscaler stuck on update, doesn't report status
Summary: Cluster-autoscaler stuck on update, doesn't report status
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.1.z
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.1.z
Assignee: Brad Ison
QA Contact: Jianwei Hou
Depends On: 1779743
Reported: 2019-12-04 15:54 UTC by Brad Ison
Modified: 2020-01-09 09:16 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1779640
Last Closed: 2020-01-09 09:16:20 UTC
Target Upstream Version:

Attachments

System ID Priority Status Summary Last Updated
Github openshift cluster-autoscaler-operator pull 127 None None None 2019-12-20 09:55:49 UTC
Red Hat Product Errata RHBA-2020:0010 None None None 2020-01-09 09:16:28 UTC

Description Brad Ison 2019-12-04 15:54:40 UTC
+++ This bug was initially created as a clone of Bug #1779640 +++

Description of problem:

4.3 nightly -> 4.3 nightly update failed:
`failed to initialize the cluster: Cluster operator cluster-autoscaler is still updating`

The clusteroperators list (https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/11999/artifacts/e2e-aws-upgrade/clusteroperators.json) shows the cluster-autoscaler ClusterOperator has no status at all:

        {
            "apiVersion": "config.openshift.io/v1",
            "kind": "ClusterOperator",
            "metadata": {
                "creationTimestamp": "2019-12-04T00:20:33Z",
                "generation": 1,
                "name": "cluster-autoscaler",
                "resourceVersion": "11333",
                "selfLink": "/apis/config.openshift.io/v1/clusteroperators/cluster-autoscaler",
                "uid": "ed891617-6cf2-4c78-9c0e-54d2e86af724"
            },
            "spec": {}
        }

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-12-03-211441 -> 4.3.0-0.nightly-2019-12-03-234445

How reproducible:

Additional info:

--- Additional comment from Brad Ison on 2019-12-04 15:49:10 UTC ---

The underlying issue here is that etcd was under load and taking multiple seconds to sync its log, which was causing leader elections and, I believe, some API writes to fail.

In addition, the cluster-autoscaler-operator was not reporting failures to apply updates to its ClusterOperator resource, and, worse, was not retrying when it failed to apply an "Available" status, so the CVO never saw the operator report success. The linked PR fixes that, and I'll make sure it's backported to previous releases.
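The behavior change described above, retrying a failed status write instead of silently dropping it, can be sketched in a few lines of Go. This is a simplified, hypothetical illustration, not the actual operator code: `makeFlakyApply` stands in for the client call that writes the "Available" condition, and simulates transient API-server errors of the kind seen when etcd is under load.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// makeFlakyApply returns a stand-in (hypothetical) for the operator's call
// that writes the "Available" condition to its ClusterOperator resource.
// It fails the first `failures` times to simulate transient API-server
// errors under etcd load, then succeeds.
func makeFlakyApply(failures int) func() error {
	calls := 0
	return func() error {
		calls++
		if calls <= failures {
			return errors.New("apiserver timeout")
		}
		return nil
	}
}

// retryApply keeps retrying the status write with a fixed backoff instead
// of giving up after one failed attempt. Returning the last error lets the
// caller surface the failure rather than leaving the CVO in the dark.
func retryApply(apply func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Two transient failures, then success: the update still lands.
	if err := retryApply(makeFlakyApply(2), 5, 10*time.Millisecond); err != nil {
		fmt.Println("status update failed:", err)
		return
	}
	fmt.Println("status update succeeded")
}
```

The real fix uses the operator's Kubernetes client machinery rather than a hand-rolled loop, but the principle is the same: a status write that can fail must be retried and its failure reported.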

Comment 2 Qin Ping 2019-12-24 05:24:38 UTC
Verified in 4.1.0-0.nightly-2019-12-23-102617:

$ oc get co cluster-autoscaler -oyaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-12-23T07:50:08Z"
  generation: 1
  name: cluster-autoscaler
  resourceVersion: "378275"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/cluster-autoscaler
  uid: d6ed856a-2558-11ea-ac20-0aeeb9ddd54e
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-12-23T07:50:08Z"
    message: at version 4.1.0-0.nightly-2019-12-23-102617
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-24T03:25:03Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-12-23T07:50:08Z"
    status: "False"
    type: Degraded
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-machine-api
    resource: namespaces
  versions:
  - name: operator
    version: 4.1.0-0.nightly-2019-12-23-102617

Comment 4 errata-xmlrpc 2020-01-09 09:16:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

