(In reply to ge liu from comment #0)
> upgrade 4.9.26 to 4.10.6...

Update failures are probably not installer bugs. Do you have a must-gather, ClusterVersion, etcd ClusterOperator, or etcd or cluster-version operator logs? It's not clear to me from comment 0 whether this should be an etcd bug or a cluster-version operator bug.
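If it helps, something like the following should collect those pieces; this is just a sketch of the usual commands, and the namespace/deployment names assume a stock cluster:

$ oc adm must-gather                                     # full must-gather archive
$ oc get clusterversion version -o yaml                  # ClusterVersion status and conditions
$ oc get clusteroperator etcd -o yaml                    # etcd ClusterOperator status
$ oc logs -n openshift-etcd-operator deployment/etcd-operator              # etcd operator logs
$ oc logs -n openshift-cluster-version deployment/cluster-version-operator # cluster-version operator logs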
From comment 3's must-gather:

$ yaml2json <cluster-scoped-resources/config.openshift.io/clusterversions/version.yaml | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
2022-04-06T05:06:36Z Available=True : Done applying 4.9.26
2022-04-06T05:55:30Z Failing=False : 
2022-04-06T05:27:06Z Progressing=True ClusterOperatorUpdating: Working towards 4.10.6: 205 of 771 done (26% complete), waiting on machine-api
2022-04-06T05:26:09Z RetrievedUpdates=True : 
2022-04-06T05:35:07Z Upgradeable=False KubeletMinorVersion_KubeletMinorVersionUnsupportedNextUpgrade: Cluster operator kube-apiserver should not be upgraded between minor versions: KubeletMinorVersionUpgradeable: Kubelet minor versions on 5 nodes will not be supported in the next OpenShift minor version upgrade.

So it looks like the CVO was briefly waiting on etcd to do the pre-minor-update backup dance, and now everything is going smoothly? [1] walks through the dance and points out some higher-latency steps, and [2] seems to be directly scoping that bug to Upgradeable-check latency, so I'll close this one as a dup of [2].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2061444#c4
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=2006611#c7

*** This bug has been marked as a duplicate of bug 2006611 ***
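Side note for anyone reproducing this check without a must-gather: the same condition dump can be pulled from a live cluster, assuming you are logged in with oc and have jq available:

$ oc get clusterversion version -o json | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'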
Yes, this bug is a duplicate of bug 2061444.