Description of problem:

A vSphere build cluster was updated from 4.7.6 to 4.7.9 and got stuck with the Node Tuning Operator not progressing to 4.7.9. The new pod got stuck on leader election:

```
I0508 13:55:12.095058       1 main.go:25] Go Version: go1.15.7
I0508 13:55:12.095391       1 main.go:26] Go OS/Arch: linux/amd64
I0508 13:55:12.095442       1 main.go:27] node-tuning Version: v4.7.0-202104250659.p0-0-g1d2e014-dirty
I0508 13:55:12.101977       1 controller.go:954] trying to become a leader
```

Version-Release number of selected component (if applicable): 4.7.6
Without properly investigating the must-gather yet, this reminds me of:
https://bugzilla.redhat.com/show_bug.cgi?id=1916865

Were there other operators also stuck during the upgrade?
(In reply to jmencak from comment #2)
> Without properly investigating the must-gather yet, this reminds me of:
> https://bugzilla.redhat.com/show_bug.cgi?id=1916865

Looks similar; I guess it could be closed as a dupe.

> Were there other operators also stuck during the upgrade?

Marketplace also got stuck similarly.
As other operators are stuck in a similar way, I'm closing this BZ as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1916865.

There is a Jira card to re-assess the current implementation of leader election in the NTO so that it does not rely on ConfigMaps and the garbage-collection mechanism, which does not seem reliable. Currently, the plan is to have the rewrite done in OCP 4.9.

*** This bug has been marked as a duplicate of bug 1916865 ***
Can you link the Jira card, or point us at something else where we can subscribe to hear about the fix?
(In reply to W. Trevor King from comment #5)
> Can you link the Jira card, or point us at something else where we can
> subscribe to hear about the fix?

Sorry, I meant to include it in comment 4: https://issues.redhat.com/browse/PSAP-314

Note this is for the NTO only; other operators will have to make a similar fix.
Deleted the node-tuning-operator-lock ConfigMap (created April 20) in the openshift-cluster-node-tuning-operator Namespace:

```
I0510 11:51:01.870067       1 controller.go:954] trying to become a leader
I0510 22:54:32.086704       1 controller.go:959] became a leader
I0510 22:54:32.092826       1 controller.go:966] starting Tuned controller
I0510 22:54:32.496588       1 controller.go:1017] started events processor/controller
```
Upgrade verified on 4.7.6 -> 4.7.0-0.nightly-2021-06-26-014854

```
$ oc get cm/node-tuning-operator-lock -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"cluster-node-tuning-operator-6bc548dcc5-n779h_cd1248a0-a728-4189-ab62-152e74f966d5","leaseDurationSeconds":30,"acquireTime":"2021-06-29T09:31:29Z","renewTime":"2021-06-29T10:19:23Z","leaderTransitions":1}'
  creationTimestamp: "2021-06-29T09:08:06Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: cluster-node-tuning-operator
    operation: Update
    time: "2021-06-29T09:08:06Z"
  name: node-tuning-operator-lock
  namespace: openshift-cluster-node-tuning-operator
  resourceVersion: "79282"
  selfLink: /api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps/node-tuning-operator-lock
  uid: 5d4fe7ed-02e5-4f75-b95b-dc00b7869bf2
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.7.19 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:2554