Created attachment 1751576 [details]
Worker Nodes status still shown after upgrade finished

Description of problem:
After the upgrade is finished, the Cluster Settings page still shows the Worker Nodes update status.

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2021-01-28-005023

How reproducible:
Always

Steps to Reproduce:
1. Launch a 4.7 cluster and upgrade it to a newer version

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-27-192705   True        False         150m    Cluster version is 4.7.0-0.nightly-2021-01-27-192705

$ oc adm upgrade --to 4.7.0-0.nightly-2021-01-28-005023 --force
warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures.
Updating to 4.7.0-0.nightly-2021-01-28-005023

[root@preserved-qe-ui-rhel-1 ~]# oc adm upgrade
Cluster version is 4.7.0-0.nightly-2021-01-27-

2. Monitor Cluster Operators, Master & Worker Nodes update progress from the Administration -> Cluster Settings page
3. Check Cluster Operators, Master & Worker Nodes update status after the upgrade completes

Actual results:
2. During the upgrade, the console shows Cluster Operators, Master & Worker Nodes update progress as a percentage
3. After the upgrade completes, the 'Worker Nodes' update status is still shown with a value of 0%

Expected results:
3. After the upgrade completes, the 'Worker Nodes' update status should no longer be shown.

Additional info:
Created attachment 1751577 [details] Cluster Operator/Master/Worker update status during upgrade
Ya Dan, did the worker nodes show as updated *before* the cluster reported the update as complete, and then switch back to not updated? It is possible for the cluster to be updated without the worker nodes, which is why the worker nodes are separated by a divider and can still be shown after the cluster update is complete (the help icon popover explains this -- see https://github.com/openshift/console/blob/master/frontend/public/components/cluster-settings/cluster-settings.tsx#L542).
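For illustration, a hypothetical sketch (not the actual console source) of why the worker section can keep changing after ClusterVersion already reports the update complete: worker progress is derived from the worker MachineConfigPool itself. machineCount and updatedMachineCount are real MachineConfigPool status fields; the names and the percentage math below are illustrative only.

// Hypothetical sketch, TypeScript. Worker progress comes from the worker
// MachineConfigPool, so it is decoupled from ClusterVersion's own conditions.
interface WorkerPoolStatus {
  machineCount: number;
  updatedMachineCount: number;
}

const workerUpdatePercent = (status: WorkerPoolStatus): number =>
  status.machineCount === 0
    ? 0
    : Math.round((status.updatedMachineCount / status.machineCount) * 100);

// Example: ClusterVersion may be Available=True / Progressing=False while the
// worker pool still reports only 1 of 3 machines updated (33% here).
console.log(workerUpdatePercent({ machineCount: 3, updatedMachineCount: 1 }));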
I was able to reproduce this by attempting to upgrade a 4.7.0-0.nightly-2021-01-28-081708 cluster to 4.7.0-0.nightly-2021-01-28-140123 and then reverting to 4.7.0-0.nightly-2021-01-28-081708. I suspect this is not a console issue but rather an issue where the worker MachineConfigPool does not get updated. The console checks the MCP Updating status condition lastTransitionTime and compares it to the startedTime of the ClusterVersion desired version. Reassigning to the ____ team to investigate.
Please disregard https://bugzilla.redhat.com/show_bug.cgi?id=1921529#c3. I inadvertently submitted a draft comment. Still investigating.
Reassigning to the Machine Config Operator team for investigation. It appears the cause of the bug is that the MachineConfigPool Updated status condition lastTransitionTime is not being updated, as there have been no changes in console that would have resulted in this regression. The console compares the MachineConfigPool Updated status condition lastTransitionTime [1] to the startedTime of the ClusterVersion status history entry for the desired version [2] to determine whether or not a MachineConfigPool has been updated. If the MachineConfigPool Updated status condition lastTransitionTime is not updated, the pool will never show as having updated.

[1] https://github.com/openshift/console/blob/master/frontend/public/components/cluster-settings/cluster-settings.tsx#L121-L126
[2] https://github.com/openshift/console/blob/master/frontend/public/components/cluster-settings/cluster-settings.tsx#L111-L119
Sigh. "Updated" in https://bugzilla.redhat.com/show_bug.cgi?id=1921529#c5 should be "Updating".
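To make the check concrete, a simplified sketch (not the actual console source) of the comparison described above, using the corrected "Updating" condition type. The type and helper names are illustrative, not the real cluster-settings.tsx exports; only the overall logic -- compare the Updating condition lastTransitionTime to the startedTime of the desired version's history entry -- is taken from the comments above.

// Illustrative TypeScript sketch of the comparison described in this bug.
type K8sCondition = { type: string; status: string; lastTransitionTime: string };

interface MachineConfigPool {
  status?: { conditions?: K8sCondition[] };
}

interface ClusterVersion {
  status?: {
    desired?: { version?: string };
    history?: { version: string; startedTime: string }[];
  };
}

// lastTransitionTime of the pool's "Updating" condition.
const getUpdatingTimeForMCP = (mcp: MachineConfigPool): string | undefined =>
  mcp.status?.conditions?.find((c) => c.type === 'Updating')?.lastTransitionTime;

// startedTime of the ClusterVersion history entry matching the desired version.
const getStartedTimeForDesiredVersion = (cv: ClusterVersion): string | undefined =>
  cv.status?.history?.find((h) => h.version === cv.status?.desired?.version)?.startedTime;

// The pool only counts as updated for the current upgrade if its "Updating"
// condition transitioned after the desired version's history entry started.
const isMCPUpdatedForCurrentUpgrade = (mcp: MachineConfigPool, cv: ClusterVersion): boolean => {
  const updatingTime = getUpdatingTimeForMCP(mcp);
  const startedTime = getStartedTimeForDesiredVersion(cv);
  return (
    !!updatingTime &&
    !!startedTime &&
    new Date(updatingTime).getTime() >= new Date(startedTime).getTime()
  );
};

With a check of this shape, if the MCO never bumps the Updating lastTransitionTime for the new upgrade, the comparison never passes and the console keeps rendering the worker pool as not updated (0%), which matches the reported symptom.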
*** This bug has been marked as a duplicate of bug 2050698 ***