Bug 2075831 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
Summary: Cluster upgrade.[sig-mco] Machine config pools complete upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.z
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On: 2075621
Blocks:
 
Reported: 2022-04-15 15:04 UTC by Maciej Szulik
Modified: 2022-07-29 06:58 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The new mechanism responsible for tracking a Job's pods uses finalizers. Consequence: In rare cases, some pods are not removed because their finalizers are not removed. Fix: Disable the beta JobTrackingWithFinalizers feature, which was enabled by default. Result: No pods should be left behind.
Clone Of: 2075621
Environment:
Last Closed: 2022-05-11 10:31:47 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 1244 0 None open [release-4.10] Bug 2075831: UPSTREAM: 109487: Disable JobTrackingWithFinalizers due to unresolved… 2022-04-26 08:46:55 UTC
Red Hat Knowledge Base (Solution) 6953111 0 None None None 2022-04-22 14:34:21 UTC
Red Hat Product Errata RHBA-2022:1690 0 None None None 2022-05-11 10:32:12 UTC

Description Maciej Szulik 2022-04-15 15:04:38 UTC
+++ This bug was initially created as a clone of Bug #2075621 +++

Cluster upgrade.[sig-mco] Machine config pools complete upgrade

is failing frequently in CI, see:
https://sippy.ci.openshift.org/sippy-ng/tests/4.11/analysis?test=Cluster%20upgrade.%5Bsig-mco%5D%20Machine%20config%20pools%20complete%20upgrade


Specific case: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release[…]e-from-stable-4.10-e2e-aws-ovn-upgrade/1514548265960345600

Pod collect-profiles-27498960-h5qfg in namespace openshift-operator-lifecycle-manager does not get deleted. It seems that the Job tracking finalizer is holding the pod.
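A quick way to spot pods stuck this way is to look for pods that carry a deletion timestamp but still have the Job tracking finalizer (batch.kubernetes.io/job-tracking) attached. Below is a minimal client-go sketch of that check; the kubeconfig handling and the hard-coded namespace are illustrative assumptions, not part of the fix shipped in this bug:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Finalizer added to pods by the JobTrackingWithFinalizers feature.
const jobTrackingFinalizer = "batch.kubernetes.io/job-tracking"

func main() {
	// Illustrative: load the default kubeconfig from the home directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("openshift-operator-lifecycle-manager").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A pod that is being deleted but still carries the tracking
		// finalizer is stuck in the way this bug describes.
		if p.DeletionTimestamp == nil {
			continue
		}
		for _, f := range p.Finalizers {
			if f == jobTrackingFinalizer {
				fmt.Printf("stuck pod: %s/%s\n", p.Namespace, p.Name)
			}
		}
	}
}

As a one-off mitigation an admin could patch the finalizer off such a pod, but the actual fix tracked here is to stop the gate from adding the finalizer in the first place.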

Upstream Kubernetes is pulling the feature due to this bug: https://github.com/kubernetes/kubernetes/pull/109487
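For context, "pulling the feature" amounts to flipping the gate's default back to off in the controller-manager's feature table. A sketch of that shape, following k8s.io/component-base/featuregate conventions (illustrative only, not the exact upstream diff):

// Sketch of a beta feature gate whose default is flipped back to off,
// in the style of kube-controller-manager's feature table
// (pkg/features/kube_features.go upstream).
package features

import (
	"k8s.io/component-base/featuregate"
)

const (
	// JobTrackingWithFinalizers tracks Job completion via pod finalizers
	// instead of relying on finished pods remaining in the cluster.
	JobTrackingWithFinalizers featuregate.Feature = "JobTrackingWithFinalizers"
)

var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	// Disabled by default again: finalizers were occasionally left on
	// pods, preventing their deletion (kubernetes/kubernetes#109485).
	JobTrackingWithFinalizers: {Default: false, PreRelease: featuregate.Beta},
}

With the default off, the Job controller falls back to the legacy tracking mode, which counts finished pods that remain in the cluster, so no finalizer is added to pods.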

upstream bug: https://github.com/kubernetes/kubernetes/issues/109485

This looks like a regression from 4.9 to 4.10.

See slack discussion for more details: https://coreos.slack.com/archives/C01CQA76KMX/p1649954805399049

--- Additional comment from W. Trevor King on 2022-04-14 19:44:29 CEST ---

Machine pools not completing updates can keep the overall OCP-core update from completing, so adding UpgradeBlocker to get this into the assessment pipeline [1].  We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1

[1]: https://github.com/openshift/enhancements/blob/master/enhancements/update/update-blocker-lifecycle/README.md

Comment 1 W. Trevor King 2022-04-15 15:45:50 UTC
The impact statement is getting worked out in the series-tip bug 2075621, so I'm dropping those flags from this bug.

Comment 8 errata-xmlrpc 2022-05-11 10:31:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.13 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1690

