Bug 1846177 - Unable to upgrade OCP4.3.19 to OCP4.4 in disconnected env: CVO enters reconciling mode without applying any manifests in update mode
Summary: Unable to upgrade OCP4.3.19 to OCP4.4 in disconnected env: CVO enters reconciling mode without applying any manifests in update mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.2.z
Assignee: W. Trevor King
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On: 1844117
Blocks:
 
Reported: 2020-06-11 00:49 UTC by OpenShift BugZilla Robot
Modified: 2020-07-01 16:08 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The Cluster Version Operator had a race where it would consider a timed-out update reconciliation cycle as a successful update. The race was very rare, except for restricted-network clusters where the operator timed out attempting to fetch release image signatures. Consequence: The Cluster Version Operator would enter its shuffled-manifest reconciliation mode, possibly breaking the cluster if the manifests were applied in an order that the components could not handle. Fix: The Cluster Version Operator now treats those timed-out updates as failures. Result: The Cluster Version Operator no longer enters reconciling mode before the update succeeds.
Clone Of:
Environment:
Last Closed: 2020-07-01 16:08:23 UTC
Target Upstream Version:
Embargoed:




Links:
GitHub: openshift/cluster-version-operator pull 382 (closed) - Bug 1846177: pkg/cvo/sync_worker: Do not treat "All errors were context errors..." as success (last updated 2020-06-26 17:08:24 UTC)
Red Hat Product Errata: RHBA-2020:2589 (last updated 2020-07-01 16:08:29 UTC)

Description OpenShift BugZilla Robot 2020-06-11 00:49:06 UTC
+++ This bug was initially created as a clone of Bug #1844117 +++

+++ This bug was initially created as a clone of Bug #1843732 +++

+++ This bug was initially created as a clone of Bug #1843526 +++

+++ This bug was initially created as a clone of Bug #1838497 +++

--- Additional comment from W. Trevor King on 2020-05-28 08:39:16 UTC ---

Ok, we have a better lead on this issue after looking at some local reproducers and then taking a closer look at the CVO logs attached in comment 6.  From the attached unsuccessful_oc_logs_cluster_version_operator.log:

I0524 16:02:50.774740       1 start.go:19] ClusterVersionOperator v4.3.19-202005041055-dirty
...
I0524 16:02:51.342076       1 cvo.go:332] Starting ClusterVersionOperator with minimum reconcile period 2m52.525702462s
...
I0524 16:02:51.444381       1 sync_worker.go:471] Running sync quay.io/openshift-release-dev/ocp-release@sha256:039a4ef7c128a049ccf916a1d68ce93e8f5494b44d5a75df60c85e9e7191dacc (force=true) on generation 2 in state Updating at attempt 0
...
I0524 16:08:36.950963       1 sync_worker.go:539] Payload loaded from quay.io/openshift-release-dev/ocp-release@sha256:039a4ef7c128a049ccf916a1d68ce93e8f5494b44d5a75df60c85e9e7191dacc with hash h110xMINmng=
...
I0524 16:08:36.953283       1 task_graph.go:611] Result of work: [update was cancelled at 0 of 573]
...
I0524 16:11:29.479262       1 sync_worker.go:471] Running sync quay.io/openshift-release-dev/ocp-release@sha256:039a4ef7c128a049ccf916a1d68ce93e8f5494b44d5a75df60c85e9e7191dacc (force=true) on generation 2 in state Reconciling at attempt 0
...

So the 4.3.19 CVO loads the 4.4.3 manifests (in that case), begins updating to them, immediately hits a cancel/timeout [1], and then mistakenly decides that it successfully completed the update and starts Reconciling.  We're still not clear on exactly what the mistake is.

In the meantime, reconciliation's shuffled, flattened manifest graph can do bad things like updating the kubelets before updating the Kubernetes API server.  Raising to urgent while we work on bottoming this out.

[1]: https://github.com/openshift/cluster-version-operator/pull/372 is open to improve the logging here, but the 5m45s duration between 16:02:51 and 16:08:36 roughly matches 2 * 2m52s [2].
[2]: https://github.com/openshift/cluster-version-operator/blob/86b9bdba55a85e2e071603916db4c43b481e7588/pkg/cvo/sync_worker.go#L296
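
The linked PR (openshift/cluster-version-operator#382, in the Links section above) later pinned the mistake down: the sync worker summarized its worker errors and treated a result in which every error was a context cancellation or deadline expiry as a successful attempt. Below is a minimal, self-contained Go sketch of that pattern and of the fix; the summarize helper and its signature are illustrative stand-ins, not the actual CVO code.

// Minimal sketch (illustrative, not the actual CVO source): how an
// "all errors were context errors" summary can be mistaken for a
// successful sync attempt.
package main

import (
	"context"
	"errors"
	"fmt"
)

// summarize collapses worker errors; contextOnly reports whether every
// error was a context cancellation or deadline expiry.
func summarize(errs []error) (contextOnly bool, err error) {
	if len(errs) == 0 {
		return false, nil
	}
	contextOnly = true
	for _, e := range errs {
		if !errors.Is(e, context.Canceled) && !errors.Is(e, context.DeadlineExceeded) {
			contextOnly = false
		}
	}
	return contextOnly, fmt.Errorf("%d worker(s) failed", len(errs))
}

func main() {
	errs := []error{context.Canceled} // e.g. "update was cancelled at 0 of 573"

	contextOnly, err := summarize(errs)

	// Buggy pattern: dropping the error when it is "only" context errors
	// makes the caller see success and flip from Updating to Reconciling.
	if contextOnly {
		err = nil
	}
	fmt.Println("buggy success check:", err == nil) // true: premature Reconciling

	// Fixed pattern: a timed-out or cancelled update attempt stays a
	// failure, so the sync worker retries the update instead.
	_, err = summarize(errs)
	fmt.Println("fixed success check:", err == nil) // false: stay in Updating
}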

--- Additional comment from W. Trevor King on 2020-05-28 12:46:48 UTC ---

PR submitted.  We should backport through 4.2, when we started supporting restricted-network flows, because the timed-out signature retrievals plus the forced updates common there are what make tripping this race more likely.  Here's a full impact statement, now that we understand the issue:

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  All customers upgrading out of a CVO that does not contain the patch are potentially affected, but the chance of tripping the race is very small except for restricted-network users who are forcing updates.
  The impact when the race trips is also small for patch-level bumps, so the main concern is restricted-network users who are performing minor bumps like 4.2->4.3.
What is the impact?  Is it serious enough to warrant blocking edges?
  The CVO enters reconciliation mode on the target version and attempts to apply a flat, shuffled manifest graph (a sketch contrasting the two modes follows this comment).  All kinds of terrible things could happen, like the machine-config operator trying to roll out the newer machine-os-content and its 4.4 hyperkube binary before rolling out prerequisites like the 4.4 kube-apiserver operator.  That one will make manifest application sticky, but it would not surprise me if you could find some terrible ordering that might brick a cluster.
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  Admin must update to a CVO that is not vulnerable to the race.  Using an unforced update (e.g. by copying in a signature ConfigMap [1] for 4.3.12+ or 4.4.0+) would help reduce the likelihood of tripping the race.  Using a patch-level update would reduce the impact if the race trips anyway.

[1]: https://github.com/openshift/openshift-docs/pull/21993
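
To make the "flat, shuffled manifest graph" concern above concrete, here is a toy Go sketch contrasting the two modes. The manifest type, run levels, and ordering helpers are hypothetical stand-ins rather than the CVO's real task graph; the point is only that reconcile-mode application assumes the cluster already matches the target release and so gives up the ordering an update depends on.

// Toy sketch (hypothetical names, not the actual CVO task graph): why a
// flattened, shuffled reconcile pass is dangerous in the middle of an update.
package main

import (
	"fmt"
	"math/rand"
)

type manifest struct {
	runLevel int // ordering hint baked into the release payload
	name     string
}

// updateOrder keeps the release's intended ordering, e.g. the kube-apiserver
// operator before the machine-config/kubelet bump.
func updateOrder(ms []manifest) []manifest {
	return append([]manifest(nil), ms...) // already sorted by run level here
}

// reconcileOrder assumes the cluster is already at the target version, so it
// flattens and permutes the graph for parallel, order-independent application.
func reconcileOrder(ms []manifest) []manifest {
	out := append([]manifest(nil), ms...)
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

func main() {
	payload := []manifest{
		{10, "kube-apiserver operator"},
		{80, "machine-config operator (machine-os-content, 4.4 hyperkube)"},
	}
	fmt.Println("updating:   ", updateOrder(payload))
	fmt.Println("reconciling:", reconcileOrder(payload)) // may roll out kubelets first
}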

--- Additional comment from Brenton Leanhardt on 2020-05-28 12:51:30 UTC ---

Are clusters that hit this bug permanently wedged or is there a chance a subsequent attempt avoids the race?

--- Additional comment from W. Trevor King on 2020-05-28 13:11:06 UTC ---

> Are clusters that hit this bug permanently wedged or is there a chance a subsequent attempt avoids the race?

Once you trip the race and move from UpdatingPayload to ReconcilingPayload, that CVO will not go back to updating on its own.  You can re-target your update with 'oc adm upgrade ...' and that will get you back into UpdatingPayload mode.  But while the CVO was running between the race trip and your update, it could have been doing all sorts of things as it tried to push out the flattened, shuffled manifest graph.  Recovering a cluster that has hit this bug is going to be hard, and will probably involve a case-by-case review of its current state to try to determine a next-hop update target that is as close as possible to what the cluster is currently running, and that also accounts for which directions and orders operators can transition in.  Worst case short of a bricked cluster would be having to turn the CVO off entirely and push manifests one at a time on its behalf to slowly unwind any components that had become too tangled up.
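
A toy sketch of the state behavior described above, under the assumption that the sync worker only drops back to UpdatingPayload when the desired release image actually changes (which is what re-targeting with 'oc adm upgrade ...' amounts to). The worker type and setDesired helper are hypothetical, not the real sync-worker API.

// Toy sketch (hypothetical types): a reconciling worker stays in
// ReconcilingPayload until the desired release image changes.
package main

import "fmt"

type state string

const (
	updating    state = "UpdatingPayload"
	reconciling state = "ReconcilingPayload"
)

type worker struct {
	desiredImage string
	state        state
}

// setDesired resets the worker to updating only when the desired image
// changes; re-submitting the same target leaves a reconciling worker alone.
func (w *worker) setDesired(image string) {
	if image != w.desiredImage {
		w.desiredImage = image
		w.state = updating
	}
}

func main() {
	w := &worker{desiredImage: "quay.io/example/release@sha256:aaaa", state: reconciling}

	w.setDesired("quay.io/example/release@sha256:aaaa") // same target: no change
	fmt.Println(w.state)                                // ReconcilingPayload

	w.setDesired("quay.io/example/release@sha256:bbbb") // re-target: back to updating
	fmt.Println(w.state)                                // UpdatingPayload
}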

--- Additional comment from lmohanty on 2020-06-04 14:55:54 UTC ---

*** Bug 1843987 has been marked as a duplicate of this bug. ***

--- Additional comment from wking on 2020-06-04 16:41:01 UTC ---

*** Bug 1843987 has been marked as a duplicate of this bug. ***

Comment 3 Johnny Liu 2020-06-23 06:38:11 UTC
Verified this bug with 4.2.0-0.nightly-2020-06-21-204910, PASS.


Installed a disconnected cluster with 4.2.0-0.nightly-2020-06-21-204910, triggered an upgrade towards 4.3.26, and the upgrade completed successfully.

Comment 5 errata-xmlrpc 2020-07-01 16:08:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2589

