Bug 1881068 - [release 4.5] cluster-samples-operator: Fix bug in reflector not recovering from "Too large resource version"
Summary: [release 4.5] cluster-samples-operator: Fix bug in reflector not recovering f...
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Samples
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
: 4.5.z
Assignee: Gabe Montero
QA Contact: XiuJuan Wang
Depends On: 1880368
Reported: 2020-09-21 13:43 UTC by Gabe Montero
Modified: 2020-10-12 15:48 UTC (History)
0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: An upstream Kubernetes bug prevented the API client from recovering in a reasonable time after a TCP reset. Because controllers and operators inherently maintain client connections to the API server, they could be affected. Consequence: When connectivity was lost and then regained, client logs could be flooded with "Timeout: Too large resource version" errors. Fix: The upstream Kubernetes 1.18 fix was pulled into the samples operator for 4.5.z. Result: The samples operator is no longer susceptible to this hot loop of error messages.
Clone Of:
Last Closed: 2020-10-12 15:48:00 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Github openshift cluster-samples-operator pull 325 0 None closed Bug 1881068: bump(k8s): Fix bug in reflector not recovering from 'Too large resource version' 2020-11-04 19:11:03 UTC
Red Hat Product Errata RHBA-2020:3843 0 None None None 2020-10-12 15:48:19 UTC

Description Gabe Montero 2020-09-21 13:43:01 UTC
This bug was initially created as a copy of Bug #1880368

I am copying this bug because: 

A recent fix in the reflector/informer, https://github.com/kubernetes/kubernetes/pull/92688, prevents components/operators from entering a hot loop and getting stuck.

There are already reported cases of components that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.

The root cause is that a watch cache is initialized from the global (etcd) revision and might stay at it for an undefined period if no changes (add, modify) are made.
That means the watch caches across server instances may be out of sync.
This can lead to a situation in which a client gets a resource version from a server that has observed a newer RV, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.

More details in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption

It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669 the issue was first introduced in that version by changes done to the reflector. 
The fix is already present in 1.19.

Please make sure that cluster-samples-operator and its operands are using a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688; if not, please use this BZ and file a PR.
If you are using a framework to build your operator, make sure it uses the right version of the client-go library.
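One way to confirm which client-go an operator pins is to inspect its go.mod. The sketch below is illustrative: the module path and version are written to a temp file for demonstration and are not the actual cluster-samples-operator pin.

```shell
# Illustrative go.mod; replace with the operator's real checkout.
cat > /tmp/go.mod <<'EOF'
module github.com/openshift/cluster-samples-operator
require k8s.io/client-go v0.18.9
EOF
# The fix from kubernetes/kubernetes#92688 was backported to the 1.18
# branch, so the pinned v0.18.x must be recent enough to include it.
grep 'k8s.io/client-go' /tmp/go.mod
```

For vendored repositories, the same check can be made directly against `vendor/k8s.io/client-go/tools/cache/reflector.go`.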

Comment 3 XiuJuan Wang 2020-09-29 10:48:55 UTC
Ran a regression test for the samples operator with 4.5.0-0.nightly-2020-09-28-124031; no issues found.

Comment 5 errata-xmlrpc 2020-10-12 15:48:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.14 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
