Bug 1880341

Summary: [release 4.5] cluster-network-operator: Fix bug in reflector not recovering from "Too large resource version"
Product: OpenShift Container Platform
Reporter: Lukasz Szaszkiewicz <lszaszki>
Component: Networking
Assignee: Jacob Tanenbaum <jtanenba>
Sub component: openshift-sdn
QA Contact: zhaozhanqi <zzhao>
Status: CLOSED ERRATA
Severity: high
Priority: high
CC: abraj, aconstan, akhaire, bbennett, hgomes, jnaess, jseunghw, jtanenba, palonsor, rkshirsa, sople
Version: 4.5
Keywords: Reopened
Target Milestone: ---
Target Release: 4.5.z
Hardware: Unspecified
OS: Unspecified
Clones: 1882071
Last Closed: 2020-12-01 10:48:48 UTC
Type: Bug
Bug Depends On: 1880369, 1882071    
Bug Blocks: 1879901    

Description Lukasz Szaszkiewicz 2020-09-18 10:19:18 UTC
A recent fix in the reflector/informer, https://github.com/kubernetes/kubernetes/pull/92688, prevents components/operators from entering a hot loop and getting stuck.

There are already reported cases of components that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.

The root cause of the issue is the fact that a watch cache is initialized from the global revision (etcd) and might stay on it for an undefined period if no changes (adds or modifications) are made.
That means the watch caches across server instances may be out of sync.
This can lead to a situation in which a client gets a resource version from a server that has observed a newer rv, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.

More details in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption


It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669, the issue was first introduced in that version by changes made to the reflector.
The fix is already present in 1.19.
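
For context, here is a minimal Go sketch of the recovery behavior that https://github.com/kubernetes/kubernetes/pull/92688 adds. This is paraphrased, not the verbatim client-go code; the helper names are illustrative, though the errors/metav1 calls are the real apimachinery API:

package reflector

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isTooLargeResourceVersionError reports whether the server rejected a
// list/watch because the requested resourceVersion is newer than what its
// watch cache has observed. The server marks such failures with the
// ResourceVersionTooLarge status cause.
func isTooLargeResourceVersionError(err error) bool {
	return apierrors.HasStatusCause(err, metav1.CauseTypeResourceVersionTooLarge)
}

// nextListResourceVersion picks the resourceVersion for the reflector's
// next LIST. Once a "Too large resource version" error has been seen,
// relisting with resourceVersion="" forces a consistent (quorum) read from
// etcd instead of a possibly stale watch cache, which is what lets the
// reflector recover instead of hot-looping.
func nextListResourceVersion(lastSyncRVUnavailable bool, lastSyncRV string) string {
	switch {
	case lastSyncRVUnavailable:
		return "" // consistent read from etcd
	case lastSyncRV == "":
		return "0" // first list; may be served from any watch cache
	default:
		return lastSyncRV
	}
}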


Please make sure that cluster-network-operator and its operands use a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688; if not, please use this BZ and file a PR.
If you are using a framework to build your operator, make sure it uses the right version of the client-go library.
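
For example, assuming the operator vendors client-go through Go modules (the standard module path is k8s.io/client-go), the pinned level can be checked from the repository root with:

  go list -m k8s.io/client-go

Any level that includes the PR above is sufficient: v0.18.6 or newer on the 0.18 stream (Kubernetes 1.18.6, per comment 4 below), or any 0.19 release.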

Comment 2 Ben Bennett 2020-09-21 13:36:19 UTC
This does not affect kube 1.19, so it is not a problem in 4.6.

Comment 3 Ben Bennett 2020-09-21 13:38:48 UTC
Reopening, since there is already a master version and Lukasz has already linked to it.

Comment 4 Lukasz Szaszkiewicz 2020-09-23 18:24:31 UTC
(In reply to Ben Bennett from comment #2)
> This does not affect kube 1.19 so is not a problem in 4.6.

OpenShift in 4.6 can send a message that the informers must understand and react to.
We need to make sure that operators in 4.6 are at least on 1.18.6 as well.

Comment 7 Jacob Tanenbaum 2020-11-03 14:25:53 UTC
*** Bug 1892596 has been marked as a duplicate of this bug. ***

Comment 10 zhaozhanqi 2020-11-24 11:44:17 UTC
Verified this bug on 4.5.0-0.nightly-2020-11-22-160319

$ oc logs -n openshift-sdn sdn-controller-ktpsc | grep -i "too large"
$

(no "Too large resource version" errors in the sdn-controller logs)

Comment 12 errata-xmlrpc 2020-12-01 10:48:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.5.21 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5194

Comment 13 Red Hat Bugzilla 2023-09-15 00:48:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.