Bug 1880322 - [release 4.5] cluster-kube-apiserver-operator: Fix bug in reflector not recovering from "Too large resource version"
Summary: [release 4.5] cluster-kube-apiserver-operator: Fix bug in reflector not recov...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.z
Assignee: Lukasz Szaszkiewicz
QA Contact: Ke Wang
URL:
Whiteboard:
Depends On: 1880369
Blocks: 1879901
Reported: 2020-09-18 09:53 UTC by Lukasz Szaszkiewicz
Modified: 2020-10-12 15:48 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: A watch cache (in the Kube API server) is initialized from the global revision (etcd) and might stay on it for an undefined period if no changes (add, modify) were made.
Consequence: This might lead to a situation in which a client gets a resource version (RV) from a server that has observed a newer RV, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.
Fix: Fix the reflector so that it can recover from "Too large resource version" errors.
Result: Operators that use the client-go library for getting notifications from the server can recover and make progress upon receiving a "Too large resource version" error.
Clone Of:
Environment:
Last Closed: 2020-10-12 15:47:56 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub: openshift/cluster-kube-apiserver-operator pull 954 (closed) - Bug 1880322: fix bug in reflector not recovering from "Too large resource version" (last updated 2021-01-15 10:14:53 UTC)
- Red Hat Product Errata: RHBA-2020:3843 (last updated 2020-10-12 15:48:24 UTC)

Description Lukasz Szaszkiewicz 2020-09-18 09:53:21 UTC
A recent fix in the reflector/informer, https://github.com/kubernetes/kubernetes/pull/92688, prevents components/operators from entering a hotloop and getting stuck.

There are already reported cases of components that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.

The root cause of the issue is that a watch cache is initialized from the global revision (etcd) and might stay on it for an undefined period if no changes (add, modify) were made.
That means that the watch cache across server instances may be out of sync.
That might lead to a situation in which a client gets a resource version from a server that has observed a newer RV, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.

More details are in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption.


It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669, the issue was first introduced in that version by changes made to the reflector.
The fix is already present in 1.19.
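
For illustration, here is a minimal Go sketch of the recovery strategy the fix applies; this is not the actual client-go reflector code, and the helper name and the string-based error check are assumptions made for the example. The idea is: when a list at the last-seen resource version fails with "Too large resource version", fall back to a list with an empty resource version, which is served by a quorum read from etcd.

package recovery

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPodsWithRecovery lists pods starting from lastRV. If the API server it
// reaches has a watch cache that is behind lastRV, the request fails with
// "Too large resource version"; in that case it retries with an empty
// resource version, which forces a quorum read from etcd.
func listPodsWithRecovery(ctx context.Context, client kubernetes.Interface, ns, lastRV string) (*corev1.PodList, error) {
	pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{ResourceVersion: lastRV})
	if err == nil {
		return pods, nil
	}
	// Simplified detection by error message for this sketch; the fixed
	// reflector inspects the status cause instead of matching strings.
	if strings.Contains(err.Error(), "Too large resource version") {
		return client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{ResourceVersion: ""})
	}
	return nil, err
}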


Please make sure that cluster-kube-apiserver-operator is using a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688; if not, please use this BZ and file a PR.
In case you are using a framework to build your operator, make sure it uses the right version of the client-go library.
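
For operators built on a framework, the relevant piece is usually a shared informer: the recovery happens inside the reflector it starts internally, so vendoring a client-go that carries the fix is sufficient. A minimal, illustrative sketch follows; the pod informer choice and event handlers are assumptions for the example, not code from cluster-kube-apiserver-operator.

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config, as a typical operator would use.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The shared informer starts a reflector internally; that reflector is
	// where the "Too large resource version" recovery from PR 92688 lives.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* handle add */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* handle update */ },
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	<-stopCh
}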

Comment 3 Ke Wang 2020-09-27 05:41:48 UTC
- cluster-kube-apiserver-operator check: the client-go version is as expected, so moving the bug to VERIFIED.

$ git clone https://github.com/openshift/cluster-kube-apiserver-operator.git

$ cd cluster-kube-apiserver-operator 

$ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.5.0-0.nightly-2020-09-26-194704 | grep cluster-kube-apiserver-operator 
  cluster-kube-apiserver-operator                https://github.com/openshift/cluster-kube-apiserver-operator                be4604abdc0c47463ca39ba80316ce5219867109

$ git checkout -b 4.5.0-0.nightly-2020-09-26-194704 be4604a
Switched to a new branch '4.5.0-0.nightly-2020-09-26-194704'

$ grep -i 'k8s.io/client-go v' go.mod 
	k8s.io/client-go v0.18.6

According to https://github.com/kubernetes/kubernetes/issues/91073, disconnected a node from the network for a few minutes.

$ oc debug node/ip-10-0-144-242.us-east-2.compute.internal
sh-4.4# cat > test.sh <<EOF
# take the node's primary interface down for 5 minutes, then bring it back up
ifconfig ens3 down
sleep 300
ifconfig ens3 up
EOF
sh-4.4# bash ./test.sh &

After the connection recovered, reconnected to the node and checked whether the error messages from the bug can be found.

kubelet logs:
$ oc debug node/ip-10-0-144-242.us-east-2.compute.internal
sh-4.4# journalctl -b -u kubelet |  grep -i 'Too large resource version'

kube-apiserver logs:
$ oc get pods -n openshift-kube-apiserver
NAME                                                           READY   STATUS      RESTARTS   AGE
...
kube-apiserver-ip-10-0-144-242.us-east-2.compute.internal      4/4     Running     4          42m
kube-apiserver-ip-10-0-177-170.us-east-2.compute.internal      4/4     Running     0          44m
kube-apiserver-ip-10-0-223-13.us-east-2.compute.internal       4/4     Running     0          38m
..

$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-144-242.us-east-2.compute.internal | grep -i 'Too large resource version'
$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-177-170.us-east-2.compute.internal | grep -i 'Too large resource version'
$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-223-13.us-east-2.compute.internal | grep -i 'Too large resource version'

The error messages from the bug are no longer seen.

Comment 6 errata-xmlrpc 2020-10-12 15:47:56 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.14 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3843

