A recent fix in the reflector/informer (https://github.com/kubernetes/kubernetes/pull/92688) prevents components/operators from entering a hot loop and getting stuck.
There are already reported cases that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.
The root cause is that a watch cache is initialized from the global (etcd) revision and might stay at it for an undefined period if no changes (adds, modifies) are made.
That means that the watch cache across server instances may be out of sync.
That can lead to a situation in which a client gets a resource version from a server that has observed a newer RV, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.
More details are in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption
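The failure mode above can be sketched with a toy model. This is not the real apiserver code; the `watchCache` type and `resumeWatch` function are illustrative names for the check that rejects a resume attempt whose resource version is newer than the cache has observed:

```go
// Hypothetical sketch of the out-of-sync watch cache scenario.
package main

import "fmt"

// watchCache stands in for one apiserver instance's watch cache.
type watchCache struct{ revision uint64 }

// resumeWatch mimics the check that yields a "Too large resource version"
// error when a client resumes with an RV the cache has not yet observed.
func (c watchCache) resumeWatch(clientRV uint64) error {
	if clientRV > c.revision {
		return fmt.Errorf("Too large resource version: %d, current: %d", clientRV, c.revision)
	}
	return nil
}

func main() {
	ahead := watchCache{revision: 100} // server the client first talked to
	behind := watchCache{revision: 90} // server it hits after reconnecting

	clientRV := ahead.revision // client remembers the newest RV it saw

	// Resuming against the same (or a caught-up) server succeeds.
	fmt.Println(ahead.resumeWatch(clientRV))
	// Resuming against a lagging server fails; before the fix, a reflector
	// could retry this in a hot loop indefinitely.
	fmt.Println(behind.resumeWatch(clientRV))
}
```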
It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669, the issue was first introduced in that version by changes to the reflector.
The fix is already present in 1.19.
Please make sure that console-operator is using a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688. If not, please use this BZ and file a PR.
If you are using a framework to build your operator, make sure it uses the right version of the client-go library.
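If the operator consumes client-go through Go modules, one quick way to verify is to check the pinned version in go.mod. The snippet below is illustrative only (it writes a sample go.mod to /tmp); in a real operator repo you would grep the repo's own go.mod. The fix landed in Kubernetes 1.19, which corresponds to client-go v0.19.x:

```shell
# Illustrative: create a sample go.mod, then check the pinned client-go version.
cat > /tmp/go.mod <<'EOF'
module example.com/my-operator

require k8s.io/client-go v0.19.0
EOF

# In a real repo: grep 'k8s.io/client-go' go.mod
grep 'k8s.io/client-go' /tmp/go.mod
```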
1. Disconnected a node from the network for a few minutes:
# oc debug node/qe-yapei45-xxxx-master-0.c.openshift-qe.internal
Starting pod/qe-yapei45-xxxx-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.0.4
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# cd /root
sh-4.4# bash test.sh &
2. After the connection recovered, reconnected to the node and checked whether the error messages from the bug could still be found:
$ oc debug node/ip-xx-0-xxx-242.us-east-2.compute.internal
sh-4.4# journalctl -b -u kubelet | grep -i 'Too large resource version'
The error messages no longer appear in the kubelet logs.
3. For each component, we should also check the component operator logs. I checked the console-operator logs and likewise didn't see 'Too large resource version' errors:
# oc logs console-operator-5ffd57b9b-8t294 -n openshift-console-operator | grep -i 'too large'   # nothing returned
Moving to VERIFIED on 4.5.0-0.nightly-2020-10-10-013307
Do let me know if my steps are wrong. Thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.15 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.