Bug 1881077 - [release 4.5] prometheus-operator: Fix bug in reflector not recovering from "Too large resource version"
Summary: [release 4.5] prometheus-operator: Fix bug in reflector not recovering from "Too large resource version"
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.z
Assignee: Lili Cosic
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 1892590
Depends On: 1881072
Blocks: 1879901
 
Reported: 2020-09-21 13:55 UTC by Simon Pasquier
Modified: 2020-12-01 14:34 UTC (History)
12 users (show)

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1881072
Environment:
Last Closed: 2020-10-26 15:11:50 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift prometheus-operator pull 95 0 None closed Bug 1881077:go.sum,go.mod: Bump client-go & co. to fix known bug 2021-02-07 23:23:31 UTC
Red Hat Bugzilla 1891815 0 high CLOSED invalid syntax error to list PrometheusRule/ServiceMonitor 2023-12-15 19:55:30 UTC
Red Hat Product Errata RHBA-2020:4268 0 None None None 2020-10-26 15:12:17 UTC

Description Simon Pasquier 2020-09-21 13:55:17 UTC
+++ This bug was initially created as a clone of Bug #1881072 +++

+++ This bug was initially created as a clone of Bug #1880337 +++

A recent fix in the reflector/informer (https://github.com/kubernetes/kubernetes/pull/92688) prevents components/operators from entering a hot loop and getting stuck.

There are already reported cases that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.

The root cause of the issue is that a watch cache is initialized from the global (etcd) revision and might stay on it for an undefined period if no changes (add, modify) were made.
That means that the watch caches across server instances may be out of sync.
That can lead to a situation in which a client gets a resource version from a server that has observed a newer rv, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.

More details in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption


It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669, the issue was first introduced in that version by changes made to the reflector.
The fix is already present in 1.19.
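The recovery that the upstream fix adds can be sketched roughly as follows. This is an illustrative, self-contained stand-in, not the real client-go code: the function and error names here are simplified, and the real reflector uses typed API errors rather than string matching. The core idea matches the fix, though: when a list at the last-synced resource version fails with "Too large resource version", fall back to listing with resourceVersion="" (a consistent read) instead of retrying the stale rv in a hot loop.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// errTooLargeRV mimics the error an apiserver whose watch cache lags
// behind etcd can return (the wording matches the real error message;
// the variable itself is illustrative).
var errTooLargeRV = errors.New("Timeout: Too large resource version: 500, current: 400")

// isTooLargeResourceVersionError is a simplified stand-in for the check
// the fixed reflector performs before deciding to relist.
func isTooLargeResourceVersionError(err error) bool {
	return err != nil && strings.Contains(err.Error(), "Too large resource version")
}

// listFromServer simulates an apiserver that is behind: listing at a
// newer resourceVersion fails, while rv="" (consistent read) succeeds.
func listFromServer(rv string) error {
	if rv != "" {
		return errTooLargeRV
	}
	return nil
}

// relistWithFallback shows the recovery the fix introduces: on
// "Too large resource version", forget the last-synced rv and relist
// with resourceVersion="" instead of hot-looping on the stale value.
func relistWithFallback(rv string) (string, error) {
	err := listFromServer(rv)
	if isTooLargeResourceVersionError(err) {
		rv = "" // fall back to a consistent list
		err = listFromServer(rv)
	}
	return rv, err
}

func main() {
	rv, err := relistWithFallback("500")
	fmt.Printf("recovered=%v finalRV=%q\n", err == nil, rv)
}
```

Without this fallback, a reflector that reconnects to a lagging apiserver keeps presenting the too-new rv and never makes progress, which is exactly the stuck-for-hours symptom described above.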


Please make sure that cluster-monitoring-operator uses a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688; if not, please use this BZ and file a PR.
If you are using a framework to build your operator, make sure it uses the right version of the client-go library.

--- Additional comment from Simon Pasquier on 2020-09-21 10:46:22 UTC ---

cluster-monitoring-operator 4.5 depends on k8s.io/client-go v0.17.1 [1] so it isn't affected by this issue. The same goes for prometheus-operator [2].
That being said, the 4.6 branches use k8s.io/client-go v0.18.3 and v0.18.2 [3][4] and they probably need to be fixed. 

@Lukasz Should we open another BZ?

[1] https://github.com/openshift/cluster-monitoring-operator/blob/0c110b7edadad09182983e48013125a07284116d/go.mod#L37
[2] https://github.com/openshift/prometheus-operator/blob/99b893905d26d85d50d1178be195388e5c000322/go.mod#L42
[3] https://github.com/openshift/cluster-monitoring-operator/blob/922578d7d8a33f39b43b577e74c469b4374e90bd/go.mod#L31
[4] https://github.com/openshift/prometheus-operator/blob/52492b3b48ed1e4f851a78a51817e92404cf2767/go.mod#L36

--- Additional comment from Lukasz Szaszkiewicz on 2020-09-21 12:20:10 UTC ---

(In reply to Simon Pasquier from comment #1)
> cluster-monitoring-operator 4.5 depends on k8s.io/client-go v0.17.1 [1] so
> it isn't affected by this issue. The same goes for prometheus-operator [2].
> That being said, the 4.6 branches use k8s.io/client-go v0.18.3 and v0.18.2
> [3][4] and they probably need to be fixed. 
> 
> @Lukasz Should we open another BZ?
> 
> [1]
> https://github.com/openshift/cluster-monitoring-operator/blob/
> 0c110b7edadad09182983e48013125a07284116d/go.mod#L37
> [2]
> https://github.com/openshift/prometheus-operator/blob/
> 99b893905d26d85d50d1178be195388e5c000322/go.mod#L42
> [3]
> https://github.com/openshift/cluster-monitoring-operator/blob/
> 922578d7d8a33f39b43b577e74c469b4374e90bd/go.mod#L31
> [4]
> https://github.com/openshift/prometheus-operator/blob/
> 52492b3b48ed1e4f851a78a51817e92404cf2767/go.mod#L36

The Kube API in 4.5 is affected. It can return an error that the operators must understand and recover from. Basically anything that uses an informer.
For 4.5/4.6 you should bump client-go to at least 1.18.6 (which has the fix).
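The bump Lukasz describes would amount to a go.mod change along these lines (a sketch only; v0.18.6 corresponds to Kubernetes 1.18.6 per the comment above, and the exact set of k8s.io modules to move in lockstep depends on each repository's go.mod):

```
require (
	k8s.io/api          v0.18.6
	k8s.io/apimachinery v0.18.6
	k8s.io/client-go    v0.18.6 // includes kubernetes/kubernetes#92688
)
```

The actual fix landed via https://github.com/openshift/prometheus-operator/pull/95 ("go.sum,go.mod: Bump client-go & co. to fix known bug"), listed in the Links table above.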

--- Additional comment from Simon Pasquier on 2020-09-21 12:30:42 UTC ---

Targeting this bug against 4.6.0. I'll create a clone for 4.5.z.

Comment 2 Junqi Zhao 2020-10-15 06:43:14 UTC
Tested with 4.5.0-0.nightly-2020-10-15-005105: disconnected the node where the prometheus-operator pod is deployed from the network for a few minutes; after reconnection, there was no "Too large resource version" error in the prometheus-operator container.
steps:
1. ssh to the node where prometheus-operator is deployed and execute the script in the background; it disconnects the node from the network for 5 minutes and reconnects it afterwards
$ ./test.sh &

$ cat test.sh
sudo ifconfig ens3 down
sleep 300
sudo ifconfig ens3 up

2. check the prometheus-operator logs after the node is reconnected; there should be no "Too large resource version" error
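Step 2 can be done with a grep over the operator logs. The namespace and workload names below are the usual ones for cluster monitoring but are assumptions here; adjust to the cluster. The snippet demonstrates the grep against sample log lines so it is self-contained:

```shell
# On a live cluster the input would come from:
#   oc -n openshift-monitoring logs deploy/prometheus-operator -c prometheus-operator
# An empty grep result (exit code 1) means the operator recovered cleanly.
printf 'msg="sync prometheus"\nmsg="sync alertmanager"\n' \
  | grep -c "Too large resource version" \
  || echo "no 'Too large resource version' errors"
```

`grep -c` prints the match count (0 here) and exits non-zero when nothing matches, so the trailing message only appears when the logs are clean.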

Comment 5 errata-xmlrpc 2020-10-26 15:11:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.16 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4268

Comment 7 Sergiusz Urbaniak 2020-12-01 14:31:08 UTC
*** Bug 1892590 has been marked as a duplicate of this bug. ***

Comment 8 Sergiusz Urbaniak 2020-12-01 14:34:07 UTC
Yes, we had to revert this fix in https://github.com/openshift/prometheus-operator/pull/99 specifically for prometheus-operator, as it was causing other issues.

Hence marking this bug as CLOSED -> WONTFIX.

