Summary:        [release 4.5] cluster-monitoring-operator: Fix bug in reflector not recovering from "Too large resource version"
Product:        OpenShift Container Platform
Component:      Monitoring
Version:        4.5
Status:         CLOSED ERRATA
Reporter:       Simon Pasquier <spasquie>
Assignee:       Pawel Krupa <pkrupa>
QA Contact:     Junqi Zhao <juzhao>
CC:             alegrand, anpicker, erooth, juzhao, kakkoyun, lcosic, lszaszki, mloibl, palonsor, pkrupa, spasquie, ssadhale, surbania
Last Closed:    2020-11-24 12:42:10 UTC
Bug Depends On: 1880337
Description Simon Pasquier 2020-10-29 09:12:22 UTC
+++ This bug was initially created as a clone of Bug #1880337 +++

A recent fix in the reflector/informer (https://github.com/kubernetes/kubernetes/pull/92688) prevents components/operators from entering a hot loop and getting stuck. There are already reported cases that ran into this issue and were stuck for hours or even days, for example https://bugzilla.redhat.com/show_bug.cgi?id=1877346.

The root cause of the issue is the fact that a watch cache is initialized from the global revision (etcd) and might stay on it for an undefined period (if no changes (add, modify) were made). That means that the watch caches across server instances may be out of sync. That might lead to a situation in which a client gets a resource version from a server that has observed a newer rv, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors. More details in https://github.com/kubernetes/kubernetes/issues/91073 and https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1904-efficient-watch-resumption

It looks like the issue only affects 1.18. According to https://github.com/kubernetes/kubernetes/issues/91073#issuecomment-652251669, the issue was first introduced in that version by changes done to the reflector. The fix is already present in 1.19.

Please make sure that cluster-monitoring-operator is using a client-go that includes https://github.com/kubernetes/kubernetes/pull/92688; if not, please use this BZ and file a PR. In case you are using a framework to build your operator, make sure it uses the right version of the client-go library.

--- Additional comment from Simon Pasquier on 2020-09-21 10:46:22 UTC ---

cluster-monitoring-operator 4.5 depends on k8s.io/client-go v0.17.1 so it isn't affected by this issue. The same goes for prometheus-operator. That being said, the 4.6 branches use k8s.io/client-go v0.18.3 and v0.18.2 and they probably need to be fixed.
@Lukasz Should we open another BZ?

https://github.com/openshift/cluster-monitoring-operator/blob/0c110b7edadad09182983e48013125a07284116d/go.mod#L37
https://github.com/openshift/prometheus-operator/blob/99b893905d26d85d50d1178be195388e5c000322/go.mod#L42
https://github.com/openshift/cluster-monitoring-operator/blob/922578d7d8a33f39b43b577e74c469b4374e90bd/go.mod#L31
https://github.com/openshift/prometheus-operator/blob/52492b3b48ed1e4f851a78a51817e92404cf2767/go.mod#L36

--- Additional comment from Lukasz Szaszkiewicz on 2020-09-21 12:20:10 UTC ---

(In reply to Simon Pasquier from comment #1)
> cluster-monitoring-operator 4.5 depends on k8s.io/client-go v0.17.1 so
> it isn't affected by this issue. The same goes for prometheus-operator.
> That being said, the 4.6 branches use k8s.io/client-go v0.18.3 and v0.18.2
> and they probably need to be fixed.
>
> @Lukasz Should we open another BZ?
>
> https://github.com/openshift/cluster-monitoring-operator/blob/0c110b7edadad09182983e48013125a07284116d/go.mod#L37
> https://github.com/openshift/prometheus-operator/blob/99b893905d26d85d50d1178be195388e5c000322/go.mod#L42
> https://github.com/openshift/cluster-monitoring-operator/blob/922578d7d8a33f39b43b577e74c469b4374e90bd/go.mod#L31
> https://github.com/openshift/prometheus-operator/blob/52492b3b48ed1e4f851a78a51817e92404cf2767/go.mod#L36

The Kube API in 4.5 is affected. It can return an error that the operators must understand and recover from. Basically anything that uses an informer. For 4.5/4.6 you should bump at least to 1.18.6 (which has the fix).

--- Additional comment from Simon Pasquier on 2020-09-21 12:30:42 UTC ---

Targeting this bug against 4.6.0. I'll create a clone for 4.5.z.

--- Additional comment from OpenShift Automated Release Tooling on 2020-09-23 21:22:54 UTC ---

Elliott changed bug status from MODIFIED to ON_QA.
--- Additional comment from Junqi Zhao on 2020-09-24 03:52:46 UTC ---

The fix is in 4.6.0-0.nightly-2020-09-24-015627 and later builds.

--- Additional comment from Junqi Zhao on 2020-09-25 08:34:52 UTC ---

Tested with 4.6.0-0.nightly-2020-09-24-184015; the fix is in the payload and we did not see the "Too large resource version" error.

--- Additional comment from errata-xmlrpc on 2020-10-06 18:23:59 UTC ---

This bug has been added to advisory RHBA-2020:54579 by OpenShift Release Team Bot (ocp-build/buildvm.openshift.eng.bos.redhat.com).

--- Additional comment from errata-xmlrpc on 2020-10-26 00:42:27 UTC ---

Bug report changed to RELEASE_PENDING status by Errata System. Advisory RHBA-2020:4196-05 has been changed to PUSH_READY status. https://errata.devel.redhat.com/advisory/54579

--- Additional comment from errata-xmlrpc on 2020-10-27 16:42:20 UTC ---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196
Comment 1 Simon Pasquier 2020-10-29 09:13:19 UTC
The fix for https://bugzilla.redhat.com/show_bug.cgi?id=1881043 was incomplete: we need at least k8s.io/client-go v0.18.9 while we're only at v0.18.6.
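In go.mod terms, the bump discussed here amounts to something like the following excerpt. This is illustrative only; the exact versions (and any replace directives) in the openshift forks may differ:

```
// go.mod (excerpt, illustrative)
require (
	// v0.18.9 carries the backported reflector fix from
	// https://github.com/kubernetes/kubernetes/pull/92688
	k8s.io/client-go v0.18.9
)
```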
Comment 2 Lukasz Szaszkiewicz 2020-11-04 14:17:49 UTC
any update? what's the current status?
Comment 3 Simon Pasquier 2020-11-05 13:49:21 UTC
This bug will be planned in one of our future sprints.
Comment 7 Junqi Zhao 2020-11-16 01:33:03 UTC
Tested with 4.5.0-0.nightly-2020-11-15-110315; the client-go version is 0.18.9 and we did not see the "Too large resource version" error.
Comment 9 errata-xmlrpc 2020-11-24 12:42:10 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.5.20 bug fix and golang security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5118