Bug 1880322
Summary: | [release 4.5] cluster-kube-apiserver-operator: Fix bug in reflector not recovering from "Too large resource version" | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Lukasz Szaszkiewicz <lszaszki> |
Component: | kube-apiserver | Assignee: | Lukasz Szaszkiewicz <lszaszki> |
Status: | CLOSED ERRATA | QA Contact: | Ke Wang <kewang> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 4.5 | CC: | aos-bugs, mfojtik, xxia |
Target Milestone: | --- | ||
Target Release: | 4.5.z | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause: A watch cache (in the Kube API server) is initialized from the global revision (etcd) and might stay at that revision for an indefinite period if no changes (adds, modifications) are made.
Consequence: This can lead to a situation in which a client gets a resource version (RV) from a server that has observed a newer RV, disconnects from it (due to a network error), and reconnects to a server that is behind, resulting in "Too large resource version" errors.
Fix: The reflector was fixed so that it can recover from "Too large resource version" errors.
Result: Operators that use the client-go library for getting notifications from the server can recover and make progress upon receiving a "Too large resource version" error.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2020-10-12 15:47:56 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1880369 | ||
Bug Blocks: | 1879901 |
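
The recovery behavior described in the Doc Text can be illustrated with a minimal, stdlib-only Go sketch. This is not client-go's actual API; `listFrom` and `listWithFallback` are hypothetical names standing in for the reflector's list call and its fallback path. The idea: when the server rejects the client's cached resource version as too large, retry the list with an unset resource version so the request is served from the server's current state instead of failing repeatedly.

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel standing in for the "Too large resource version" API error.
var errResourceVersionTooLarge = errors.New("Timeout: Too large resource version")

// listFrom simulates a List call against an API server whose watch cache
// has only observed revisions up to serverRV. Requesting a newer RV than
// the server has seen yields the "Too large resource version" error.
func listFrom(requestedRV, serverRV int) ([]string, error) {
	if requestedRV > serverRV {
		return nil, errResourceVersionTooLarge
	}
	return []string{"pod-a", "pod-b"}, nil
}

// listWithFallback mirrors the fixed reflector behavior: when the server
// rejects the cached RV as too large, retry with RV unset (here: 0) so
// the server answers from whatever state it currently has.
func listWithFallback(cachedRV, serverRV int) ([]string, error) {
	items, err := listFrom(cachedRV, serverRV)
	if errors.Is(err, errResourceVersionTooLarge) {
		return listFrom(0, serverRV)
	}
	return items, err
}

func main() {
	// The client cached RV 200 from a fresher server, then reconnected
	// to a server that is behind at RV 100.
	items, err := listWithFallback(200, 100)
	fmt.Println(items, err) // the fallback recovers instead of erroring
}
```

Before the fix, the reflector had no such fallback, so a client stuck talking to a lagging server kept failing with the same error.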
Description
Lukasz Szaszkiewicz
2020-09-18 09:53:21 UTC
cluster-kube-apiserver-operator was checked and carries the expected version, so the bug is moved to VERIFIED.

```shell
$ git clone https://github.com/openshift/cluster-kube-apiserver-operator.git
$ cd cluster-kube-apiserver-operator
$ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.5.0-0.nightly-2020-09-26-194704 | grep cluster-kube-apiserver-operator
  cluster-kube-apiserver-operator  https://github.com/openshift/cluster-kube-apiserver-operator  be4604abdc0c47463ca39ba80316ce5219867109
$ git checkout -b 4.5.0-0.nightly-2020-09-26-194704 be4604a
Switched to a new branch '4.5.0-0.nightly-2020-09-26-194704'
$ grep -i 'k8s.io/client-go v' go.mod
	k8s.io/client-go v0.18.6
```

Following the upstream issue https://github.com/kubernetes/kubernetes/issues/91073, a node was disconnected from the network for a few minutes:

```shell
$ oc debug node/ip-10-0-144-242.us-east-2.compute.internal
sh-4.4# cat > test.sh <<EOF
ifconfig ens3 down
sleep 300
ifconfig ens3 up
EOF
sh-4.4# bash ./test.sh &
```

After the connection recovered, reconnect to the node and check whether the error messages from this bug can be found.

Kubelet logs:

```shell
$ oc debug node/ip-10-0-144-242.us-east-2.compute.internal
sh-4.4# journalctl -b -u kubelet | grep -i 'Too large resource version'
```

kube-apiserver logs:

```shell
$ oc get pods -n openshift-kube-apiserver
NAME                                                        READY   STATUS    RESTARTS   AGE
...
kube-apiserver-ip-10-0-144-242.us-east-2.compute.internal   4/4     Running   4          42m
kube-apiserver-ip-10-0-177-170.us-east-2.compute.internal   4/4     Running   0          44m
kube-apiserver-ip-10-0-223-13.us-east-2.compute.internal    4/4     Running   0          38m
...
$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-144-242.us-east-2.compute.internal | grep -i 'Too large resource version'
$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-177-170.us-east-2.compute.internal | grep -i 'Too large resource version'
$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-223-13.us-east-2.compute.internal | grep -i 'Too large resource version'
```

The error messages described in this bug no longer appear.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.5.14 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3843