Bug 1573460 - master-controllers log spammed with "The resourceVersion for the provided watch is too old." warning messages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.9.z
Assignee: Jack Ottofaro
QA Contact: zhou ying
URL:
Whiteboard:
Depends On: 1731187 1731188
Blocks:
 
Reported: 2018-05-01 10:39 UTC by Nicolas Nosenzo
Modified: 2023-09-07 19:08 UTC
CC: 23 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Warning message reported by etcd roughly every 7 seconds.
Consequence: Excessive logging.
Fix: Changed log level to info.
Clone Of:
Clones: 1731187 1731188
Environment:
Last Closed: 2019-08-26 16:27:38 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Knowledge Base (Solution) 3828751 (last updated 2019-01-24 14:48:34 UTC)
Red Hat Product Errata RHBA-2019:2550 (last updated 2019-08-26 16:27:42 UTC)

Description Nicolas Nosenzo 2018-05-01 10:39:18 UTC
Description of problem:

The atomic-openshift-master-controllers journal is flooded with the following warning every few seconds:

~~~
Apr 30 20:08:53 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:08:53.225792    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Job ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:15 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:15.828429    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.LimitRange ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:27 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:27.558681    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Namespace ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:31 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:31.469026    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.PodDisruptionBudget ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:32 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:32.872733    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.NetworkPolicy ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:40 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:40.689733    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.HorizontalPodAutoscaler ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:47 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:47.607628    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.PersistentVolumeClaim ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:51 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:51.714889    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Role ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:54 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:54.530157    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.ControllerRevision ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:56 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:56.432845    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.ResourceQuota ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:09 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:09.058252    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.StatefulSet ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:10 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:10.282031    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.ReplicaSet ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:14 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:14.288656    1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Job ended with: The resourceVersion for the provided watch is too old.
~~~

# journalctl -u atomic-openshift-master-controllers | grep -i "The resourceVersion for the provided watch is too old" | wc -l
5686

Version-Release number of selected component (if applicable):

# oc version
oc v3.9.14
kubernetes v1.9.1+a0ce1bc657



How reproducible:

100%

Actual results:

The controller logs are filled with this noisy warning message.

Expected results:

The watch-restart message should not flood the controller logs at the default log level.

Additional info:

Comment 15 Michal Fojtik 2018-12-17 09:16:34 UTC
Again, this is not a bug. The warning is issued when a watch is restarted, which happens when etcd drops the connection and we relist the resources. The warning is not fatal; it informs the user that the watchers are operating properly. If a customer did not see this warning in the logs, that would be somewhat concerning and might suggest that their watches/cache are not healthy.

If this explanation is satisfactory for customers, I would close this bug as NOTBUG?

Comment 16 Anshul Verma 2018-12-18 12:46:30 UTC
Hello,

These messages create noise in the logs. Would it be possible to shift them to a higher log level, perhaps 3 or 4, so that the default log level of 2 does not show them?
Please share your views on this.

Thanks

Regards,
Anshul

Comment 18 Steven Walter 2019-01-24 22:24:31 UTC
I think we could resolve this by changing the current log level. If it is truly just info/debug-level information, can we mark it as such?

Comment 32 zhou ying 2019-08-20 03:15:19 UTC
Confirmed with the latest version, openshift v3.9.99: the warning has been changed to an informational message:
Aug 19 22:50:37 ip-172-18-5-0.ec2.internal atomic-openshift-master-controllers[9169]: I0819 22:50:37.577469    9169 reflector.go:343] github.com/openshift/origin/pkg/quota/generated/informers/internalversion/factory.go:57: watch of *quota.ClusterResourceQuota ended with: The resourceVersion for the provided watch is too old.
Aug 19 22:50:40 ip-172-18-5-0.ec2.internal atomic-openshift-master-controllers[9169]: I0819 22:50:40.182531    9169 reflector.go:343] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.ValidatingWebhookConfiguration ended with: The resourceVersion for the provided watch is too old.

Comment 34 errata-xmlrpc 2019-08-26 16:27:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2550

