Description of problem:
~~~
Apr 30 20:08:53 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:08:53.225792 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Job ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:15 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:15.828429 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.LimitRange ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:27 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:27.558681 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Namespace ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:31 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:31.469026 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.PodDisruptionBudget ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:32 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:32.872733 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.NetworkPolicy ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:40 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:40.689733 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.HorizontalPodAutoscaler ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:47 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:47.607628 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.PersistentVolumeClaim ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:51 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:51.714889 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Role ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:54 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:54.530157 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.ControllerRevision ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:09:56 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:09:56.432845 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.ResourceQuota ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:09 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:09.058252 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.StatefulSet ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:10 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:10.282031 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.ReplicaSet ended with: The resourceVersion for the provided watch is too old.
Apr 30 20:10:14 master-0.redhat.com atomic-openshift-master-controllers[1871]: W0430 20:10:14.288656 1871 reflector.go:341] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1.Job ended with: The resourceVersion for the provided watch is too old.
~~~

~~~
# journalctl -u atomic-openshift-master-controllers | grep -i "The resourceVersion for the provided watch is too old" | wc -l
5686
~~~

Version-Release number of selected component (if applicable):
~~~
# oc version
oc v3.9.14
kubernetes v1.9.1+a0ce1bc657
~~~

How reproducible:
100%

Actual results:
Controller logs are filled with this noisy warning message.

Expected results:

Additional info:
Again, this is not a bug. The warning is issued when a watch is restarted, which happens when etcd drops the connection and the resources are relisted. The warning is not fatal; it indicates that the watchers are operating properly. If a customer did not see this warning in the logs at all, that would be somewhat concerning and might suggest that their watches/cache are not healthy. If this explanation is satisfactory for customers, I would close this bug as NOTABUG.
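For illustration, here is a minimal, stdlib-only sketch of the relist-and-rewatch pattern described above. It is not client-go's actual reflector code; the `list`, `watch`, and `watchLoop` functions and the version numbers are hypothetical stand-ins that only model the behavior: when a watch ends because the requested resourceVersion has been compacted away, the reflector logs the non-fatal message, performs a fresh LIST to obtain a current resourceVersion, and watches again.

```go
package main

import (
	"errors"
	"fmt"
)

// errTooOld stands in for the server ending a watch because the requested
// resourceVersion has already been compacted in etcd.
var errTooOld = errors.New("the resourceVersion for the provided watch is too old")

// list simulates a fresh LIST against the API server; it returns the current
// resourceVersion from which a new watch can safely be started.
func list() (resourceVersion int) {
	return 100 // hypothetical latest version
}

// watch simulates a WATCH from the given resourceVersion. If that version is
// older than the oldest one the server still retains, the watch ends with
// errTooOld.
func watch(fromVersion, oldestRetained int) error {
	if fromVersion < oldestRetained {
		return errTooOld
	}
	return nil // watch proceeds normally
}

// watchLoop sketches the reflector behavior: on "too old", record the
// (non-fatal) message, relist to get a fresh resourceVersion, and retry.
func watchLoop(startVersion, oldestRetained int) []string {
	var events []string
	rv := startVersion
	for attempt := 0; attempt < 2; attempt++ {
		if err := watch(rv, oldestRetained); err != nil {
			// This is the message seen in the controller logs; it only
			// signals that the cache is being resynced, not a failure.
			events = append(events, fmt.Sprintf("watch ended with: %v; relisting", err))
			rv = list()
			continue
		}
		events = append(events, fmt.Sprintf("watching from resourceVersion %d", rv))
		break
	}
	return events
}

func main() {
	// Start from a resourceVersion (5) older than the oldest retained (50),
	// forcing one relist before the watch succeeds.
	for _, e := range watchLoop(5, 50) {
		fmt.Println(e)
	}
}
```

The key point the sketch makes concrete is that the warning marks the start of a successful recovery path, not an error state.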
Hello,

These messages create noise in the logs. Would it be possible to move them to a higher log level, perhaps 3 or 4, so that the default log level of 2 does not show them? Please share your views on this.

Thanks and regards,
Anshul
I think this is something we could resolve by changing the current log level. If it's truly just an info/debug-level message, can we mark it as such?
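The proposed fix amounts to gating the message behind a higher verbosity level. As a rough, stdlib-only sketch (OpenShift itself uses glog-style `V(level)` logging; `vInfof` and the hardcoded default of 2 here are assumptions for illustration, not the project's API), a message gated at level 3 simply never fires when the configured verbosity is the default 2:

```go
package main

import (
	"fmt"
)

// verbosity mimics the master's configured log level; OpenShift masters
// default to 2 (hardcoded here purely for the sketch).
var verbosity = 2

// vInfof is a stand-in for glog-style V(level).Infof: the message is emitted
// only when the configured verbosity is at least `level`.
func vInfof(level int, format string, args ...interface{}) (emitted bool) {
	if verbosity < level {
		return false
	}
	fmt.Printf("I "+format+"\n", args...)
	return true
}

func main() {
	msg := "watch of *v1.Job ended with: The resourceVersion for the provided watch is too old"
	vInfof(0, "%s", msg) // always emitted: what floods the journal today
	vInfof(3, "%s", msg) // suppressed at the default verbosity of 2
}
```

With the message at level 3 or 4, administrators who need it for debugging can still raise the log level, while the default configuration stays quiet.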
Confirmed with the latest version: openshift v3.9.99. The warning has changed to an informational message:
~~~
Aug 19 22:50:37 ip-172-18-5-0.ec2.internal atomic-openshift-master-controllers[9169]: I0819 22:50:37.577469 9169 reflector.go:343] github.com/openshift/origin/pkg/quota/generated/informers/internalversion/factory.go:57: watch of *quota.ClusterResourceQuota ended with: The resourceVersion for the provided watch is too old.
Aug 19 22:50:40 ip-172-18-5-0.ec2.internal atomic-openshift-master-controllers[9169]: I0819 22:50:40.182531 9169 reflector.go:343] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: watch of *v1beta1.ValidatingWebhookConfiguration ended with: The resourceVersion for the provided watch is too old.
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2550