Bug 1782847 - openshift-sdn logging is way too verbose
Summary: openshift-sdn logging is way too verbose
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.4.0
Assignee: Casey Callendrello
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1782860
 
Reported: 2019-12-12 13:51 UTC by Casey Callendrello
Modified: 2020-05-04 11:20 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 11:19:53 UTC
Target Upstream Version:
Embargoed:




Links
System                  ID                      Last Updated
Github                  openshift/sdn pull 84   2019-12-12 13:59:50 UTC
Red Hat Product Errata  RHBA-2020:0581          2020-05-04 11:20:27 UTC

Description Casey Callendrello 2019-12-12 13:51:34 UTC
We added way too much logging to try and pin down the informer issue, and not all of it got rolled back.

We need to roll it back before we ship.
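For illustration, a minimal sketch (hypothetical, not the actual openshift-sdn code, and using a fake clientset so it runs standalone) of the pattern being rolled back: an informer event handler that dumps whole objects at default verbosity on every event, versus key-only logging gated behind a high klog verbosity level.

// Hypothetical sketch, not the actual openshift-sdn code: shows verbose
// per-event object dumps (the kind of debug logging this bug is about)
// versus the quieter, verbosity-gated form.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
	"k8s.io/klog/v2"
)

func main() {
	// Fake clientset so the sketch runs without a cluster.
	client := fake.NewSimpleClientset()
	factory := informers.NewSharedInformerFactory(client, 0)
	svcInformer := factory.Core().V1().Services().Informer()

	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			// Too verbose: dumping the whole object at default verbosity
			// on every event is what fills the logs.
			//   klog.Infof("service added: %#v", svc)
			// Rolled back: key only, and only when running with -v=5 or higher.
			klog.V(5).Infof("service added: %s/%s", svc.Namespace, svc.Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			svc := newObj.(*corev1.Service)
			klog.V(5).Infof("service updated: %s/%s", svc.Namespace, svc.Name)
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, svcInformer.HasSynced)

	// Create one Service through the fake client so the Add handler fires.
	_, _ = client.CoreV1().Services("default").Create(context.TODO(),
		&corev1.Service{ObjectMeta: metav1.ObjectMeta{Name: "example"}},
		metav1.CreateOptions{})

	time.Sleep(100 * time.Millisecond) // give the informer time to deliver the event
	klog.Flush()
}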

Comment 2 Anurag saxena 2019-12-16 19:37:52 UTC
This looks good on 4.4.0-0.nightly-2019-12-16-025547.
We are now getting far fewer log lines than in non-fix builds. The only logging I see now comes from expected sources such as k8s.io/client-go, as seen in the PR, as opposed to the thousands of log lines in older builds.

$ oc logs sdn-n97bf | grep -i informer
I1216 14:45:22.932283    3762 shared_informer.go:197] Waiting for caches to sync for service config
I1216 14:45:22.932291    3762 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1216 14:45:23.032399    3762 shared_informer.go:204] Caches are synced for service config 
I1216 14:45:23.032400    3762 shared_informer.go:204] Caches are synced for endpoints config 
W1216 14:46:43.515906    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 9000 (10705)
W1216 14:46:43.516352    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.EgressNetworkPolicy ended with: too old resource version: 7387 (12199)
W1216 14:46:43.516465    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 11343 (12172)
W1216 14:46:43.691569    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 8998 (12195)
W1216 14:46:43.692410    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: too old resource version: 5058 (10716)
W1216 14:52:19.428152    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10705 (15232)
W1216 14:52:19.518779    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: too old resource version: 10716 (15241)
W1216 14:52:19.518802    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 14907 (15236)
W1216 14:52:19.518912    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.EgressNetworkPolicy ended with: too old resource version: 12199 (15988)
W1216 14:52:19.637645    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 12172 (17271)
W1216 14:52:19.641076    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 12195 (17274)
W1216 14:54:25.672572    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 15232 (17296)
W1216 14:54:25.672836    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 15236 (17297)
W1216 14:54:25.698715    3762 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: too old resource version: 15241 (17300)
W1216 14:54:26.790479    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.EgressNetworkPolicy ended with: too old resource version: 15988 (18026)
W1216 14:54:26.790668    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 17271 (18033)
W1216 14:54:26.790926    3762 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 17274 (18034)

Comment 3 Casey Callendrello 2019-12-17 13:05:46 UTC
Yup, that looks about right. We were previously dumping every Service and Endpoint on every change. Logs after a 2-hour CI run were over 100 MB!

Comment 5 errata-xmlrpc 2020-05-04 11:19:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

