Bug 1958158 - OAuth proxy containers for AlertManager and Thanos are flooding the logs
Summary: OAuth proxy containers for AlertManager and Thanos are flooding the logs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oauth-proxy
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.8.0
Assignee: Sergiusz Urbaniak
QA Contact:
URL:
Whiteboard:
Duplicates: 1953934
Depends On:
Blocks:
 
Reported: 2021-05-07 10:52 UTC by Sergiusz Urbaniak
Modified: 2021-07-27 23:07 UTC
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 23:07:23 UTC
Target Upstream Version:
Embargoed:




Links
- Github openshift/oauth-proxy pull 214 (open): Bug 1958158: providers/openshift: remove logging of authorizer decisions (last updated 2021-05-07 10:57:21 UTC)
- Red Hat Product Errata RHSA-2021:2438 (last updated 2021-07-27 23:07:35 UTC)

Description Sergiusz Urbaniak 2021-05-07 10:52:06 UTC
This bug was initially created as a copy of Bug #1953934

I am copying this bug because: 



Description of problem:

A similar or related issue was reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=1915667

After upgrading to 4.6.18, "authorizer reason:" messages are flooding the logs of many openshift-monitoring components.

~~~
% tail -10 openshift-monitoring/pods/thanos-querier-74bb56c5b-d7ghv/oauth-proxy/oauth-proxy/logs/current.log
2021-04-20T11:10:34.762048785Z 2021/04/20 11:10:34 provider.go:407: authorizer reason: 
2021-04-20T11:10:37.293217146Z 2021/04/20 11:10:37 provider.go:407: authorizer reason: 
2021-04-20T11:10:38.730652636Z 2021/04/20 11:10:38 provider.go:407: authorizer reason: 
2021-04-20T11:10:39.332019672Z 2021/04/20 11:10:39 provider.go:407: authorizer reason: 
2021-04-20T11:10:39.481910711Z 2021/04/20 11:10:39 provider.go:407: authorizer reason: 
2021-04-20T11:10:40.272032967Z 2021/04/20 11:10:40 provider.go:407: authorizer reason: 
2021-04-20T11:10:40.873662747Z 2021/04/20 11:10:40 provider.go:407: authorizer reason: 
2021-04-20T11:10:41.641975445Z 2021/04/20 11:10:41 provider.go:407: authorizer reason: 
2021-04-20T11:10:41.705171971Z 2021/04/20 11:10:41 provider.go:407: authorizer reason: 
2021-04-20T11:10:42.399313711Z 2021/04/20 11:10:42 provider.go:407: authorizer reason: 

% tail -10 pods/alertmanager-main-0/alertmanager-proxy/alertmanager-proxy/logs/current.log
2021-04-20T11:09:50.837136785Z 2021/04/20 11:09:50 provider.go:407: authorizer reason: 
2021-04-20T11:09:50.871266184Z 2021/04/20 11:09:50 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.640532648Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.739566538Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.763426879Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.966189986Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:53.392129427Z 2021/04/20 11:09:53 provider.go:407: authorizer reason: 
2021-04-20T11:09:53.706211822Z 2021/04/20 11:09:53 provider.go:407: authorizer reason: 
2021-04-20T11:09:54.873927105Z 2021/04/20 11:09:54 provider.go:407: authorizer reason: 
2021-04-20T11:09:55.735697803Z 2021/04/20 11:09:55 provider.go:407: authorizer reason: 

% tail -10 pods/prometheus-k8s-0/prometheus/prometheus/logs/current.log
2021-04-20T11:08:19.640577479Z level=error ts=2021-04-20T11:08:19.640Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:37.319848959Z level=error ts=2021-04-20T11:08:37.319Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:53.448878330Z level=error ts=2021-04-20T11:08:53.448Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:58.131243051Z level=error ts=2021-04-20T11:08:58.131Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:25.208319773Z level=error ts=2021-04-20T11:09:25.208Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:27.281669576Z level=error ts=2021-04-20T11:09:27.281Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:44.840779771Z level=error ts=2021-04-20T11:09:44.840Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:59.866034134Z level=error ts=2021-04-20T11:09:59.865Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:10:18.092895390Z level=error ts=2021-04-20T11:10:18.092Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:10:21.095560237Z level=error ts=2021-04-20T11:10:21.095Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""

% tail -20 openshift-console/pods/console-85f5c9cc86-swncp/console/console/logs/current.log 
2021-04-20T11:03:52.423623023Z 2021-04-20T11:03:52Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:03:55.553149818Z 2021-04-20T11:03:55Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:16.703003590Z 2021-04-20T11:04:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:30.197936031Z 2021-04-20T11:04:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:46.460235087Z 2021-04-20T11:04:46Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:48.940441485Z 2021-04-20T11:04:48Z http: proxy error: context canceled
2021-04-20T11:04:58.050038234Z 2021-04-20T11:04:58Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:16.703325267Z 2021-04-20T11:05:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:27.731962054Z 2021-04-20T11:05:27Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:28.570577367Z 2021-04-20T11:05:28Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:30.206242655Z 2021-04-20T11:05:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:46.760356760Z 2021-04-20T11:05:46Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:52.428707459Z 2021-04-20T11:05:52Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:16.713684354Z 2021-04-20T11:06:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:28.582405339Z 2021-04-20T11:06:28Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:30.212481881Z 2021-04-20T11:06:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:47.022228828Z 2021-04-20T11:06:47Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:53.963877457Z 2021-04-20T11:06:53Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:07:16.732356144Z 2021-04-20T11:07:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:07:30.265008003Z 2021-04-20T11:07:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
~~~

This issue is directly related to the openshift-console pods: after scaling down the openshift-console pods, those messages are no longer reported. As soon as the openshift-console operator is set back to Managed, the log flooding starts again.
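
For illustration only (the code below is not the oauth-proxy source, and its names are made up): the sketch mimics how an unconditional log of the authorizer decision on every proxied request turns routine console and dashboard traffic through the proxy into one "authorizer reason:" line per request, matching the flood shown above.

~~~
// Hypothetical sketch, not the oauth-proxy implementation.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
)

// authorize stands in for the SubjectAccessReview-style check the proxy
// performs for each request; the name and signature are hypothetical.
func authorize(r *http.Request) (allowed bool, reason string) {
	// Successful reviews typically carry an empty reason string.
	return true, ""
}

func main() {
	proxy := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		allowed, reason := authorize(r)
		// Pre-fix behaviour: the decision is logged unconditionally,
		// so every allowed request still emits a line like
		// "provider.go:407: authorizer reason: ".
		log.Printf("authorizer reason: %s", reason)
		if !allowed {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}
		fmt.Fprintln(w, "ok")
	})

	srv := httptest.NewServer(proxy)
	defer srv.Close()

	// A few requests, e.g. console dashboards polling through the proxy,
	// are enough to reproduce the kind of flood shown above.
	for i := 0; i < 5; i++ {
		if _, err := http.Get(srv.URL); err != nil {
			log.Fatal(err)
		}
	}
}
~~~

Because the console keeps polling these endpoints, the allow-path log line dominates the container logs even though it carries no information (the reason is empty).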

Version-Release number of selected component (if applicable):
4.6.18

How reproducible:
Upgrade a cluster from 4.6.z to 4.6.18

Steps to Reproduce:
1. Upgrade a cluster from 4.6.z to 4.6.18
2. Check the oauth-proxy container logs of the openshift-monitoring pods (for example thanos-querier and alertmanager-main).

Actual results:
Many useless "authorizer reason:" messages flood the logs of the openshift-monitoring pods.

Expected results:
The logs should not be flooded with these messages.

Additional info:

Comment 1 Sergiusz Urbaniak 2021-05-07 10:52:50 UTC
This is a lightweight copy of https://bugzilla.redhat.com/show_bug.cgi?id=1953934, concerned only with the log flooding in oauth-proxy.
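
The linked PR title ("providers/openshift: remove logging of authorizer decisions") indicates the fix is to stop logging these decisions on the allow path. Below is a minimal, hypothetical sketch of that direction; whether denials remain logged is determined by the actual change in the PR, not by this sketch.

~~~
// Hypothetical sketch of the post-fix behaviour, not the actual diff.
package main

import "log"

// decision is a made-up stand-in for the authorizer's result.
type decision struct {
	allowed bool
	reason  string
}

// report stays silent for allowed decisions, removing the per-request
// flood; in this sketch a denial, which is actionable, is still surfaced.
func report(d decision) {
	if d.allowed {
		return // no more "authorizer reason:" line on the hot path
	}
	log.Printf("authorization denied: %s", d.reason)
}

func main() {
	report(decision{allowed: true})                                           // no output
	report(decision{allowed: false, reason: "cannot get namespace metrics"})  // still surfaced
}
~~~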

Comment 4 Samuel Padgett 2021-05-24 13:53:33 UTC
*** Bug 1953934 has been marked as a duplicate of this bug. ***

Comment 8 errata-xmlrpc 2021-07-27 23:07:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

