Bug 1915667 - oauth-proxy container in the thanos-querier pod is flooded with the log message: Authorization header does not start with 'Basic', skipping basic authentication
Summary: Messages flooded in thanos-querier pod- oauth-proxy container: Authorization ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oauth-proxy
Version: 4.5
Hardware: x86_64
OS: Unspecified
Importance: high medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Standa Laznicka
QA Contact: scheng
URL:
Whiteboard:
Depends On: 1879878
Blocks:
 
Reported: 2021-01-13 08:07 UTC by OpenShift BugZilla Robot
Modified: 2021-04-27 10:58 UTC
CC List: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-22 13:54:32 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links
System ID | Status | Summary | Last Updated
Github openshift oauth-proxy pull 200 | closed | [release-4.6] Bug 1915667: remove logging when authz header is present but basic auth is not attempted | 2021-02-15 15:05:05 UTC
Red Hat Knowledge Base (Solution) 5525631 | None | None | 2021-03-21 06:41:55 UTC
Red Hat Product Errata RHBA-2021:0510 | None | None | 2021-02-22 13:54:47 UTC

Comment 3 Standa Laznicka 2021-02-05 10:55:41 UTC
The PR has been waiting for cherry-pick approval for 23 days. Bumping severity to high to avoid starvation in the cherry-pick approver queue.
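
For context, the linked PR (openshift oauth-proxy pull 200) removes the per-request log line that was emitted whenever a request carried a non-Basic Authorization header (for example a Bearer token) while basic authentication was not actually being attempted. Below is a minimal Go sketch of that idea only; it is not taken from the oauth-proxy source, and the getBasicAuth function and logNonBasic flag are illustrative names.

~~~
// Hypothetical sketch of the logging change; not the actual oauth-proxy code.
package main

import (
	"log"
	"net/http"
	"strings"
)

// getBasicAuth extracts Basic credentials from a request. Before the fix, a
// non-Basic Authorization header produced a log line on every request; after
// the fix the function simply returns false and lets other auth paths run.
func getBasicAuth(req *http.Request, logNonBasic bool) (user, pass string, ok bool) {
	auth := req.Header.Get("Authorization")
	if auth == "" {
		return "", "", false
	}
	if !strings.HasPrefix(auth, "Basic ") {
		if logNonBasic {
			// pre-fix behaviour: this line flooded the oauth-proxy logs
			log.Printf("Authorization header does not start with 'Basic', skipping basic authentication")
		}
		return "", "", false
	}
	// http.Request.BasicAuth handles the base64 decoding.
	return req.BasicAuth()
}

func main() {
	req, _ := http.NewRequest("GET", "http://example.local/metrics", nil)
	req.Header.Set("Authorization", "Bearer sha256~not-a-basic-credential")

	// post-fix behaviour: no log line, the request simply is not basic-authenticated
	if _, _, ok := getBasicAuth(req, false); !ok {
		log.Println("no basic credentials; continuing with other auth methods")
	}
}
~~~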

Comment 8 errata-xmlrpc 2021-02-22 13:54:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.18 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0510

Comment 9 Mani 2021-02-26 16:07:31 UTC
The customer updated the cluster to version 4.6.18. Now they see the messages below.


2021/02/25 15:43:47 provider.go:407: authorizer reason: 
2021/02/25 15:43:58 provider.go:407: authorizer reason: 
2021/02/25 15:44:03 provider.go:407: authorizer reason: 
2021/02/25 15:44:14 provider.go:407: authorizer reason: 
2021/02/25 15:44:19 provider.go:407: authorizer reason: 
2021/02/25 15:44:20 provider.go:407: authorizer reason: 
2021/02/25 15:44:30 provider.go:407: authorizer reason: 
2021/02/25 15:44:35 provider.go:407: authorizer reason: 
2021/02/25 15:44:35 provider.go:407: authorizer reason: 
2021/02/25 15:44:46 provider.go:407: authorizer reason: 
2021/02/25 15:44:50 provider.go:407: authorizer reason: 
2021/02/25 15:44:51 provider.go:407: authorizer reason: 
2021/02/25 15:45:02 provider.go:407: authorizer reason: 
2021/02/25 15:45:07 provider.go:407: authorizer reason: 
2021/02/25 15:45:18 provider.go:407: authorizer reason: 
2021/02/25 15:45:23 provider.go:407: authorizer reason: 
2021/02/25 15:45:34 provider.go:407: authorizer reason: 
2021/02/25 15:45:39 provider.go:407: authorizer reason: 
2021/02/25 15:45:50 provider.go:407: authorizer reason:

Comment 10 Standa Laznicka 2021-04-16 07:47:11 UTC
Sorry about the delay. It looks like someone without proper access was trying to log in through the proxy.
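
In other words, the proxy delegates the access decision to the cluster authorizer and logs whatever reason comes back; for a plain RBAC denial that reason is empty, which is why the flooded line ends right after the colon. A minimal, hypothetical Go sketch of that flow follows; none of the names in it are taken from the oauth-proxy source.

~~~
// Hypothetical sketch of how an empty "authorizer reason:" line can arise.
package main

import "log"

// Decision mirrors the allowed/denied outcome of a SubjectAccessReview-style check.
type Decision struct {
	Allowed bool
	Reason  string // often empty when access is simply not granted by any role
}

// authorize is a stand-in for a real SubjectAccessReview against the kube-apiserver.
func authorize(user, verb, resource, namespace string) Decision {
	return Decision{Allowed: false, Reason: ""}
}

func main() {
	d := authorize("some-console-user", "get", "pods", "openshift-monitoring")
	if !d.Allowed {
		// shape of the flooded line: the reason is logged even when it is empty
		log.Printf("authorizer reason: %s", d.Reason)
	}
}
~~~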

Comment 11 Victor Hernando 2021-04-21 09:26:38 UTC
(In reply to Standa Laznicka from comment #10)
> sorry about the delay. Looks like someone without proper access was trying
> to log in through the proxy.

Hi Standa

We have a customer experiencing the same issue reported in comment #9. They started facing it after upgrading to 4.6.18, so it looks like this could be related to this BZ; if that's not the case, let me know and I'll file a new BZ.
Those "authorizer reason:" messages are reported endlessly and are directly related to the openshift-console pods: when the openshift-console pods are scaled down, the messages are no longer reported, and as soon as the openshift-console operator is set back to Managed the log flooding starts again. One way to check the underlying RBAC denial is sketched after the logs below.
This is what we've seen so far in the different pods:

~~~
% tail -10 openshift-monitoring/pods/thanos-querier-74bb56c5b-d7ghv/oauth-proxy/oauth-proxy/logs/current.log
2021-04-20T11:10:34.762048785Z 2021/04/20 11:10:34 provider.go:407: authorizer reason: 
2021-04-20T11:10:37.293217146Z 2021/04/20 11:10:37 provider.go:407: authorizer reason: 
2021-04-20T11:10:38.730652636Z 2021/04/20 11:10:38 provider.go:407: authorizer reason: 
2021-04-20T11:10:39.332019672Z 2021/04/20 11:10:39 provider.go:407: authorizer reason: 
2021-04-20T11:10:39.481910711Z 2021/04/20 11:10:39 provider.go:407: authorizer reason: 
2021-04-20T11:10:40.272032967Z 2021/04/20 11:10:40 provider.go:407: authorizer reason: 
2021-04-20T11:10:40.873662747Z 2021/04/20 11:10:40 provider.go:407: authorizer reason: 
2021-04-20T11:10:41.641975445Z 2021/04/20 11:10:41 provider.go:407: authorizer reason: 
2021-04-20T11:10:41.705171971Z 2021/04/20 11:10:41 provider.go:407: authorizer reason: 
2021-04-20T11:10:42.399313711Z 2021/04/20 11:10:42 provider.go:407: authorizer reason: 

% tail -10 pods/alertmanager-main-0/alertmanager-proxy/alertmanager-proxy/logs/current.log
2021-04-20T11:09:50.837136785Z 2021/04/20 11:09:50 provider.go:407: authorizer reason: 
2021-04-20T11:09:50.871266184Z 2021/04/20 11:09:50 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.640532648Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.739566538Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.763426879Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:52.966189986Z 2021/04/20 11:09:52 provider.go:407: authorizer reason: 
2021-04-20T11:09:53.392129427Z 2021/04/20 11:09:53 provider.go:407: authorizer reason: 
2021-04-20T11:09:53.706211822Z 2021/04/20 11:09:53 provider.go:407: authorizer reason: 
2021-04-20T11:09:54.873927105Z 2021/04/20 11:09:54 provider.go:407: authorizer reason: 
2021-04-20T11:09:55.735697803Z 2021/04/20 11:09:55 provider.go:407: authorizer reason: 

% tail -10 pods/prometheus-k8s-0/prometheus/prometheus/logs/current.log
2021-04-20T11:08:19.640577479Z level=error ts=2021-04-20T11:08:19.640Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:37.319848959Z level=error ts=2021-04-20T11:08:37.319Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:53.448878330Z level=error ts=2021-04-20T11:08:53.448Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:08:58.131243051Z level=error ts=2021-04-20T11:08:58.131Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:25.208319773Z level=error ts=2021-04-20T11:09:25.208Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:27.281669576Z level=error ts=2021-04-20T11:09:27.281Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:44.840779771Z level=error ts=2021-04-20T11:09:44.840Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:09:59.866034134Z level=error ts=2021-04-20T11:09:59.865Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:427: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"services\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:10:18.092895390Z level=error ts=2021-04-20T11:10:18.092Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:428: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-operators\""
2021-04-20T11:10:21.095560237Z level=error ts=2021-04-20T11:10:21.095Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:426: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"openshift-operators\""

% tail -20 openshift-console/pods/console-85f5c9cc86-swncp/console/console/logs/current.log 
2021-04-20T11:03:52.423623023Z 2021-04-20T11:03:52Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:03:55.553149818Z 2021-04-20T11:03:55Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:16.703003590Z 2021-04-20T11:04:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:30.197936031Z 2021-04-20T11:04:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:46.460235087Z 2021-04-20T11:04:46Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:04:48.940441485Z 2021-04-20T11:04:48Z http: proxy error: context canceled
2021-04-20T11:04:58.050038234Z 2021-04-20T11:04:58Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:16.703325267Z 2021-04-20T11:05:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:27.731962054Z 2021-04-20T11:05:27Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:28.570577367Z 2021-04-20T11:05:28Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:30.206242655Z 2021-04-20T11:05:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:46.760356760Z 2021-04-20T11:05:46Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:05:52.428707459Z 2021-04-20T11:05:52Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:16.713684354Z 2021-04-20T11:06:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:28.582405339Z 2021-04-20T11:06:28Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:30.212481881Z 2021-04-20T11:06:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:47.022228828Z 2021-04-20T11:06:47Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:06:53.963877457Z 2021-04-20T11:06:53Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:07:16.732356144Z 2021-04-20T11:07:16Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
2021-04-20T11:07:30.265008003Z 2021-04-20T11:07:30Z Failed to dial backend: 'websocket: bad handshake' Status: '403 Forbidden' URL: 'https://kubernetes.default.svc/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion'
~~~
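
One way to confirm the RBAC denial that both the empty "authorizer reason:" lines and the Prometheus "forbidden" errors point at is to issue a SubjectAccessReview for the subject in question. The following is a hedged Go sketch using client-go; the kubeconfig path and the attributes checked (the prometheus-k8s service account listing pods in openshift-operators, taken from the logs above) are only an example, and the same check can be run for the console user hitting the proxy.

~~~
// Sketch: ask the kube-apiserver whether a given subject may perform an action.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:serviceaccount:openshift-monitoring:prometheus-k8s",
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "openshift-operators",
				Verb:      "list",
				Resource:  "pods",
			},
		},
	}
	res, err := client.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// For a simple RBAC denial this typically prints allowed=false with an
	// empty reason, matching the flooded "authorizer reason:" log line.
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}
~~~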

