Bug 1972383 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings
Summary: Using bound SA tokens causes failures to /apis/authorization.openshift...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: apiserver-auth
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: Standa Laznicka
QA Contact: liyao
URL:
Whiteboard:
Duplicates: 1970996 1971540
Depends On:
Blocks: 1972687
 
Reported: 2021-06-15 18:57 UTC by Shawn Hurley
Modified: 2021-10-18 17:34 UTC (History)
CC List: 13 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1972687
Environment:
Last Closed: 2021-10-18 17:34:18 UTC
Target Upstream Version:
Embargoed:




Links
System | ID | Private | Priority | Status | Summary | Last Updated
Github | openshift openshift-apiserver pull 217 | 0 | None | open | Bug 1972383: openshift authorization proxy: escape header key values | 2021-06-16 09:59:27 UTC
Red Hat Product Errata | RHSA-2021:3759 | 0 | None | None | None | 2021-10-18 17:34:45 UTC

Description Shawn Hurley 2021-06-15 18:57:50 UTC
Description of problem:
When calling for resources in the authorization.openshift.io/v1 group, a failure is hit in the API server:

0615 15:34:54.343538       1 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/empty/roles", Err:(*errors.errorString)(0xc002765b20)}: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/empty/roles": net/http: invalid header field name "Impersonate-Extra-authentication.kubernetes.io/pod-name"
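For illustration, here is a minimal Go sketch (standard library only; the test server and the "my-pod" value are made up for the example) that reproduces the same failure. The '/' in the extra key "authentication.kubernetes.io/pod-name" is not a legal RFC 7230 token character, so net/http refuses to send a header with that name:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Any reachable server works; the request never gets that far,
	// because the transport rejects the header name before sending.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
	defer srv.Close()

	req, _ := http.NewRequest("GET", srv.URL, nil)
	// Unescaped extra key copied verbatim into the header name, which is
	// roughly what the proxy did before the fix.
	req.Header["Impersonate-Extra-authentication.kubernetes.io/pod-name"] = []string{"my-pod"}

	_, err := srv.Client().Do(req)
	fmt.Println(err)
	// Get "http://127.0.0.1:...": net/http: invalid header field name
	// "Impersonate-Extra-authentication.kubernetes.io/pod-name"
}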


It appears that the problem is in this code:

https://github.com/openshift/openshift-apiserver/blob/ce7d8f6d16985237b29f88f55f0ae37230889215/pkg/client/impersonatingclient/impersonate.go#L52-L54

We will need to re-encode these values; https://github.com/kubernetes/kubernetes/pull/65799 is, I believe, an example of how to do that.

More info and context here:
https://kubernetes.slack.com/archives/C0EN96KUY/p1623779163086500

Version-Release number of selected component (if applicable):
4.8

How reproducible:
100%

Steps to Reproduce:
1. Use a pod to query for a resource in the authorization.openshift.io group that is proxied (rolebindings/roles).

Actual results:
The failure occurs in the API server.

Expected results:
We should be able to proxy to the Kubernetes API server without error.

Additional info:

Comment 1 Shawn Hurley 2021-06-15 19:00:51 UTC
*** Bug 1970996 has been marked as a duplicate of this bug. ***

Comment 2 Osher De Paz 2021-06-15 19:57:15 UTC
Need to implement something like this:
https://github.com/kubernetes/client-go/blob/master/transport/round_trippers.go#L239
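For context, the idea in that client-go round tripper is to percent-encode any byte of the extra key (and value) that is not legal in a header field name before building the Impersonate-Extra-* header. A rough sketch of that idea (the function names below are illustrative, not the actual client-go identifiers):

package main

import (
	"fmt"
	"strings"
)

// legalHeaderKeyByte reports whether b may appear unescaped in an HTTP
// header field name. '%' is deliberately excluded so the escaping stays
// reversible.
func legalHeaderKeyByte(b byte) bool {
	return ('a' <= b && b <= 'z') ||
		('A' <= b && b <= 'Z') ||
		('0' <= b && b <= '9') ||
		strings.IndexByte("!#$&'*+-.^_`|~", b) >= 0
}

// escapeHeaderKey percent-encodes every byte that is not legal in a
// header field name, so an arbitrary extra key can be carried in an
// Impersonate-Extra-<key> header.
func escapeHeaderKey(key string) string {
	var sb strings.Builder
	for i := 0; i < len(key); i++ {
		if legalHeaderKeyByte(key[i]) {
			sb.WriteByte(key[i])
		} else {
			fmt.Fprintf(&sb, "%%%02X", key[i])
		}
	}
	return sb.String()
}

func main() {
	extraKey := "authentication.kubernetes.io/pod-name"
	fmt.Println("Impersonate-Extra-" + escapeHeaderKey(extraKey))
	// Impersonate-Extra-authentication.kubernetes.io%2Fpod-name
}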

Comment 3 Osher De Paz 2021-06-15 20:02:48 UTC
*** Bug 1971540 has been marked as a duplicate of this bug. ***

Comment 4 Sergiusz Urbaniak 2021-06-16 08:10:25 UTC
Tentatively setting blocker+ as the API seems broken since 4.8. We are investigating and will reset the blocker flag if feasible.

Comment 6 Clayton Coleman 2021-06-16 20:13:13 UTC
Looks like this breaks access to the Prometheus UI, and possibly others, from the console:

4.8-fc9, build02

Tried to access the Prometheus UI.

Got a 500 error from the prometheus-k8s route after going through the login dance to auth; the pod had:

2021/06/16 20:03:10 provider.go:587: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2021/06/16 20:03:10 provider.go:627: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org",
  "authorization_endpoint": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2021/06/16 20:03:12 oauthproxy.go:656: error redeeming code (client:10.129.40.5:43122): got 400 from "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/token" {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2021/06/16 20:03:12 oauthproxy.go:445: ErrorPage 500 Internal Error Internal Error

If we can't log in to oauth-proxied endpoints, that would be blocker+ for me generally.

Comment 7 Clayton Coleman 2021-06-16 20:22:18 UTC
I'm going to spawn this off as a second bug in case it isn't related: https://bugzilla.redhat.com/show_bug.cgi?id=1972898

Comment 9 Sergiusz Urbaniak 2021-06-17 07:22:10 UTC
@clayton yes, this is marked as blocker+; the 4.8 cherry-pick was just opened: https://github.com/openshift/openshift-apiserver/pull/219

Comment 10 liyao 2021-06-24 08:29:09 UTC
Tested in cluster 4.9.0-0.nightly-2021-06-23-160041

Two ways to verify:

Method 1
1. Copy oc to ANY_POD:
$ oc cp /usr/bin/oc ANY_POD:/tmp/oc
2. Enter ANY_POD:
$ oc rsh ANY_POD
3. Get the resources in the authorization.openshift.io group via the oc CLI and check the result; no error occurs and the expected result is returned:
sh-4.4# /tmp/oc get rolebinding.v1.authorization.openshift.io
NAME                    ROLE                                      USERS   GROUPS                                            SERVICE ACCOUNTS                      USERS
prometheus-k8s          openshift-authentication/prometheus-k8s                                                             openshift-monitoring/prometheus-k8s   
...                           
sh-4.4# /tmp/oc get role.v1.authorization.openshift.io
NAME
prometheus-k8s


Method 2
1) Enter a pod other than the KAS pod:
$ oc get pods -n openshift-authentication
$ oc rsh -n openshift-authentication oauth-openshift-7dc8dbdd6b-mg6vk
2) Curl the kube-apiserver endpoint from inside the pod and check the result; no error occurs and the expected result is returned:
$ token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -k "https://${KUBERNETES_SERVICE_HOST}/apis/authorization.openshift.io/v1/clusterroles/view" -H "Authorization: Bearer ${token}"

{
  "kind": "ClusterRole",
  "apiVersion": "authorization.openshift.io/v1",
  "metadata": {
    "name": "view",
    ...
}

Comment 13 errata-xmlrpc 2021-10-18 17:34:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

