Description of problem:

When calling for resources in the authorization.openshift.io/v1 group, we hit a failure in the API server:

0615 15:34:54.343538 1 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/empty/roles", Err:(*errors.errorString)(0xc002765b20)}: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/empty/roles": net/http: invalid header field name "Impersonate-Extra-authentication.kubernetes.io/pod-name"

It appears that in this code:
https://github.com/openshift/openshift-apiserver/blob/ce7d8f6d16985237b29f88f55f0ae37230889215/pkg/client/impersonatingclient/impersonate.go#L52-L54
we will need to re-encode these values. I believe https://github.com/kubernetes/kubernetes/pull/65799 is an example.

More info and context here: https://kubernetes.slack.com/archives/C0EN96KUY/p1623779163086500

Version-Release number of selected component (if applicable): 4.8

How reproducible: 100%

Steps to Reproduce:
1. From a pod, query for a resource in the authorization.openshift.io group that is proxied (roles/rolebindings)
2.
3.

Actual results:
The failure occurs in the API server.

Expected results:
We should be able to proxy to the kubernetes API server without error.

Additional info:
*** Bug 1970996 has been marked as a duplicate of this bug. ***
Need to implement something like this: https://github.com/kubernetes/client-go/blob/master/transport/round_trippers.go#L239
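The linked client-go code percent-encodes any byte of an Impersonate-Extra-* key that is not legal in an HTTP header field name. A rough sketch of that approach; the function names and the token-byte set here are written from memory of round_trippers.go, so treat the details as illustrative rather than the exact upstream implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// legalHeaderKeyByte reports whether b may appear in an HTTP header
// field name, i.e. whether it is an RFC 7230 "token" character.
func legalHeaderKeyByte(b byte) bool {
	switch {
	case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z', b >= '0' && b <= '9':
		return true
	}
	return strings.IndexByte("!#$%&'*+-.^_`|~", b) >= 0
}

// headerKeyEscape percent-encodes every byte that is not legal in a
// header field name, so extra keys like
// "authentication.kubernetes.io/pod-name" become safe header suffixes.
func headerKeyEscape(key string) string {
	var buf strings.Builder
	for i := 0; i < len(key); i++ {
		if b := key[i]; legalHeaderKeyByte(b) {
			buf.WriteByte(b)
		} else {
			fmt.Fprintf(&buf, "%%%02X", b)
		}
	}
	return buf.String()
}

func main() {
	fmt.Println("Impersonate-Extra-" + headerKeyEscape("authentication.kubernetes.io/pod-name"))
	// Impersonate-Extra-authentication.kubernetes.io%2Fpod-name
}
```

The server side has to decode the same way, which is why the fix needs matching encode/decode logic on both ends of the proxy.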
*** Bug 1971540 has been marked as a duplicate of this bug. ***
Tentatively setting blocker+ as the API seems broken since 4.8. We are investigating and will reset the blocker flag if feasible.
Looks like this breaks access to the prometheus UI, and possibly others, from the console:

4.8-fc9, build02

Tried to access the prometheus UI. Got a 500 error from the prometheus-k8s route after going through the login dance to auth; the pod had:

2021/06/16 20:03:10 provider.go:587: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2021/06/16 20:03:10 provider.go:627: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org",
  "authorization_endpoint": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2021/06/16 20:03:12 oauthproxy.go:656: error redeeming code (client:10.129.40.5:43122): got 400 from "https://oauth-openshift.apps.build02.gcp.ci.openshift.org/oauth/token" {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2021/06/16 20:03:12 oauthproxy.go:445: ErrorPage 500 Internal Error Internal Error

If we can't log in to oauth-proxied endpoints, that would be blocker+ for me generally.
I'm going to spawn this as a second bug in case it isn't related. https://bugzilla.redhat.com/show_bug.cgi?id=1972898
@clayton yes, this is marked as blocker+; the 4.8 cherry-pick just opened: https://github.com/openshift/openshift-apiserver/pull/219
Tested in cluster 4.9.0-0.nightly-2021-06-23-160041. Two ways to verify:

Method 1
1. Copy oc to ANY_POD:
$ oc cp /usr/bin/oc ANY_POD:/tmp/oc
2. Enter ANY_POD:
$ oc rsh ANY_POD
3. Get the resources in the authorization.openshift.io group via the oc CLI and check the result; no error, and the expected result is returned:
sh-4.4# /tmp/oc get rolebinding.v1.authorization.openshift.io
NAME             ROLE                                      USERS   GROUPS   SERVICE ACCOUNTS                      USERS
prometheus-k8s   openshift-authentication/prometheus-k8s           openshift-monitoring/prometheus-k8s
...
sh-4.4# /tmp/oc get role.v1.authorization.openshift.io
NAME
prometheus-k8s

Method 2
1. Enter a different pod rather than KAS:
$ oc get pods -n openshift-authentication
$ oc rsh -n openshift-authentication oauth-openshift-7dc8dbdd6b-mg6vk
2. Curl the endpoint of the kube-apiserver from inside the pod and check the result; no error, and the expected result is returned:
$ token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -k "https://${KUBERNETES_SERVICE_HOST}/apis/authorization.openshift.io/v1/clusterroles/view" -H "Authorization: Bearer ${token}"
{
  "kind": "ClusterRole",
  "apiVersion": "authorization.openshift.io/v1",
  "metadata": {
    "name": "view",
    ...
}
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759