Bug 1955435 - "requestURI":"/apis/user.openshift.io/v1/users/kube:admin" from system:apiserver got code 422
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oauth-apiserver
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Target Milestone: ---
Target Release: 4.9.0
Assignee: Sebastian Łaskawiec
QA Contact: liyao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-30 06:39 UTC by Xingxing Xia
Modified: 2021-10-18 17:30 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-18 17:30:31 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift oauth-apiserver pull 54 0 None open Bug 1955435: Do not validate kube:admin user 2021-06-09 09:26:06 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:30:32 UTC

Description Xingxing Xia 2021-04-30 06:39:15 UTC
Description of problem:
The oauth-apiserver audit.log contains entries like:
... "requestURI":"/apis/user.openshift.io/v1/users/kube:admin",..."user":{"username":"system:apiserver",...status":"Failure","reason":"Invalid","code":422.

This should not happen. See Expected results & Additional info below.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-29-222100

How reproducible:
Always

Steps to Reproduce:
1. oc login -u kubeadmin -p <password>
2. Run oc commands as kubeadmin

Actual results:
2. Commands succeeded, but oauth-apiserver audit.log has:
[root@xxia0430-48-vn8gc-master-1 ~]# grep '"requestURI":"/apis/user.openshift.io/v1/users/kube:admin".*"status":"Failure","reason":"Invalid","code":422' /var/log/*-apiserver/audit*.log
/var/log/oauth-apiserver/audit.log:{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"6aa65f19-517b-4206-8175-c23c3f059ca6","stage":"ResponseComplete","requestURI":"/apis/user.openshift.io/v1/users/kube:admin","verb":"get","user":{"username":"system:apiserver","uid":"a14ad305-b925-4fe7-a1b3-8a6b11e8a23d","groups":["system:masters"]},"sourceIPs":["::1"],"userAgent":"oauth-apiserver/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"users","name":"kube:admin","apiGroup":"user.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"status":"Failure","reason":"Invalid","code":422},"requestReceivedTimestamp":"2021-04-30T02:48:56.692725Z","stageTimestamp":"2021-04-30T02:48:56.693057Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
...<snipped>...

Expected results:
2. If system:apiserver in the audit.log is allowed to query "requestURI":"/apis/user.openshift.io/v1/users/kube:admin", then we should fix it such that 200 instead of 422 is returned.
Otherwise, if system:apiserver is not allowed to query "requestURI":"/apis/user.openshift.io/v1/users/kube:admin", then we should fix it such that we won't see system:apiserver send such requests.

Additional info:
Checked older versions like 4.6: code 422 for "requestURI":"/apis/user.openshift.io/v1/users/kube:admin" is also reproduced in the oauth-apiserver audit.log, but the reasons differ slightly. We can use oc get --raw to simulate the system:apiserver queries above.
In 4.8, the ":" in the username is handled per https://github.com/openshift/enhancements/blob/master/enhancements/authentication/allowing-uri-scheme-in-oidc-sub-claims.md :
$ oc get --raw /apis/user.openshift.io/v1/users/kube:admin
The User "kube:admin" is invalid: metadata.name: Invalid value: "kube:admin": usernames that contain ":" must begin with "b64:"
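The 4.8 rule above can be sketched as a tiny standalone check. This is a simplification for illustration only, not the actual apivalidation code; the function name and error wording are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// validateUserName mimics the 4.8-era rule described above: names may
// contain ":" only when they begin with the "b64:" prefix. Simplified
// sketch; the real code lives in apiserver-library-go's apivalidation.
func validateUserName(name string) error {
	if strings.Contains(name, ":") && !strings.HasPrefix(name, "b64:") {
		return fmt.Errorf(`usernames that contain ":" must begin with "b64:"`)
	}
	return nil
}

func main() {
	fmt.Println(validateUserName("kube:admin"))         // rejected under the 4.8 rule
	fmt.Println(validateUserName("b64:a3ViZTphZG1pbg")) // allowed: has the b64: prefix
	fmt.Println(validateUserName("alice"))              // allowed: no ":" at all
}
```

Since "kube:admin" contains ":" but lacks the "b64:" prefix, it trips this check, which is exactly the 422 seen in the audit log.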

In older versions, like 4.6, ":" was not handled at all, as the message below shows:
$ oc get --raw /apis/user.openshift.io/v1/users/kube:admin
The User "kube:admin" is invalid: metadata.name: Invalid value: "kube:admin": may not contain ":"

Comment 1 Standa Laznicka 2021-05-04 12:26:41 UTC
Thanks for the BZ, I was not aware of this error in our audit logs.

The message itself is harmless, and its change is simply the outcome of the 4.8 username validation change from the enhancement mentioned above. I don't think such a message should appear in the audit.log, since the logged request is unnecessary, but it might be a result of some generic logic handling users.

I propose keeping this BZ solely for the purpose of tracking down which call is causing this issue and investigating whether we can get rid of this API call.

Comment 2 Michal Fojtik 2021-06-03 12:29:11 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.

Comment 3 Sebastian Łaskawiec 2021-06-07 06:46:54 UTC
When a kubeadmin user logs in, the `bootstrapPassword#AuthenticatePassword` [1] returns an Authenticator Response object with the name set to `kube:admin` [2]. 

Later on, we perform a number of username validations that eventually call `ValidateUserName` [3]. Some of those callers contain checks against the bootstrap user (similar to this one [4]), but some do not.

Probably the best option for the fix is to pull the guarding if statement (the one that checks whether this is the bootstrap user) into `ValidateUserName`. This way we'd make sure we allow the bootstrap user everywhere in the code.

[1] https://github.com/openshift/oauth-apiserver/blob/a44314ecea033192f8b4d796f99ae3dc972865d1/vendor/github.com/openshift/library-go/pkg/authentication/bootstrapauthenticator/bootstrap.go#L93
[2] https://github.com/openshift/oauth-apiserver/blob/a44314ecea033192f8b4d796f99ae3dc972865d1/vendor/github.com/openshift/library-go/pkg/authentication/bootstrapauthenticator/bootstrap.go#L24
[3] https://github.com/openshift/oauth-apiserver/blob/52e82bfca9914dd24b12dbc0d713e4cef40ff731/vendor/github.com/openshift/apiserver-library-go/pkg/apivalidation/uservalidation.go#L9
[4] https://github.com/openshift/oauth-apiserver/blob/79d57fb74fff0ed25b84c8edd18d12d4fc784845/pkg/oauth/apis/oauth/validation/validation.go#L347-L349
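The proposed fix can be sketched as follows. This is hypothetical, simplified code: the real `ValidateUserName` in apiserver-library-go returns a `field.ErrorList`, and the constant name here is an assumption; only the shape of the moved guard is shown:

```go
package main

import (
	"fmt"
	"strings"
)

// BootstrapUser is the username the bootstrap authenticator returns for
// the kubeadmin login (see [2] above).
const BootstrapUser = "kube:admin"

// validateUserName sketches the proposal: the bootstrap-user guard,
// currently duplicated at some call sites (like [4]), moves into the
// shared validation function itself, so every caller allows kube:admin.
func validateUserName(name string) []string {
	if name == BootstrapUser {
		return nil // bootstrap user is always allowed, regardless of the ":" rule
	}
	if strings.Contains(name, ":") && !strings.HasPrefix(name, "b64:") {
		return []string{`usernames that contain ":" must begin with "b64:"`}
	}
	return nil
}

func main() {
	fmt.Println(validateUserName("kube:admin")) // no errors with the guard in place
	fmt.Println(validateUserName("other:user")) // other ":" names are still rejected
}
```

With the guard centralized, call sites that forgot the bootstrap check (the source of the 422s) no longer need to remember it.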

Comment 4 Sebastian Łaskawiec 2021-06-08 11:12:20 UTC
After investigating this a bit more, it turned out that both `oc login -u kubeadmin -p [...]` and `oc get --raw /apis/user.openshift.io/v1/users/kube:admin` generate exactly the same error with exactly the same stacktrace.

The root problem is that `REST#Get` validates the username without checking whether it's really the bootstrap user, see [1].

One possible fix would be to call the `ValidateUserNameField` function [2], which is commonly used across the token validation code.
Another idea would be to manually check the error against the bootstrap user.

As a side note, it's worth mentioning that apart from `system:admin`, there's another problematic user found in my logs: `system:serviceaccount:openshift-monitoring:prometheus-k8s`.

[1] https://github.com/openshift/oauth-apiserver/blob/523718b26f0091e4cc8fa11c1e2b719fcda3e5c2/pkg/user/apiserver/registry/user/etcd/etcd.go#L104-L107 
[2] https://github.com/openshift/oauth-apiserver/blob/79d57fb74fff0ed25b84c8edd18d12d4fc784845/pkg/oauth/apis/oauth/validation/validation.go#L341
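The second idea can be sketched like this. All names are illustrative, and a plain map stands in for the etcd-backed store; this is not the actual `REST#Get` code, just the shape of the flow:

```go
package main

import (
	"errors"
	"fmt"
)

const bootstrapUser = "kube:admin"

// getUser sketches the Get path from [1]: the name is validated before
// the storage lookup, and the proposed fix special-cases the bootstrap
// user instead of returning the 422 Invalid error from validation.
func getUser(store map[string]string, name string) (string, error) {
	// Hypothetical stand-in for the ValidateUserName call in [1]:
	// ":" names without the "b64:" prefix are invalid...
	if name != bootstrapUser && invalidColonName(name) {
		return "", errors.New("422: Invalid value") // what produced the audit entries
	}
	// ...but the bootstrap user skips validation and falls through here.
	u, ok := store[name]
	if !ok {
		return "", fmt.Errorf("404: users %q not found", name)
	}
	return u, nil
}

func invalidColonName(name string) bool {
	for i, r := range name {
		if r == ':' && name[:i]+":" != "b64:" {
			return true
		}
	}
	return false
}

func main() {
	store := map[string]string{"alice": "User alice"}
	_, err := getUser(store, "kube:admin")
	fmt.Println(err) // kube:admin has no User object, so the result is a 404, not a 422
}
```

Note the consequence, which matches comment 6: once validation lets kube:admin through, the lookup still fails, because the bootstrap user has no backing User object, so the response becomes a 404 rather than a 422.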

Comment 6 liyao 2021-06-25 08:51:25 UTC
Tested in cluster 4.9.0-0.nightly-2021-06-21-191858

1. oc login -u kubeadmin -p <password>
2. Run oc commands as kubeadmin
3. Check audit logs for kubeadmin: the previous 422 code is gone, but a 404 code is now returned for the valid user kubeadmin, which is not reasonable
sh-4.4#  grep '"requestURI":"/apis/user.openshift.io/v1/users/kube:admin".*"status":"Failure","reason":"Invalid","code":422' /var/log/*-apiserver/audit*.log
sh-4.4# 
sh-4.4# grep '"requestURI":"/apis/user.openshift.io/v1/users/kube:admin"' /var/log/*-apiserver/audit*.log
/var/log/kube-apiserver/audit-2021-06-25T07-27-52.688.log:{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"00b8eea1-85a5-4f16-bced-ff5f4012b82a","stage":"ResponseComplete","requestURI":"/apis/user.openshift.io/v1/users/kube:admin","verb":"get","user":{"username":"kube:admin","groups":["system:cluster-admins","system:authenticated"],"extra":{"scopes.authorization.openshift.io":["user:full"]}},"sourceIPs":["10.0.38.122"],"userAgent":"oc/4.8.0 (linux/amd64) kubernetes/4c2094c","objectRef":{"resource":"users","name":"kube:admin","apiGroup":"user.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":404},"requestReceivedTimestamp":"2021-06-25T07:27:23.732654Z","stageTimestamp":"2021-06-25T07:27:23.795508Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"cluster-admins\" of ClusterRole \"cluster-admin\" to Group \"system:cluster-admins\""}}
...<snipped>...

4. Check the oc command result for kubeadmin; it also shows the valid user kubeadmin as not found
$ oc get --raw /apis/user.openshift.io/v1/users/kube:admin
Error from server (NotFound): users.user.openshift.io "kube:admin" not found

Comment 7 Sebastian Łaskawiec 2021-06-28 09:35:30 UTC
As we agreed with Standa and Sergiusz, we don't want to introduce any specific behavior for the "kube:admin" user. This is a bootstrap user, which doesn't have its own object in Users.

Comment 8 liyao 2021-06-28 10:03:54 UTC
Based on https://bugzilla.redhat.com/show_bug.cgi?id=1955435#c7 and the results verified in https://bugzilla.redhat.com/show_bug.cgi?id=1955435#c6, the test result is as expected.

Comment 11 errata-xmlrpc 2021-10-18 17:30:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

