Bug 1855500 - p&f panic: oc delete user --as --as-group doesn't work as expected
Summary: p&f panic: oc delete user --as --as-group doesn't work as expected
Keywords:
Status: VERIFIED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Abu Kashem
QA Contact: scheng
URL:
Whiteboard:
Duplicates: 1873721 (view as bug list)
Depends On:
Blocks: 1874251
 
Reported: 2020-07-10 02:22 UTC by Wang Haoran
Modified: 2020-09-01 21:58 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1874251 (view as bug list)
Environment:
Last Closed:
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github openshift kubernetes pull 323 None closed Bug 1855500: UPSTREAM: 94204: Add impersonated user to system:authenticated group 2020-09-13 08:12:39 UTC

Description Wang Haoran 2020-07-10 02:22:23 UTC
Description of problem:

On OSD, we allow dedicated admins to manage their own customer users. When I try to delete a customer user as cluster-admin while impersonating a dedicated admin, it fails with the following errors. This only happens on OCP 4.5.



oc delete user afuc6@customdomain --as=test@customdomain --as-group=dedicated-admins --loglevel=8
I0709 13:52:03.394095   50512 loader.go:359] Config loaded from file /Users/haowang/.kube/config
I0709 13:52:03.419873   50512 request.go:942] Request Body: {"propagationPolicy":"Background"}
I0709 13:52:03.420987   50512 round_trippers.go:416] DELETE https://api.haowang-e2e.b9x1.s1.devshift.org:6443/apis/user.openshift.io/v1/users/afuc6@customdomain
I0709 13:52:03.421025   50512 round_trippers.go:423] Request Headers:
I0709 13:52:03.421038   50512 round_trippers.go:426]     User-Agent: oc/v0.0.0 (darwin/amd64) kubernetes/$Format
I0709 13:52:03.421052   50512 round_trippers.go:426]     Accept: application/json
I0709 13:52:03.421061   50512 round_trippers.go:426]     Content-Type: application/json
I0709 13:52:03.421070   50512 round_trippers.go:426]     Authorization: Bearer Bn26G6uIxU5oNodrWDo3DlSAeoHFKpiB95HOYtCVAxk
I0709 13:52:03.421078   50512 round_trippers.go:426]     Impersonate-User: test@customdomain
I0709 13:52:03.421088   50512 round_trippers.go:426]     Impersonate-Group: dedicated-admins
I0709 13:52:06.607717   50512 round_trippers.go:441] Response Status:  in 3186 milliseconds
I0709 13:52:06.607759   50512 round_trippers.go:444] Response Headers:
I0709 13:52:06.608241   50512 helpers.go:214] Connection error: Delete https://api.haowang-e2e.b9x1.s1.devshift.org:6443/apis/user.openshift.io/v1/users/afuc6@customdomain: stream error: stream ID 1; INTERNAL_ERROR
F0709 13:52:06.608656   50512 helpers.go:114] Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR

Version-Release number of selected component (if applicable):
Client Version: v4.2.13
Server Version: 4.5.0-rc.7
Kubernetes Version: v1.18.3+3415b61

How reproducible:

always

Steps to Reproduce:
1. On an OCP 4.5 cluster, as cluster-admin, run: oc delete user <user> --as=<impersonated-user> --as-group=dedicated-admins
2. The client fails with "Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR"
3. The kube-apiserver logs show a priority & fairness panic

Actual results:

The delete request fails with "Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR" and the kube-apiserver panics.

Expected results:

The impersonated delete succeeds.

Additional info:
We found this problem in our OSD e2e testing here: 
https://github.com/openshift/osde2e/blob/b7699b9d4e7a24d28509a6faf71c4092d4f7cd54/pkg/e2e/verify/user_webhook.go#L47-L64

Error logs: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/logs/osde2e-stage-aws-e2e-next/1280653081696014336#1:build-log.txt%3A1447

Comment 1 Jan Chaloupka 2020-08-05 13:30:45 UTC
Hello Wang Haoran, can you share both the kube-apiserver and openshift-apiserver logs from around the time the incident occurred? It could be something related to invalid certificates.

Comment 3 Jan Chaloupka 2020-08-19 18:10:14 UTC
Thanks for the logs!!! kube-apiserver-ip-10-0-187-163.ec2.internal.log and kube-apiserver-ip-10-0-252-80.ec2.internal.log contain the following panic:
```
E0806 09:33:24.563135       1 runtime.go:76] Observed a panic: No match; rd=RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/apis/user.openshift.io/v1/users/afuc6@customdomain", Verb:"delete", APIPrefix:"apis", APIGroup:"user.openshift.io", APIVersion:"v1", Namespace:"", Resource:"users", Subresource:"", Name:"afuc6@customdomain", Parts:[]string{"users", "afuc6@customdomain"}}, User: &user.DefaultInfo{Name:"test@customdomain", UID:"", Groups:[]string{"dedicated-admins"}, Extra:map[string][]string{}}}, catchAll={"metadata":{"name":"catch-all","selfLink":"/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all","uid":"6ee022c7-65ef-4ee3-9547-5b804f76e9b5","resourceVersion":"87","generation":1,"creationTimestamp":"2020-08-06T01:09:58Z","managedFields":[{"manager":"api-priority-and-fairness-config-consumer-v1","operation":"Update","apiVersion":"flowcontrol.apiserver.k8s.io/v1alpha1","time":"2020-08-06T01:09:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Dangling\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}},{"manager":"api-priority-and-fairness-config-producer-v1","operation":"Update","apiVersion":"flowcontrol.apiserver.k8s.io/v1alpha1","time":"2020-08-06T01:09:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:distinguisherMethod":{".":{},"f:type":{}},"f:matchingPrecedence":{},"f:priorityLevelConfiguration":{"f:name":{}},"f:rules":{}}}}]},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastT
ransitionTime":"2020-08-06T01:09:58Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}}
goroutine 37991583 [running]:
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1(0xc02b0c5080)
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0x107
panic(0x3d2a960, 0xc01b669c30)
...
```

Sending to the apiserver team for analysis.

Comment 4 Stefan Schimanski 2020-08-21 13:11:59 UTC
Looks like this is around priority & fairness:

E0806 09:33:09.815325       1 runtime.go:76] Observed a panic: No match; rd=RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/apis/user.openshift.io/v1/users/afuc6@customdomain", Verb:"delete", APIPrefix:"apis", APIGroup:"user.openshift.io", APIVersion:"v1", Namespace:"", Resource:"users", Subresource:"", Name:"afuc6@customdomain", Parts:[]string{"users", "afuc6@customdomain"}}, User: &user.DefaultInfo{Name:"test@customdomain", UID:"", Groups:[]string{"dedicated-admins"}, Extra:map[string][]string{}}}, catchAll={"metadata":{"name":"catch-all","selfLink":"/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all","uid":"6ee022c7-65ef-4ee3-9547-5b804f76e9b5","resourceVersion":"87","generation":1,"creationTimestamp":"2020-08-06T01:09:58Z","managedFields":[{"manager":"api-priority-and-fairness-config-consumer-v1","operation":"Update","apiVersion":"flowcontrol.apiserver.k8s.io/v1alpha1","time":"2020-08-06T01:09:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Dangling\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}},{"manager":"api-priority-and-fairness-config-producer-v1","operation":"Update","apiVersion":"flowcontrol.apiserver.k8s.io/v1alpha1","time":"2020-08-06T01:09:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:distinguisherMethod":{".":{},"f:type":{}},"f:matchingPrecedence":{},"f:priorityLevelConfiguration":{"f:name":{}},"f:rules":{}}}}]},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastT
ransitionTime":"2020-08-06T01:09:58Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}}

Comment 5 Abu Kashem 2020-08-21 15:44:11 UTC
Hi haowang@redhat.com,

The impersonated user "test@customdomain" does not have "system:authenticated" in its Groups:

  User: &user.DefaultInfo{
    Name:"test@customdomain",
    UID:"",
    Groups:[]string{"dedicated-admins"},
    Extra:map[string][]string{}
  }


This causes a "no match" in the priority & fairness logic (which matches each request's user against the configured flow schemas, falling back to the catch-all schema) and, consequently, a panic. I will investigate further to find a proper solution.

In the meantime, as a workaround, adding "--as-group=system:authenticated" to your oc command should work. You can make that change in your e2e test and let me know if it helps.
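The mismatch can be illustrated with a small Python sketch (hypothetical code, not the actual kube-apiserver implementation): the catch-all FlowSchema quoted in the panic lists only the groups "system:unauthenticated" and "system:authenticated" as subjects, so a request whose user carries neither group matches nothing.

```python
# Hypothetical sketch of FlowSchema subject matching; not the real
# kube-apiserver code. The catch-all schema from the panic above matches
# requests only for these two groups:
CATCH_ALL_GROUPS = {"system:unauthenticated", "system:authenticated"}

def matches_catch_all(user_groups):
    """Return True if any of the user's groups is a catch-all subject."""
    return bool(CATCH_ALL_GROUPS & set(user_groups))

# The impersonated user from the panic: Groups:["dedicated-admins"]
print(matches_catch_all({"dedicated-admins"}))  # → False (no match, panic)

# With the workaround flag --as-group=system:authenticated added:
print(matches_catch_all({"dedicated-admins", "system:authenticated"}))  # → True
```

The workaround simply makes the impersonated user a member of a group the catch-all schema recognizes.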

Comment 6 Wang Haoran 2020-08-24 03:29:30 UTC
Hi akashem@redhat.com
The workaround works well.

Comment 7 Abu Kashem 2020-08-25 16:39:59 UTC
Opened an upstream PR to resolve the issue - https://github.com/kubernetes/kubernetes/pull/94204

In summary, an impersonated user that passes the authorization check should be added to the "system:authenticated" group.
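The direction of the fix can be sketched as follows (a hypothetical illustration of the behavior described above, not the code from the upstream PR): once the impersonation request is authorized, the resulting user info gains "system:authenticated" unless the caller explicitly impersonates an unauthenticated user.

```python
SYSTEM_AUTHENTICATED = "system:authenticated"
SYSTEM_UNAUTHENTICATED = "system:unauthenticated"

def impersonated_groups(requested_groups):
    """Sketch: an impersonated user that passed the authorization check is,
    by definition, authenticated, so ensure system:authenticated is present
    unless unauthenticated impersonation was explicitly requested."""
    groups = list(requested_groups)
    if SYSTEM_AUTHENTICATED not in groups and SYSTEM_UNAUTHENTICATED not in groups:
        groups.append(SYSTEM_AUTHENTICATED)
    return groups

print(impersonated_groups(["dedicated-admins"]))
# → ['dedicated-admins', 'system:authenticated']
```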

Comment 8 Standa Laznicka 2020-08-31 07:07:27 UTC
*** Bug 1873721 has been marked as a duplicate of this bug. ***

Comment 9 Stefan Schimanski 2020-08-31 07:58:47 UTC
Client Version: v4.2.13
Server Version: 4.5.0-rc.7

This is an unsupported version skew.

