Bug 1905850
Summary: | `oc adm policy who-can` failed to check the `operatorcondition/status` resource | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Jian Zhang <jiazha> |
Component: | oc | Assignee: | Filip Krepinsky <fkrepins> |
oc sub component: | oc | QA Contact: | zhou ying <yinzhou> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | medium | ||
Priority: | medium | CC: | agreene, aos-bugs, fkrepins, krizza, kuiwang, maszulik, mfojtik |
Version: | 4.7 | Keywords: | Reopened |
Target Milestone: | --- | ||
Target Release: | 4.11.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Enhancement
Doc Text: |
Feature:
Add a `--subresource` option to the `oc adm policy who-can` command to check who can perform a specified action on a subresource.
Reason:
This functionality was missing before; it was only possible to check a resource, not its subresources.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2022-08-10 10:35:34 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Jian Zhang
2020-12-09 08:32:05 UTC
Hello @jian, I do not believe that either of the issues you are encountering are bugs.

For #1, the command you provided returns the same error for any of OLM's CRDs that introduce the status subresource [1], along with all K8s resources that have similar status subresources. The resources absolutely exist, as shown here:

```
$ kubectl get --raw=/apis/operators.coreos.com/v1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "operators.coreos.com/v1",
  "resources": [
    ...
    {
      "name": "operatorconditions",
      "singularName": "operatorcondition",
      "namespaced": true,
      "kind": "OperatorCondition",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ],
      "storageVersionHash": "FTUxZd413Oo="
    },
    {
      "name": "operatorconditions/status",
      "singularName": "",
      "namespaced": true,
      "kind": "OperatorCondition",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    }
    ...
  ]
}
```

In regards to #2, I believe that the command is failing because the ServiceAccount only has permission to modify an OperatorCondition with a specific name, as shown below:

```
$ oc get roles -n default etcdoperator.v0.9.4 -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: etcdoperator.v0.9.4
  namespace: default
  ...
rules:
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4   # <-- specific resource name
  resources:
  - operatorconditions
  verbs:
  - get
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4   # <-- specific resource name
  resources:
  - operatorconditions/status
  verbs:
  - get,update,patch
```

Ref:
[1] https://book-v1.book.kubebuilder.io/basics/status_subresource.html

@Jian, I followed up on #2:

1. The operator's ServiceAccount can only get the resource; it is not given the ability to update or patch the resource by OLM.
2. The operator's ServiceAccount will not appear in the list even if you modify the command for get permissions.
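The resourceNames scoping in the Role above can be modeled with a small sketch. This is illustrative Python, not the actual kube-apiserver authorizer; `rule_allows` is a hypothetical helper, and the verbs are written as a proper list here:

```python
# Minimal model of how an RBAC rule's resourceNames field scopes the rule
# to specific named objects. Not client-go; purely illustrative.

def rule_allows(rule, verb, resource, name=None):
    """Return True if this single RBAC rule grants `verb` on `resource` (optionally a named object)."""
    if verb not in rule.get("verbs", []):
        return False
    if resource not in rule.get("resources", []):
        return False
    # An empty or absent resourceNames list means "all objects of this resource".
    names = rule.get("resourceNames", [])
    if names and name not in names:
        return False
    return True

rule = {
    "resources": ["operatorconditions/status"],
    "resourceNames": ["etcdoperator.v0.9.4"],
    "verbs": ["get", "update", "patch"],
}

print(rule_allows(rule, "update", "operatorconditions/status", "etcdoperator.v0.9.4"))  # True
print(rule_allows(rule, "update", "operatorconditions/status", "some-other-name"))      # False
# A name-less request (e.g. across all objects) is not granted by a name-scoped rule:
print(rule_allows(rule, "update", "operatorconditions/status"))                          # False
```

This is why a name-scoped rule does not make the ServiceAccount show up for unrestricted access to the resource.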
This is due to the resourceName constraint mentioned earlier, which I tested below:

```
$ oc adm policy who-can get operatorcondition
resourceaccessreviewresponse.authorization.openshift.io/<unknown>

Namespace: default
Verb:      get
Resource:  operatorconditions.operators.coreos.com

Users:
  system:admin
  system:serviceaccount:kube-system:generic-garbage-collector
  system:serviceaccount:kube-system:namespace-controller
  system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator
  system:serviceaccount:openshift-apiserver:openshift-apiserver-sa
  system:serviceaccount:openshift-authentication-operator:authentication-operator
  system:serviceaccount:openshift-authentication:oauth-openshift
  system:serviceaccount:openshift-cluster-storage-operator:cluster-storage-operator
  system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator
  system:serviceaccount:openshift-cluster-version:default
  system:serviceaccount:openshift-config-operator:openshift-config-operator
  system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator
  system:serviceaccount:openshift-controller-manager:openshift-controller-manager-sa
  system:serviceaccount:openshift-etcd-operator:etcd-operator
  system:serviceaccount:openshift-etcd:installer-sa
  system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator
  system:serviceaccount:openshift-kube-apiserver:installer-sa
  system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client
  system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator
  system:serviceaccount:openshift-kube-controller-manager:installer-sa
  system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client
  system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator
  system:serviceaccount:openshift-kube-scheduler:installer-sa
  system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client
  system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator
  system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa
  system:serviceaccount:openshift-machine-config-operator:default
  system:serviceaccount:openshift-network-operator:default
  system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa
  system:serviceaccount:openshift-operator-lifecycle-manager:olm-operator-serviceaccount
  system:serviceaccount:openshift-service-ca-operator:service-ca-operator
  system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-apiserver-remover
  system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-controller-manager-remover
Groups:
  system:cluster-admins
  system:masters

$ oc edit role -n default etcdoperator.v0.9.4
role.rbac.authorization.k8s.io/etcdoperator.v0.9.4 edited

$ oc get role -n default etcdoperator.v0.9.4 -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: etcdoperator.v0.9.4
  namespace: default
  ...
rules:
- apiGroups:
  - operators.coreos.com
  resources:
  - operatorconditions
  verbs:
  - get
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions/status
  verbs:
  - get,update,patch

$ oc adm policy who-can get operatorcondition
resourceaccessreviewresponse.authorization.openshift.io/<unknown>

Namespace: default
Verb:      get
Resource:  operatorconditions.operators.coreos.com

Users:
  system:admin
  system:serviceaccount:default:etcd-operator
  system:serviceaccount:kube-system:generic-garbage-collector
  system:serviceaccount:kube-system:namespace-controller
  system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator
  system:serviceaccount:openshift-apiserver:openshift-apiserver-sa
  system:serviceaccount:openshift-authentication-operator:authentication-operator
  system:serviceaccount:openshift-authentication:oauth-openshift
  system:serviceaccount:openshift-cluster-storage-operator:cluster-storage-operator
  system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator
  system:serviceaccount:openshift-cluster-version:default
  system:serviceaccount:openshift-config-operator:openshift-config-operator
  system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator
  system:serviceaccount:openshift-controller-manager:openshift-controller-manager-sa
  system:serviceaccount:openshift-etcd-operator:etcd-operator
  system:serviceaccount:openshift-etcd:installer-sa
  system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator
  system:serviceaccount:openshift-kube-apiserver:installer-sa
  system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client
  system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator
  system:serviceaccount:openshift-kube-controller-manager:installer-sa
  system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client
  system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator
  system:serviceaccount:openshift-kube-scheduler:installer-sa
  system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client
  system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator
  system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa
  system:serviceaccount:openshift-machine-config-operator:default
  system:serviceaccount:openshift-network-operator:default
  system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa
  system:serviceaccount:openshift-operator-lifecycle-manager:olm-operator-serviceaccount
  system:serviceaccount:openshift-service-ca-operator:service-ca-operator
  system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-apiserver-remover
  system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-controller-manager-remover
Groups:
  system:cluster-admins
  system:masters
```

Marking as Closed/Not a Bug, given that this seems to be user error rather than a bug. Please reopen, @jian, if you disagree.

Hi Alexander,

Thanks for your updates! Seems like the SA permission is as expected.

```
[root@preserve-olm-env data]# oc login https://api.kui122400.qe.devcluster.openshift.com:6443 --token=$(oc sa get-token etcd-operator -n default)
Logged into "https://api.kui122400.qe.devcluster.openshift.com:6443" as "system:serviceaccount:default:etcd-operator" using the token provided.

You don't have any projects. Contact your system administrator to request a project.
```
```
[root@preserve-olm-env data]# oc whoami
system:serviceaccount:default:etcd-operator
[root@preserve-olm-env data]# oc auth can-i --list --namespace=default | grep operatorcondition
operatorconditions.operators.coreos.com/status   []   [etcdoperator.v0.9.4]   [get,update,patch]
operatorconditions.operators.coreos.com          []   [etcdoperator.v0.9.4]   [get]
```

But I am still not sure why the `oc adm policy who-can update` command throws this warning:

```
[root@preserve-olm-env data]# oc adm policy who-can update operatorcondition/status etcdoperator.v0.9.4 -n default | grep default:etcd-operator
Warning: the server doesn't have a resource type 'operatorcondition/status'
[root@preserve-olm-env data]# oc adm policy who-can get operatorcondition etcdoperator.v0.9.4 -n default | grep default:etcd-operator
system:serviceaccount:default:etcd-operator
```

I also checked the help output, but found no way to check permission on the `status` subresource:

```
[root@preserve-olm-env data]# oc adm policy who-can update --help
List who can perform the specified action on a resource

Usage:
  oc adm policy who-can VERB RESOURCE [NAME] [flags]

Options:
  -A, --all-namespaces=false: If true, list who can perform the specified action in all namespaces.
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
  -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

Use "oc adm options" for a list of global command-line options (applies to all commands).
```

I am reopening and forwarding this bug to the Auth team for a look.

The verbs are wrong.
There is no verb "get,update,patch". I think you meant either

```
verbs:
- get
- update
- patch
```

or

```
verbs: ["get", "update", "patch"]
```

in your YAML.

Thanks Standa, sorry for the noise.

Cluster version is 4.7.0-0.nightly-2021-01-06-222035:

```
[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager exec catalog-operator-7f7f97c9bc-4n64q -- olm --version
OLM version: 0.17.0
git commit: abe648a8b0a1b0187ad6d9a4bb467e0ecfb8bf00
```

1. Log in to the cluster as the ServiceAccount:

```
[root@preserve-olm-env data]# oc project
Using project "default" on server "https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443".
[root@preserve-olm-env data]# oc login https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443 --token=$(oc sa get-token etcd-operator -n default)
Logged into "https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443" as "system:serviceaccount:default:etcd-operator" using the token provided.

You don't have any projects. Contact your system administrator to request a project.
[root@preserve-olm-env data]# oc whoami
system:serviceaccount:default:etcd-operator
```

The permission looks good:

```
[root@preserve-olm-env data]# oc auth can-i --list --namespace=default | grep operatorcondition
operatorconditions.operators.coreos.com/status   []   [etcdoperator.v0.9.4]   [get update patch]
operatorconditions.operators.coreos.com          []   [etcdoperator.v0.9.4]   [get]
```

2. Switch to the cluster-admin role:

```
[root@preserve-olm-env data]# oc config use-context default/api-piqin-0107-1-0107-aff-qe-rhcloud-com:6443/system:admin
Switched to context "default/api-piqin-0107-1-0107-aff-qe-rhcloud-com:6443/system:admin".
```
```
[root@preserve-olm-env data]# oc whoami
system:admin
[root@preserve-olm-env data]# oc adm policy who-can get operatorcondition etcdoperator.v0.9.4 -n default | grep default:etcd-operator
system:serviceaccount:default:etcd-operator
```

But it still fails to check the subresource "operatorcondition/status":

```
[root@preserve-olm-env data]# oc adm policy who-can update operatorcondition/status etcdoperator.v0.9.4 -n default | grep default:etcd-operator
Warning: the server doesn't have a resource type 'operatorcondition/status'
```

Moving this on to the Auth team. My question: how do I check the permission of a subresource, such as "operatorcondition/status"?

I don't think you can do that with oc, but I'm not an `oc` expert. The fact that this command does not work for you does not mean the fix by the other team was invalid and that a component switch is appropriate. Next time, please either use needinfo or find me on Slack (#forum-apiserver); I'll be more than happy to help you.

Maciej, I can see that the resource is being parsed by https://github.com/openshift/oc/blob/8fbc95fdb0e31194797127fd79b891857fed36ac/pkg/cli/admin/policy/who_can.go#L110-L130, but I can't make out whether there's any chance that this would work for subresources of CRs. Can you confirm?

Hi Standa,

Thanks for your help! Waiting for Maciej's analysis; changing the status to ASSIGNED first.

(In reply to Standa Laznicka from comment #9)
> Maciej, I can see that the resource is being parsed by
> https://github.com/openshift/oc/blob/8fbc95fdb0e31194797127fd79b891857fed36ac/pkg/cli/admin/policy/who_can.go#L110-L130
> but I can't make out whether there's any chance that this would
> work for subresources of CRs, can you confirm?

The check will work fine with resource/subresource; we'll need to check the code behind the mapper to see why it's not recognizing the resource/subresource form. Eventually we can split the argument and stick the two together after the check.
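Maciej's suggestion (split the argument, resolve the resource, then reattach the subresource) can be sketched roughly as follows. This is a hypothetical outline in Python, not oc's actual Go implementation in who_can.go:

```python
# Rough sketch of the suggested argument handling: split "RESOURCE/SUBRESOURCE"
# before the mapper lookup, so only the base resource is resolved against the
# server's discovery data, and the subresource is carried along separately.

def split_resource_arg(arg):
    """Split e.g. 'operatorcondition/status' into ('operatorcondition', 'status')."""
    resource, _, subresource = arg.partition("/")
    return resource, subresource  # subresource is '' when there is no '/'

print(split_resource_arg("operatorcondition/status"))  # ('operatorcondition', 'status')
print(split_resource_arg("operatorcondition"))         # ('operatorcondition', '')
```

With this split, the mapper only ever sees `operatorcondition`, which it can resolve, instead of the combined string that triggers the "server doesn't have a resource type" warning.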
Linking a similar upstream issue.

Posted a PR with a fix to oc. The upstream issue is similar but is about a different command, `auth can-i`; posted an upstream PR for that as well: https://github.com/kubernetes/kubernetes/pull/110752

We introduced a new option in the fix. This should be the new form:

```
oc adm policy who-can patch operatorcondition --subresource status
```

```
$ ./oc version --client
Client Version: 4.11.0-0.nightly-2022-06-28-160049
Kustomize Version: v4.5.4
$ ./oc adm policy who-can patch operatorcondition --subresource status etcdoperator.v0.9.4 -n default | grep etcd-operator
system:serviceaccount:openshift-etcd-operator:etcd-operator
```

But I can't see the SA in the default project:

```
$ oc get role etcdoperator.v0.9.4 -n default -o yaml
rules:
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions
  verbs:
  - get
  - update
  - patch
```

I guess they must have removed the status subresource from their RBAC requirements.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069
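For illustration of what a `--subresource` flag feeds into: a resource access review carries the subresource as its own attribute rather than as part of the resource string. The field names below follow the shape of the authorization.k8s.io ResourceAttributes type (group, resource, subresource, verb, name, namespace); the helper itself is a hypothetical Python sketch, not oc or client-go code:

```python
# Sketch of an access-review attribute set in which the subresource is a
# separate field, mirroring authorization.k8s.io ResourceAttributes.

def build_resource_attributes(verb, resource, subresource="", name="", namespace=""):
    """Assemble a dict shaped like ResourceAttributes for an access review."""
    return {
        "group": "operators.coreos.com",
        "resource": resource,
        "subresource": subresource,
        "verb": verb,
        "name": name,
        "namespace": namespace,
    }

attrs = build_resource_attributes(
    "patch", "operatorconditions",
    subresource="status",
    name="etcdoperator.v0.9.4",
    namespace="default",
)
print(attrs["subresource"])  # status
```

Keeping the subresource in its own field is what lets the review match RBAC rules written against `operatorconditions/status` without the client having to guess a combined resource name.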