Bug 1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource
Summary: `oc adm policy who-can` failed to check the `operatorcondition/status` resource
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.11.0
Assignee: Filip Krepinsky
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-09 08:32 UTC by Jian Zhang
Modified: 2022-08-10 10:36 UTC (History)
7 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Add a "--subresource" option to the `oc adm policy who-can` command to check who can perform a specified action on a subresource.
Reason: This functionality was missing; previously it was only possible to check a resource.
Clone Of:
Environment:
Last Closed: 2022-08-10 10:35:34 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubernetes kubectl issues 1217 0 None open auth/can-i: check subresource if it exists and belongs to resource 2022-05-25 17:13:23 UTC
Github openshift oc pull 1179 0 None open Bug 1905850: add a new option for checking a subresource to oc adm policy who-can 2022-06-23 21:46:49 UTC
Github operator-framework operator-lifecycle-manager pull 1939 0 None closed Bug 1905850: Fix operatorcondition role verbs 2021-02-02 17:46:31 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 10:36:06 UTC

Description Jian Zhang 2020-12-09 08:32:05 UTC
Description of problem:
1, There is no resource called "operatorcondition/status".

[root@preserve-olm-env data]# oc adm policy who-can patch operatorcondition/status
Warning: the server doesn't have a resource type 'operatorcondition/status'
resourceaccessreviewresponse.authorization.openshift.io/<unknown> 
...

2, The SA in the default project cannot patch/get/update this operatorcondition resource.

Version-Release number of selected component (if applicable):
[root@preserve-olm-env data]# oc version
Client Version: 4.7.0-0.nightly-2020-12-04-013308
Server Version: 4.7.0-0.nightly-2020-12-09-012634
Kubernetes Version: v1.19.2+ad738ba

How reproducible:
always

Steps to Reproduce:
1. Create an OCP 4.7 cluster.
2. Log in as cluster-admin and subscribe to an operator, such as etcd.
web console: "Operators" -> "OperatorHub" -> "etcd", subscribe to it in the default project.

3. Check the role in the default project.
[root@preserve-olm-env data]#  oc get role etcdoperator.v0.9.4 -o yaml
...
rules:
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions
  verbs:
  - get
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions/status
  verbs:
  - get,update,patch

[root@preserve-olm-env data]# oc get sa
NAME            SECRETS   AGE
...
etcd-operator   2         74m

4. Check who can patch these resources.



Actual results:
1, The operatorcondition/status resource doesn't exist.
[root@preserve-olm-env data]# oc adm policy who-can patch operatorcondition/status
Warning: the server doesn't have a resource type 'operatorcondition/status'
resourceaccessreviewresponse.authorization.openshift.io/<unknown> 

Namespace: default
Verb:      patch
Resource:  operatorcondition/status

...

2, The SA in the default project cannot patch/get/update this operatorcondition resource.

[root@preserve-olm-env data]# oc adm policy who-can patch operatorcondition |grep etcd-operator
        system:serviceaccount:openshift-etcd-operator:etcd-operator
[root@preserve-olm-env data]# oc adm policy who-can get operatorcondition |grep etcd-operator
        system:serviceaccount:openshift-etcd-operator:etcd-operator
[root@preserve-olm-env data]# oc adm policy who-can update operatorcondition |grep etcd-operator
        system:serviceaccount:openshift-etcd-operator:etcd-operator

[root@preserve-olm-env data]# oc get sa -n default|grep etcd-operator
etcd-operator   2         88m
[root@preserve-olm-env data]# oc get sa -n openshift-etcd-operator|grep etcd-operator
etcd-operator   2         152m


Expected results:
1, The role should reference resources that exist.
2, The SA should be able to get/patch/update the operatorcondition resource.

Additional info:

[root@preserve-olm-env data]# oc adm policy who-can patch operatorcondition
resourceaccessreviewresponse.authorization.openshift.io/<unknown> 

Namespace: default
Verb:      patch
Resource:  operatorconditions.operators.coreos.com

Users:  system:admin
        system:serviceaccount:kube-system:generic-garbage-collector
        system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator
        system:serviceaccount:openshift-apiserver:openshift-apiserver-sa
        system:serviceaccount:openshift-authentication-operator:authentication-operator
        system:serviceaccount:openshift-authentication:oauth-openshift
        system:serviceaccount:openshift-cluster-storage-operator:cluster-storage-operator
        system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator
        system:serviceaccount:openshift-cluster-version:default
        system:serviceaccount:openshift-config-operator:openshift-config-operator
        system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator
        system:serviceaccount:openshift-etcd-operator:etcd-operator
        system:serviceaccount:openshift-etcd:installer-sa
        system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator
        system:serviceaccount:openshift-kube-apiserver:installer-sa
        system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client
        system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator
        system:serviceaccount:openshift-kube-controller-manager:installer-sa
        system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client
        system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator
        system:serviceaccount:openshift-kube-scheduler:installer-sa
        system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client
        system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator
        system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa
        system:serviceaccount:openshift-machine-config-operator:default
        system:serviceaccount:openshift-network-operator:default
        system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa
        system:serviceaccount:openshift-operator-lifecycle-manager:olm-operator-serviceaccount
        system:serviceaccount:openshift-service-ca-operator:service-ca-operator
Groups: system:cluster-admins
        system:masters

Comment 1 Alexander Greene 2020-12-16 23:52:52 UTC
Hello @jian,

I do not believe that either of the issues you are encountering is a bug.

For #1, the command you provided returns the same error for any of OLM's CRDs that introduce the status subresource [1], along with all K8s resources that have a similar status subresource. The resources do exist, as shown here:
```
$ kubectl get --raw=/apis/operators.coreos.com/v1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "operators.coreos.com/v1",
  "resources": [
    ...
    ...
    ...
    {
      "name": "operatorconditions",
      "singularName": "operatorcondition",
      "namespaced": true,
      "kind": "OperatorCondition",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ],
      "storageVersionHash": "FTUxZd413Oo="
    },
    {
      "name": "operatorconditions/status",
      "singularName": "",
      "namespaced": true,
      "kind": "OperatorCondition",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    }
    ...
    ...
    ...
  ]
}

```
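The discovery output above can be checked programmatically. A minimal sketch (using a hard-coded, trimmed-down copy of the APIResourceList rather than a live cluster query):

```python
# Sketch: check whether a resource or resource/subresource name appears
# in a discovery APIResourceList. The dict below is an abbreviated,
# hypothetical copy of the JSON above; a real client would fetch it
# from /apis/operators.coreos.com/v1.
api_resource_list = {
    "kind": "APIResourceList",
    "groupVersion": "operators.coreos.com/v1",
    "resources": [
        {"name": "operatorconditions",
         "verbs": ["get", "list", "patch", "create", "update", "watch"]},
        {"name": "operatorconditions/status",
         "verbs": ["get", "patch", "update"]},
    ],
}

def resource_exists(api_list, name):
    """True if `name` (resource or resource/subresource) is listed."""
    return any(r["name"] == name for r in api_list["resources"])

print(resource_exists(api_resource_list, "operatorconditions/status"))  # True
```

So the subresource is a distinct entry in the discovery document; the warning from `oc adm policy who-can` came from how the command parsed its argument, not from the server lacking the resource.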

Regarding #2, I believe the command is failing because the ServiceAccount only has permission to modify an OperatorCondition with a specific name, as shown below:
```
oc get roles -n default etcdoperator.v0.9.4 -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: etcdoperator.v0.9.4
  namespace: default
  ...
rules:
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4 <-- Specific resource name
  resources:
  - operatorconditions
  verbs:
  - get
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4 <-- Specific resource name
  resources:
  - operatorconditions/status
  verbs:
  - get,update,patch
```


Ref:
[1] https://book-v1.book.kubebuilder.io/basics/status_subresource.html

Comment 2 Alexander Greene 2020-12-17 00:10:34 UTC
@Jian,

I followed up on #2:
1. The operator's ServiceAccount can only get the resource; OLM does not give it the ability to update or patch the resource.
2. The operator's ServiceAccount will not appear in the list even if you modify the command to check get permissions. This is due to the resourceName constraint mentioned earlier, which I tested below:

```
 $ oc adm policy who-can get operatorcondition 
resourceaccessreviewresponse.authorization.openshift.io/<unknown> 

Namespace: default
Verb:      get
Resource:  operatorconditions.operators.coreos.com

Users:  system:admin
        system:serviceaccount:kube-system:generic-garbage-collector
        system:serviceaccount:kube-system:namespace-controller
        system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator
        system:serviceaccount:openshift-apiserver:openshift-apiserver-sa
        system:serviceaccount:openshift-authentication-operator:authentication-operator
        system:serviceaccount:openshift-authentication:oauth-openshift
        system:serviceaccount:openshift-cluster-storage-operator:cluster-storage-operator
        system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator
        system:serviceaccount:openshift-cluster-version:default
        system:serviceaccount:openshift-config-operator:openshift-config-operator
        system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator
        system:serviceaccount:openshift-controller-manager:openshift-controller-manager-sa
        system:serviceaccount:openshift-etcd-operator:etcd-operator
        system:serviceaccount:openshift-etcd:installer-sa
        system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator
        system:serviceaccount:openshift-kube-apiserver:installer-sa
        system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client
        system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator
        system:serviceaccount:openshift-kube-controller-manager:installer-sa
        system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client
        system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator
        system:serviceaccount:openshift-kube-scheduler:installer-sa
        system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client
        system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator
        system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa
        system:serviceaccount:openshift-machine-config-operator:default
        system:serviceaccount:openshift-network-operator:default
        system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa
        system:serviceaccount:openshift-operator-lifecycle-manager:olm-operator-serviceaccount
        system:serviceaccount:openshift-service-ca-operator:service-ca-operator
        system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-apiserver-remover
        system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-controller-manager-remover
Groups: system:cluster-admins
        system:masters

$ oc edit role -n default etcdoperator.v0.9.4
role.rbac.authorization.k8s.io/etcdoperator.v0.9.4 edited

$ oc get role -n default etcdoperator.v0.9.4 -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: etcdoperator.v0.9.4
  namespace: default
  ...
rules:
- apiGroups:
  - operators.coreos.com
  resources:
  - operatorconditions
  verbs:
  - get
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions/status
  verbs:
  - get,update,patch

$ oc adm policy who-can get operatorcondition 
resourceaccessreviewresponse.authorization.openshift.io/<unknown> 

Namespace: default
Verb:      get
Resource:  operatorconditions.operators.coreos.com

Users:  system:admin
        system:serviceaccount:default:etcd-operator
        system:serviceaccount:kube-system:generic-garbage-collector
        system:serviceaccount:kube-system:namespace-controller
        system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator
        system:serviceaccount:openshift-apiserver:openshift-apiserver-sa
        system:serviceaccount:openshift-authentication-operator:authentication-operator
        system:serviceaccount:openshift-authentication:oauth-openshift
        system:serviceaccount:openshift-cluster-storage-operator:cluster-storage-operator
        system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator
        system:serviceaccount:openshift-cluster-version:default
        system:serviceaccount:openshift-config-operator:openshift-config-operator
        system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator
        system:serviceaccount:openshift-controller-manager:openshift-controller-manager-sa
        system:serviceaccount:openshift-etcd-operator:etcd-operator
        system:serviceaccount:openshift-etcd:installer-sa
        system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator
        system:serviceaccount:openshift-kube-apiserver:installer-sa
        system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client
        system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator
        system:serviceaccount:openshift-kube-controller-manager:installer-sa
        system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client
        system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator
        system:serviceaccount:openshift-kube-scheduler:installer-sa
        system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client
        system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator
        system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa
        system:serviceaccount:openshift-machine-config-operator:default
        system:serviceaccount:openshift-network-operator:default
        system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa
        system:serviceaccount:openshift-operator-lifecycle-manager:olm-operator-serviceaccount
        system:serviceaccount:openshift-service-ca-operator:service-ca-operator
        system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-apiserver-remover
        system:serviceaccount:openshift-service-catalog-removed:openshift-service-catalog-controller-manager-remover
Groups: system:cluster-admins
        system:masters
```
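The resourceNames behavior above can be illustrated with a simplified sketch of RBAC rule matching (this is a toy model for illustration, not the real Kubernetes authorizer): a rule scoped by resourceNames only matches a request for that specific object, so an unnamed "who can get operatorconditions" query does not match it.

```python
# Simplified sketch of how an RBAC rule matches a request.
# A rule with resourceNames matches only requests naming that object,
# which is why the name-scoped etcd role doesn't show up for an
# unnamed who-can query.
def rule_allows(rule, verb, resource, name=None):
    if verb not in rule["verbs"]:
        return False
    if resource not in rule["resources"]:
        return False
    names = rule.get("resourceNames")
    if names:  # name-scoped rule: only matches a request for that name
        return name is not None and name in names
    return True

rule = {
    "verbs": ["get"],
    "resources": ["operatorconditions"],
    "resourceNames": ["etcdoperator.v0.9.4"],
}

print(rule_allows(rule, "get", "operatorconditions"))                         # False
print(rule_allows(rule, "get", "operatorconditions", "etcdoperator.v0.9.4"))  # True
```

This matches the experiment above: once the resourceNames constraint is edited out of the role, the SA appears in the unnamed who-can listing.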

Comment 3 Alexander Greene 2020-12-17 18:41:24 UTC
Marking as Closed/Not a Bug, since this seems to be user error rather than a bug. Please reopen, @jian, if you disagree.

Comment 4 Jian Zhang 2020-12-24 07:08:56 UTC
Hi Alexander,

Thanks for the updates! It seems the SA permissions are as expected.
[root@preserve-olm-env data]# oc login https://api.kui122400.qe.devcluster.openshift.com:6443 --token=$(oc sa get-token etcd-operator -n default)
Logged into "https://api.kui122400.qe.devcluster.openshift.com:6443" as "system:serviceaccount:default:etcd-operator" using the token provided.

You don't have any projects. Contact your system administrator to request a project.
[root@preserve-olm-env data]# oc whoami
system:serviceaccount:default:etcd-operator
[root@preserve-olm-env data]# oc auth can-i --list --namespace=default |grep operatorcondition
operatorconditions.operators.coreos.com/status                       []                                    [etcdoperator.v0.9.4]   [get,update,patch]
operatorconditions.operators.coreos.com                              []                                    [etcdoperator.v0.9.4]   [get]

But I'm still not sure why the `oc adm policy who-can update` command throws the warning `the server doesn't have a resource type 'operatorcondition/status'`:

[root@preserve-olm-env data]# oc adm policy who-can update  operatorcondition/status etcdoperator.v0.9.4 -n default|grep default:etcd-operator
Warning: the server doesn't have a resource type 'operatorcondition/status'

[root@preserve-olm-env data]# oc adm policy who-can get  operatorcondition etcdoperator.v0.9.4 -n default|grep default:etcd-operator
        system:serviceaccount:default:etcd-operator

I also checked the help info but found no way to check permissions on the `status` subresource.

[root@preserve-olm-env data]# oc adm policy who-can update --help
List who can perform the specified action on a resource

Usage:
  oc adm policy who-can VERB RESOURCE [NAME] [flags]

Options:
  -A, --all-namespaces=false: If true, list who can perform the specified action in all namespaces.
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in
the template. Only applies to golang and jsonpath output formats.
  -o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The
template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

Use "oc adm options" for a list of global command-line options (applies to all commands).


I'm reopening and forwarding this bug to the Auth team for a look.

Comment 5 Standa Laznicka 2021-01-04 10:47:07 UTC
The verbs are wrong.

There is no verb "get,update,patch".

I think you meant either

verbs:
- get
- update
- patch

or

verbs: ["get", "update", "patch"]

in your yaml.
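The difference matters because RBAC matches verbs by exact string comparison. A quick illustration in plain Python, simulating what the two YAML forms parse to:

```python
# The YAML scalar "- get,update,patch" parses to ONE list item
# containing a comma-joined string, not three separate verbs,
# so no verb check ever matches it.
broken_verbs = ["get,update,patch"]          # from "- get,update,patch"
fixed_verbs = ["get", "update", "patch"]     # from the corrected forms

print("update" in broken_verbs)  # False: no element is exactly "update"
print("update" in fixed_verbs)   # True
```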

Comment 6 Alexander Greene 2021-01-05 21:46:04 UTC
Thanks Standa, sorry for the noise.

Comment 8 Jian Zhang 2021-01-07 07:18:49 UTC
Cluster version is 4.7.0-0.nightly-2021-01-06-222035
[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager  exec catalog-operator-7f7f97c9bc-4n64q -- olm --version
OLM version: 0.17.0
git commit: abe648a8b0a1b0187ad6d9a4bb467e0ecfb8bf00

1, Log in to the cluster as the SA.
[root@preserve-olm-env data]# oc project
Using project "default" on server "https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443".
[root@preserve-olm-env data]# oc login https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443 --token=$(oc sa get-token etcd-operator -n default)
Logged into "https://api.piqin-0107-1.0107-aff.qe.rhcloud.com:6443" as "system:serviceaccount:default:etcd-operator" using the token provided.

You don't have any projects. Contact your system administrator to request a project.
[root@preserve-olm-env data]# 
[root@preserve-olm-env data]# oc whoami
system:serviceaccount:default:etcd-operator

The permissions look good.
[root@preserve-olm-env data]# oc auth can-i --list --namespace=default |grep operatorcondition
operatorconditions.operators.coreos.com/status                       []                                    [etcdoperator.v0.9.4]   [get update patch]
operatorconditions.operators.coreos.com                              []                                    [etcdoperator.v0.9.4]   [get]

2, Switch to the cluster-admin role.

[root@preserve-olm-env data]# oc config use-context default/api-piqin-0107-1-0107-aff-qe-rhcloud-com:6443/system:admin
Switched to context "default/api-piqin-0107-1-0107-aff-qe-rhcloud-com:6443/system:admin".
[root@preserve-olm-env data]# oc whoami
system:admin

[root@preserve-olm-env data]# oc adm policy who-can get  operatorcondition etcdoperator.v0.9.4 -n default|grep default:etcd-operator
        system:serviceaccount:default:etcd-operator

But checking the subresource "operatorcondition/status" still fails:
[root@preserve-olm-env data]# oc adm policy who-can update  operatorcondition/status etcdoperator.v0.9.4 -n default|grep default:etcd-operator
Warning: the server doesn't have a resource type 'operatorcondition/status'

Moving this to the Auth team. My question: how can I check permissions on a subresource such as "operatorcondition/status"?

Comment 9 Standa Laznicka 2021-01-07 08:38:05 UTC
I don't think you can do that with oc, but I'm not an `oc` expert. The fact that this command does not work for you does not mean the fix by the other team was invalid, or that a component switch is appropriate.

Next time, please either use needinfo or find me on Slack - #forum-apiserver; I'll be more than happy to help you.

Maciej, I can see that the resource is being parsed by https://github.com/openshift/oc/blob/8fbc95fdb0e31194797127fd79b891857fed36ac/pkg/cli/admin/policy/who_can.go#L110-L130 but I can't make out whether there's any chance that this would work for subresources of CRs, can you confirm?

Comment 10 Jian Zhang 2021-01-08 03:14:07 UTC
Hi Standa,

Thanks for your help! While waiting for Maciej's analysis, I'm changing the status to ASSIGNED.

Comment 11 Maciej Szulik 2021-01-08 11:17:51 UTC
(In reply to Standa Laznicka from comment #9)
> Maciej, I can see that the resource is being parsed by
> https://github.com/openshift/oc/blob/
> 8fbc95fdb0e31194797127fd79b891857fed36ac/pkg/cli/admin/policy/who_can.
> go#L110-L130 but I can't make out whether there's any chance that this would
> work for subresources of CRs, can you confirm?

The check itself will work fine with resource/subresource; we'll need to look at the code behind the mapper to see why it's not recognizing the resource/subresource form.
Alternatively, we can split the argument and stick the two parts back together after the check.
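The splitting approach could look roughly like this (a sketch of the idea, not the actual oc code, which is Go):

```python
def split_resource(arg):
    """Split a RESOURCE[/SUBRESOURCE] argument into its parts, so only
    the main resource is handed to the RESTMapper; the subresource is
    re-attached when building the access-review request."""
    resource, _, subresource = arg.partition("/")
    return resource, subresource

print(split_resource("operatorcondition/status"))  # ('operatorcondition', 'status')
print(split_resource("operatorcondition"))         # ('operatorcondition', '')
```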

Comment 13 Filip Krepinsky 2022-05-25 17:13:23 UTC
Linking a similar upstream issue.

Comment 14 Filip Krepinsky 2022-06-23 21:46:25 UTC
Posted a PR with a fix to oc.

The upstream issue is similar but concerns a different command, `oc auth can-i`; posted an upstream PR: https://github.com/kubernetes/kubernetes/pull/110752

Comment 17 Filip Krepinsky 2022-06-29 11:22:23 UTC
We introduced a new option in the fix. This is the new form:

oc adm policy who-can patch operatorcondition --subresource status
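With the subresource split off the resource argument, it travels as a separate field of the access-review request. A rough sketch of the attributes such a review carries (field names modeled on the Kubernetes SubjectAccessReview `resourceAttributes`; values illustrative, not the exact oc internals):

```python
# Rough shape of the access-review attributes the command builds once
# --subresource is supported: the subresource is its own field rather
# than being glued onto the resource name.
def review_attributes(verb, resource, subresource="", name="", namespace="default"):
    return {
        "namespace": namespace,
        "verb": verb,
        "group": "operators.coreos.com",
        "resource": resource,
        "subresource": subresource,  # separate field, not "resource/sub"
        "name": name,
    }

attrs = review_attributes("patch", "operatorconditions", subresource="status",
                          name="etcdoperator.v0.9.4")
print(attrs["resource"], attrs["subresource"])  # operatorconditions status
```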

Comment 19 zhou ying 2022-06-29 12:47:32 UTC
./oc version --client
Client Version: 4.11.0-0.nightly-2022-06-28-160049
Kustomize Version: v4.5.4



./oc adm policy who-can patch operatorcondition --subresource status etcdoperator.v0.9.4 -n default  |grep etcd-operator
        system:serviceaccount:openshift-etcd-operator:etcd-operator

But the SA in the default project doesn't show up.

Comment 20 Filip Krepinsky 2022-06-29 12:59:58 UTC
$ oc get role etcdoperator.v0.9.4 -n default -o yaml

rules:
- apiGroups:
  - operators.coreos.com
  resourceNames:
  - etcdoperator.v0.9.4
  resources:
  - operatorconditions
  verbs:
  - get
  - update
  - patch



I guess they must have removed the status subresource from their RBAC requirements.

Comment 22 errata-xmlrpc 2022-08-10 10:35:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

