Bug 2044360 - [KMS] clusterrolebinding vault-tokenreview-binding is not deleted during uninstall of ODF cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: documentation
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Olive Lakra
QA Contact: Rachael
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-24 12:58 UTC by Rachael
Modified: 2023-08-09 16:43 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:44 UTC
Embargoed:



Description Rachael 2022-01-24 12:58:34 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When an ODF 4.10 cluster is deployed with cluster-wide encryption enabled using the KMS Kubernetes authentication method (service account), the user creates two resources: a serviceaccount named "odf-vault-auth" in the openshift-storage namespace, and a clusterrolebinding named "vault-tokenreview-binding". When the storagesystem is deleted and the ODF cluster is uninstalled, the serviceaccount is removed as part of the namespace deletion, but the clusterrolebinding is neither cleaned up nor deleted.


$ oc get project openshift-storage
Error from server (NotFound): namespaces "openshift-storage" not found

$ oc get clusterrolebinding vault-tokenreview-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2022-01-24T10:22:53Z"
  name: vault-tokenreview-binding
  resourceVersion: "132204"
  uid: 3d70d4c2-43e9-4626-be63-312559603d44
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: odf-vault-auth
  namespace: openshift-storage


Version of all relevant components (if applicable):
---------------------------------------------------
OCP: 4.10.0-0.nightly-2022-01-22-102609
ODF: odf-operator.v4.10.0      full_version=4.10.0-113




Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
-------------------

1. Install the ODF operator

2. In the openshift-storage namespace, create a service account called odf-vault-auth
   # oc -n openshift-storage create serviceaccount odf-vault-auth

3. Create a clusterrolebinding as shown below
   # oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth

4. Get the secret name from the service account and store it for the next step
   # VAULT_SA_SECRET_NAME=$(oc -n openshift-storage get sa odf-vault-auth -o jsonpath="{.secrets[*]['name']}")

5. Get the Token and CA cert used to configure the kube auth in Vault
   # SA_JWT_TOKEN=$(oc -n openshift-storage get secret "$VAULT_SA_SECRET_NAME" -o jsonpath="{.data.token}" | base64 --decode; echo)
   # SA_CA_CRT=$(oc -n openshift-storage get secret "$VAULT_SA_SECRET_NAME" -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
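   As an aside on the commands above: the fields under a secret's .data are base64-encoded, which is why both commands pipe through `base64 --decode`. A minimal, self-contained illustration of that decoding step, using a placeholder in place of the real JWT:

   ```shell
   # Secret .data fields are base64-encoded; decoding recovers the raw
   # value. 'placeholder-jwt' stands in for the actual token here.
   ENCODED=$(printf '%s' 'placeholder-jwt' | base64)
   printf '%s' "$ENCODED" | base64 --decode
   # prints: placeholder-jwt
   ```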

6. Get the OCP endpoint and sa issuer
   # K8S_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
   # issuer="$(oc get authentication.config cluster -o template="{{ .spec.serviceAccountIssuer }}")"

7. On the vault node/pod, configure the kube auth method
   # vault auth enable kubernetes
   
   # vault write auth/kubernetes/config \
          token_reviewer_jwt="$SA_JWT_TOKEN" \
          kubernetes_host="$K8S_HOST" \
          kubernetes_ca_cert="$SA_CA_CRT" \
          issuer="$issuer"

   # vault write auth/kubernetes/role/odf-rook-ceph-op \
        bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
        bound_service_account_namespaces=openshift-storage \
        policies=odf \
        ttl=1440h

   # vault write auth/kubernetes/role/odf-rook-ceph-osd \
        bound_service_account_names=rook-ceph-osd \
        bound_service_account_namespaces=openshift-storage \
        policies=odf \
        ttl=1440h

8. From the ODF management console, follow the steps to create the storagesystem.
9. On the Security and network page, click "Enable data encryption for block and file storage".
10. Select "Cluster-wide encryption" as the encryption level and click "Connect to an external key management service".
11. Set the Authentication method to "Kubernetes" and fill out the rest of the details.
12. Review and create the storagesystem
13. Uninstall the ODF cluster as mentioned here: https://access.redhat.com/articles/6525111


Actual results:
---------------
The clusterrolebinding vault-tokenreview-binding is still present in the cluster.

Expected results:
-----------------
The clusterrolebinding should be deleted.

Comment 3 Sébastien Han 2022-01-25 14:00:56 UTC
Clusterrolebindings are not namespaced resources. I believe OLM only removes the clusterrolebindings associated with the CSV when uninstalling, so leaving this one behind is expected since it was created by the user.
I think we should document this as part of the uninstall procedure. Essentially, this clusterrolebinding must be removed manually with:

oc delete clusterrolebinding vault-tokenreview-binding

Thanks.
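If the uninstall procedure documents this step, one way to write it defensively is sketched below; --ignore-not-found is a standard oc/kubectl flag that turns the delete into a no-op when the binding is already gone, so the step is safe to re-run (this is a suggestion, not the documented procedure):

```shell
# Manual cleanup after ODF uninstall: remove the user-created
# clusterrolebinding that namespace deletion does not touch.
# Requires cluster-admin; --ignore-not-found makes it idempotent.
oc delete clusterrolebinding vault-tokenreview-binding --ignore-not-found
```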

