Description of problem (please be as detailed as possible and provide log snippets):

The Provider API pod goes into CrashLoopBackOff state due to the error below:

```
failed to start the provider server. failed to create a new OCSConumer instance. failed to list storage consumers. storageconsumers.ocs.openshift.io is forbidden: User "system:serviceaccount:openshift-storage:ocs-provider-server" cannot list resource "storageconsumers" in API group "ocs.openshift.io" at the cluster scope
```
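The error means the RBAC rules bound to the `ocs-provider-server` service account do not grant cluster-scoped `list` on `storageconsumers.ocs.openshift.io`. A minimal sketch of the kind of grant that would satisfy the denied request is below; the ClusterRole/ClusterRoleBinding names are assumptions taken from the service account in the error message, not the actual manifests shipped in the fix:

```yaml
# Hypothetical sketch: the role/binding names "ocs-provider-server" are
# assumptions; only the service account, API group, and resource come from
# the error message above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ocs-provider-server
rules:
  - apiGroups: ["ocs.openshift.io"]
    resources: ["storageconsumers"]
    # "list" is the verb the error shows being denied; get/watch are
    # typically granted alongside it for informer-based clients.
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ocs-provider-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ocs-provider-server
subjects:
  - kind: ServiceAccount
    name: ocs-provider-server
    namespace: openshift-storage
```

With such a binding in place, `oc auth can-i list storageconsumers.ocs.openshift.io --as=system:serviceaccount:openshift-storage:ocs-provider-server` should return `yes`.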
The backport PR is not merged yet; once it is merged, the BZ will move to MODIFIED automatically.
Verified on ocs-operator.v4.10.0, full_version: "4.10.0-171"

```
$ oc get csv
NAME                                               DISPLAY                           VERSION           REPLACES                                           PHASE
configure-alertmanager-operator.v0.1.408-a047eaa   configure-alertmanager-operator   0.1.408-a047eaa   configure-alertmanager-operator.v0.1.406-7952da9   Succeeded
mcg-operator.v4.10.0                               NooBaa Operator                   4.10.0                                                               Succeeded
ocs-operator.v4.10.0                               OpenShift Container Storage       4.10.0                                                               Succeeded
odf-operator.v4.10.0                               OpenShift Data Foundation         4.10.0                                                               Succeeded
route-monitor-operator.v0.1.402-706964f            Route Monitor Operator            0.1.402-706964f   route-monitor-operator.v0.1.399-91f142a            Succeeded

$ oc get csv -n openshift-storage -o json ocs-operator.v4.10.0 | jq '.metadata.labels["full_version"]'
"4.10.0-171"

$ oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-5wxkn                                            3/3     Running     0          20h
csi-cephfsplugin-hb922                                            3/3     Running     0          20h
csi-cephfsplugin-provisioner-6d794d7cfd-74nmd                     6/6     Running     0          20h
csi-cephfsplugin-provisioner-6d794d7cfd-lrnwr                     6/6     Running     0          20h
csi-cephfsplugin-qxtjg                                            3/3     Running     0          20h
csi-rbdplugin-49hvv                                               4/4     Running     0          20h
csi-rbdplugin-5cgfg                                               4/4     Running     0          20h
csi-rbdplugin-jzp2s                                               4/4     Running     0          20h
csi-rbdplugin-provisioner-7cccf75546-nz2ql                        7/7     Running     0          20h
csi-rbdplugin-provisioner-7cccf75546-tksnc                        7/7     Running     0          20h
noobaa-operator-dd8fc9f48-k7pnj                                   1/1     Running     0          21h
ocs-metrics-exporter-6dfb667c69-k6prq                             1/1     Running     0          21h
ocs-operator-544d8cc47d-nlbf6                                     1/1     Running     0          18h
ocs-provider-server-549f6cb4dd-xzg6h                              1/1     Running     0          20h
odf-console-6bbf7d95-2lhxw                                        1/1     Running     0          21h
odf-operator-controller-manager-557f7cc6c8-qrsz6                  2/2     Running     0          21h
rook-ceph-crashcollector-14828511aab675fafd31f3e091d9bd4a-lw2pc   1/1     Running     0          20h
rook-ceph-crashcollector-69cbb061b0ac92be6fd92f985433e85d-kcpvq   1/1     Running     0          20h
rook-ceph-crashcollector-6eb13f6db74059dfd4cb78f6ab73fce5-c962s   1/1     Running     0          20h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-66b6d657wnfd4   2/2     Running     0          20h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-8599dbb7fp7m7   2/2     Running     0          20h
rook-ceph-mgr-a-6c8bc8bd77-h698r                                  2/2     Running     0          20h
rook-ceph-mon-a-66dd577c5b-9thxn                                  2/2     Running     0          20h
rook-ceph-mon-b-5f986c6797-w58lh                                  2/2     Running     0          20h
rook-ceph-mon-c-697477d8c8-npnfn                                  2/2     Running     0          20h
rook-ceph-operator-5db9f784b4-jphqd                               1/1     Running     0          21h
rook-ceph-osd-0-77fc764689-76tvq                                  2/2     Running     0          20h
rook-ceph-osd-1-6c4ffddbc7-nd2pd                                  2/2     Running     0          20h
rook-ceph-osd-2-8d88d9cfc-9t7jf                                   2/2     Running     0          20h
rook-ceph-osd-prepare-ocs-deviceset-0-data-0jm7vg--1-cw9kb        0/1     Completed   0          20h
rook-ceph-osd-prepare-ocs-deviceset-1-data-0m6rvx--1-c5rgx        0/1     Completed   0          20h
rook-ceph-osd-prepare-ocs-deviceset-2-data-044bhp--1-m77b2        0/1     Completed   0          20h
rook-ceph-tools-78bd95d497-f78fb                                  1/1     Running     0          20h
```

Pod 'ocs-provider-server-549f6cb4dd-xzg6h' is running and onboarded the consumer successfully. Marking it as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372