Bug 2054147 - Provider/Consumer: Provider API server crashloopbackoff
Summary: Provider/Consumer: Provider API server crashloopbackoff
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Subham Rai
QA Contact: suchita
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-02-14 09:54 UTC by Subham Rai
Modified: 2023-08-09 17:00 UTC
CC: 9 users

Fixed In Version: 4.10.0-160
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-13 18:53:05 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage ocs-operator issues 1514 0 None open Provider/Consumer: Provider API server crashloopbackoff 2022-02-14 09:54:23 UTC
Github red-hat-storage ocs-operator pull 1516 0 None open ocs-to-ocs: list within namespace 2022-02-14 09:55:31 UTC
Github red-hat-storage ocs-operator pull 1517 0 None open Bug 2054147: [release-4.10] ocs-to-ocs: list within namespace 2022-02-14 11:59:47 UTC
Github red-hat-storage ocs-operator pull 1519 0 None Merged controllers: Do not list node in the namespace 2022-02-15 11:20:33 UTC
Github red-hat-storage ocs-operator pull 1520 0 None open Bug 2054147:[release-4.10] controllers: Do not list node in the namespace 2022-02-15 11:21:01 UTC
Red Hat Product Errata RHSA-2022:1372 0 None None None 2022-04-13 18:53:18 UTC

Description Subham Rai 2022-02-14 09:54:23 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The Provider API server pod goes into a CrashLoopBackOff state due to the following error:
```
failed to start the provider server. failed to create a new OCSConumer instance. failed to list storage consumers. storageconsumers.ocs.openshift.io is forbidden: User "system:serviceaccount:openshift-storage:ocs-provider-server" cannot list resource "storageconsumers" in API group "ocs.openshift.io" at the cluster scope
```
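The error is an RBAC gap: the `ocs-provider-server` service account was attempting a cluster-scoped list of `storageconsumers`, but it only holds namespace-scoped permissions, so the API server rejects the call and the pod crash-loops on startup. The fix in the linked PRs is to list within the namespace instead. A minimal sketch of the namespace-scoped permission model this implies follows; the manifest names and exact verbs here are illustrative, not the operator's actual manifests:

```yaml
# Illustrative sketch only: a namespace-scoped Role granting list on
# storageconsumers, consistent with the "list within namespace" fix.
# The real ocs-operator RBAC manifests may differ.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocs-provider-server        # hypothetical name
  namespace: openshift-storage
rules:
  - apiGroups: ["ocs.openshift.io"]
    resources: ["storageconsumers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocs-provider-server        # hypothetical name
  namespace: openshift-storage
subjects:
  - kind: ServiceAccount
    name: ocs-provider-server
    namespace: openshift-storage
roleRef:
  kind: Role
  name: ocs-provider-server
  apiGroup: rbac.authorization.k8s.io
```

Because a Role (unlike a ClusterRole) only grants access within its own namespace, any list call the server makes must be scoped to `openshift-storage`. Whether the service account can list the resource can be checked with `oc auth can-i list storageconsumers.ocs.openshift.io --as=system:serviceaccount:openshift-storage:ocs-provider-server -n openshift-storage`.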

Comment 5 Mudit Agarwal 2022-02-14 12:08:25 UTC
Backport PR is not merged yet, once merged the BZ will move to MODIFIED automatically.

Comment 10 suchita 2022-03-02 04:56:33 UTC
Verified on ocs-operator.v4.10.0 (full_version: "4.10.0-171")
======================================================================================================================
$ oc get csv
NAME                                               DISPLAY                           VERSION           REPLACES                                           PHASE
configure-alertmanager-operator.v0.1.408-a047eaa   configure-alertmanager-operator   0.1.408-a047eaa   configure-alertmanager-operator.v0.1.406-7952da9   Succeeded
mcg-operator.v4.10.0                               NooBaa Operator                   4.10.0                                                               Succeeded
ocs-operator.v4.10.0                               OpenShift Container Storage       4.10.0                                                               Succeeded
odf-operator.v4.10.0                               OpenShift Data Foundation         4.10.0                                                               Succeeded
route-monitor-operator.v0.1.402-706964f            Route Monitor Operator            0.1.402-706964f   route-monitor-operator.v0.1.399-91f142a            Succeeded

$ oc get csv -n openshift-storage -o json ocs-operator.v4.10.0 | jq '.metadata.labels["full_version"]'
"4.10.0-171"

$ oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-5wxkn                                            3/3     Running     0          20h
csi-cephfsplugin-hb922                                            3/3     Running     0          20h
csi-cephfsplugin-provisioner-6d794d7cfd-74nmd                     6/6     Running     0          20h
csi-cephfsplugin-provisioner-6d794d7cfd-lrnwr                     6/6     Running     0          20h
csi-cephfsplugin-qxtjg                                            3/3     Running     0          20h
csi-rbdplugin-49hvv                                               4/4     Running     0          20h
csi-rbdplugin-5cgfg                                               4/4     Running     0          20h
csi-rbdplugin-jzp2s                                               4/4     Running     0          20h
csi-rbdplugin-provisioner-7cccf75546-nz2ql                        7/7     Running     0          20h
csi-rbdplugin-provisioner-7cccf75546-tksnc                        7/7     Running     0          20h
noobaa-operator-dd8fc9f48-k7pnj                                   1/1     Running     0          21h
ocs-metrics-exporter-6dfb667c69-k6prq                             1/1     Running     0          21h
ocs-operator-544d8cc47d-nlbf6                                     1/1     Running     0          18h
ocs-provider-server-549f6cb4dd-xzg6h                              1/1     Running     0          20h
odf-console-6bbf7d95-2lhxw                                        1/1     Running     0          21h
odf-operator-controller-manager-557f7cc6c8-qrsz6                  2/2     Running     0          21h
rook-ceph-crashcollector-14828511aab675fafd31f3e091d9bd4a-lw2pc   1/1     Running     0          20h
rook-ceph-crashcollector-69cbb061b0ac92be6fd92f985433e85d-kcpvq   1/1     Running     0          20h
rook-ceph-crashcollector-6eb13f6db74059dfd4cb78f6ab73fce5-c962s   1/1     Running     0          20h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-66b6d657wnfd4   2/2     Running     0          20h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-8599dbb7fp7m7   2/2     Running     0          20h
rook-ceph-mgr-a-6c8bc8bd77-h698r                                  2/2     Running     0          20h
rook-ceph-mon-a-66dd577c5b-9thxn                                  2/2     Running     0          20h
rook-ceph-mon-b-5f986c6797-w58lh                                  2/2     Running     0          20h
rook-ceph-mon-c-697477d8c8-npnfn                                  2/2     Running     0          20h
rook-ceph-operator-5db9f784b4-jphqd                               1/1     Running     0          21h
rook-ceph-osd-0-77fc764689-76tvq                                  2/2     Running     0          20h
rook-ceph-osd-1-6c4ffddbc7-nd2pd                                  2/2     Running     0          20h
rook-ceph-osd-2-8d88d9cfc-9t7jf                                   2/2     Running     0          20h
rook-ceph-osd-prepare-ocs-deviceset-0-data-0jm7vg--1-cw9kb        0/1     Completed   0          20h
rook-ceph-osd-prepare-ocs-deviceset-1-data-0m6rvx--1-c5rgx        0/1     Completed   0          20h
rook-ceph-osd-prepare-ocs-deviceset-2-data-044bhp--1-m77b2        0/1     Completed   0          20h
rook-ceph-tools-78bd95d497-f78fb                                  1/1     Running     0          20h

==========================================================================================================

Pod 'ocs-provider-server-549f6cb4dd-xzg6h' is running and the consumer was onboarded successfully.

Marking it as verified.

Comment 12 errata-xmlrpc 2022-04-13 18:53:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372
