Description of problem (please be as detailed as possible and provide log snippets):

ocs-operator.v4.16.0-61 failed to install because the ocs-operator pod is in CrashLoopBackOff (CLBO).

Version of all relevant components (if applicable):
ocs-operator.v4.16.0-61

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
1/1

Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Install ODF using ocs-ci.
2. Check the status of all CSVs.

Actual results:

$ oc get csv
NAME                                        DISPLAY                            VERSION            REPLACES   PHASE
mcg-operator.v4.16.0-61.stable              NooBaa Operator                    4.16.0-61.stable              Succeeded
ocs-client-operator.v4.16.0-61.stable       OpenShift Data Foundation Client   4.16.0-61.stable              Succeeded
ocs-operator.v4.16.0-61.stable              OpenShift Container Storage        4.16.0-61.stable              Failed
odf-csi-addons-operator.v4.16.0-61.stable   CSI Addons                         4.16.0-61.stable              Succeeded
odf-operator.v4.16.0-61.stable              OpenShift Data Foundation          4.16.0-61.stable              Succeeded
odf-prometheus-operator.v4.16.0-61.stable   Prometheus Operator                4.16.0-61.stable              Succeeded

Expected results:
All CSVs should be in the Succeeded state.

Additional info:

$ oc get pods
NAME                                                      READY   STATUS              RESTARTS        AGE
compute-0-debug                                           1/1     Running             0               2m24s
compute-1-debug                                           1/1     Running             0               2m24s
compute-2-debug                                           1/1     Running             0               2m24s
console-77c5cf46c9-qj4jm                                  1/1     Running             0               53m
csi-addons-controller-manager-bd59f5579-ztbb5             2/2     Running             0               53m
noobaa-operator-66766dbcfd-rlklq                          1/1     Running             0               53m
ocs-client-operator-console-77c5cf46c9-khcfq              1/1     Running             0               53m
ocs-client-operator-controller-manager-67db5b7769-vmmw4   2/2     Running             0               53m
ocs-operator-665db85bc6-8zmhl                             0/1     CrashLoopBackOff    15 (106s ago)   53m
odf-console-6dcdd9566-9gl6l                               1/1     Running             0               53m
odf-operator-controller-manager-7cbd98b958-7d65k          2/2     Running             0               53m
ux-backend-server-6d4579c676-s4z7b                        0/2     ContainerCreating   0               53m

$ oc get pod ocs-operator-665db85bc6-8zmhl -o yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-04-01T11:46:51Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-04-01T11:46:44Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-04-01T11:46:44Z"
    message: 'containers with unready status: [ocs-operator]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-04-01T11:46:44Z"
    message: 'containers with unready status: [ocs-operator]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-04-01T11:46:44Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://082ad6e61167864b3048ee96c1e7a5ecb6236a5ce2be7678bae33591059c060a
    image: registry.redhat.io/odf4/ocs-rhel9-operator@sha256:0fd799cd27428bd0ee9fbe43eb9f4e734c9e2e90159c7f240a726e786c98df00
    imageID: registry.redhat.io/odf4/ocs-rhel9-operator@sha256:0fd799cd27428bd0ee9fbe43eb9f4e734c9e2e90159c7f240a726e786c98df00
    lastState:
      terminated:
        containerID: cri-o://082ad6e61167864b3048ee96c1e7a5ecb6236a5ce2be7678bae33591059c060a
        exitCode: 1
        finishedAt: "2024-04-01T12:38:41Z"
        reason: Error
        startedAt: "2024-04-01T12:38:41Z"
    name: ocs-operator
    ready: false
    restartCount: 15
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=ocs-operator pod=ocs-operator-665db85bc6-8zmhl_openshift-storage(4763cc5c-b0bb-426e-a451-67d5ea794b0c)
        reason: CrashLoopBackOff
  hostIP: 10.1.112.65
  hostIPs:
  - ip: 10.1.112.65
  phase: Running
  podIP: 10.129.2.13
  podIPs:
  - ip: 10.129.2.13
  qosClass: BestEffort
  startTime: "2024-04-01T11:46:44Z"

ocs-operator log:

{"level":"info","ts":"2024-04-01T12:38:41Z","logger":"cmd","msg":"Go Version: go1.21.7 (Red Hat 1.21.7-1.el9)"}
{"level":"info","ts":"2024-04-01T12:38:41Z","logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":"2024-04-01T12:38:41Z","logger":"cmd","msg":"Cluster is running on OpenShift."}
{"level":"error","ts":"2024-04-01T12:38:41Z","logger":"cmd","msg":"unable to create controller","controller":"StorageClassRequest","error":"unable to set up FieldIndexer on CephBlockPoolRadosNamespaces for owner reference UIDs: failed to get restmapping: no matches for kind \"CephBlockPoolRadosNamespace\" in group \"ceph.rook.io\"","stacktrace":"main.main\n\t/remote-source/app/main.go:217\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:267"}

must gather: https://url.corp.redhat.com/8ca24c3
job: https://url.corp.redhat.com/2362cc0
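For context, the error above is the REST mapper failing to resolve the CephBlockPoolRadosNamespace kind, which happens when the Rook CRDs are not yet installed at the moment ocs-operator tries to register its field index, so main exits and the pod crash-loops. The Go snippet below is a minimal sketch of that pattern (not the actual ocs-operator source); the index key "ownerUIDIndex" and the exact call site are illustrative assumptions.

// Minimal sketch (assumed, not the actual ocs-operator code) of registering a
// controller-runtime FieldIndexer on a CRD-backed Rook type. If the
// CephBlockPoolRadosNamespace CRD is missing, the REST mapping lookup fails
// with the "no matches for kind" error seen in the log and the process exits.
package main

import (
	"context"
	"os"

	rookv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	scheme := runtime.NewScheme()
	utilruntime.Must(rookv1.AddToScheme(scheme))

	mgr, err := manager.New(config.GetConfigOrDie(), manager.Options{Scheme: scheme})
	if err != nil {
		log.Log.Error(err, "unable to create manager")
		os.Exit(1)
	}

	// Index CephBlockPoolRadosNamespace objects by the UIDs of their owners so
	// a controller can list them by owner reference. "ownerUIDIndex" is a
	// hypothetical index key used only for illustration.
	err = mgr.GetFieldIndexer().IndexField(
		context.Background(),
		&rookv1.CephBlockPoolRadosNamespace{},
		"ownerUIDIndex",
		func(obj client.Object) []string {
			uids := []string{}
			for _, ref := range obj.GetOwnerReferences() {
				uids = append(uids, string(ref.UID))
			}
			return uids
		},
	)
	if err != nil {
		// With the CRD not yet installed, this is where the lookup fails:
		// no matches for kind "CephBlockPoolRadosNamespace" in group "ceph.rook.io"
		log.Log.Error(err, "unable to set up FieldIndexer")
		os.Exit(1)
	}

	if err := mgr.Start(context.Background()); err != nil {
		log.Log.Error(err, "manager exited with error")
		os.Exit(1)
	}
}

In this pattern, tolerating the missing CRD (for example, skipping or retrying the index registration until the kind can be resolved) would avoid the crash loop; whether that is how the later builds fix it is not confirmed here.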
Please test with the latest build; this is fixed there.
Verified with build: ocs-registry:4.16.0-69 (BUILD ID: 4.16.0-69, RUN ID: 1712373631)
job: https://url.corp.redhat.com/eb6cf87
logs: https://url.corp.redhat.com/259b190
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591