Bug 2187765 - [Fusion aaS Rook][backport bug for 4.12.3] Rook-ceph-operator pod should allow OBC CRDs to be optional instead of causing a crash when not present
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.3
Assignee: Santosh Pillai
QA Contact: Elena Bondarenko
URL:
Whiteboard:
Depends On: 2183259 2183266
Blocks:
 
Reported: 2023-04-18 15:57 UTC by Neha Berry
Modified: 2023-08-09 17:03 UTC (History)
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2183266
Environment:
Last Closed: 2023-05-23 09:17:30 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage rook pull 481 0 None open Bug 2187765: [4.12 backport] skip OBC and Notification controllers 2023-04-25 11:26:22 UTC
Github rook rook pull 12075 0 None Merged core: Skip OBC controllers based on env variable 2023-04-25 07:11:15 UTC
Red Hat Product Errata RHSA-2023:3265 0 None None None 2023-05-23 09:17:46 UTC

Description Neha Berry 2023-04-18 15:57:56 UTC
+++ This bug was initially created as a clone of Bug #2183266 +++

+++ This bug was initially created as a clone of Bug #2183259 +++


Cloning the MS bug to rook for 4.12.3 backport


Description of problem (please be as detailed as possible and provide log
snippets):
==========================================================================
With the new Fusion aaS managed service, the Managed Fusion Agent is configured in the managed-fusion namespace and OCS is installed via a StorageCluster YAML in the openshift-storage namespace.

>> It is observed that the rook-ceph-operator pod is continuously restarting and entering the CrashLoopBackOff state:

rook-ceph-operator-546f964678-c86z9                               0/1     CrashLoopBackOff   10 (2m21s ago)   52m   10.128.3.44   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>

>> Last message in the log before the restart:

2023-03-30 17:02:26.568019 C | rookcmd: failed to run operator: gave up to run the operator manager: failed to run the controller-runtime manager: failed to wait for ceph-bucket-notification-controller caches to sync: timed out waiting for cache to be synced

>> Also seeing these messages at regular intervals, which could be causing the issue:

I0330 17:01:22.253669       1 request.go:601] Waited for 1.148121396s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/performance.openshift.io/v2?timeout=32s
I0330 17:01:32.253881       1 request.go:601] Waited for 1.148196268s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/cloud.network.openshift.io/v1?timeout=32s
I0330 17:01:42.303731       1 request.go:601] Waited for 1.197807986s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
W0330 17:01:46.100751       1 reflector.go:324] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: failed to list *v1alpha1.ObjectBucketClaim: the server could not find the requested resource (get objectbucketclaims.objectbucket.io)
E0330 17:01:46.100778       1 reflector.go:138] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1alpha1.ObjectBucketClaim: failed to list *v1alpha1.ObjectBucketClaim: the server could not find the requested resource (get objectbucketclaims.objectbucket.io)
I0330 17:01:52.303790       1 request.go:601] Waited for 1.198489729s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/ceph.rook.io/v1?timeout=32s
W0330 17:02:01.467902       1 reflector.go:324] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: failed to list *v1alpha1.ObjectBucket: the server could not find the requested resource (get objectbuckets.objectbucket.io)
E0330 17:02:01.467931       1 reflector.go:138] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1alpha1.ObjectBucket: failed to list *v1alpha1.ObjectBucket: the server could not find the requested resource (get objectbuckets.objectbucket.io)




Version of all relevant components (if applicable):
=======================================================
oc get csv -n openshift-storage
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
managed-fusion-agent.v2.0.11              Managed Fusion Agent          2.0.11                                                      Succeeded
observability-operator.v0.0.20            Observability Operator        0.0.20            observability-operator.v0.0.19            Succeeded
ocs-operator.v4.12.1                      OpenShift Container Storage   4.12.1            ocs-operator.v4.12.0                      Installing
ose-prometheus-operator.4.10.0            Prometheus Operator           4.10.0                                                      Succeeded
route-monitor-operator.v0.1.493-a866e7c   Route Monitor Operator        0.1.493-a866e7c   route-monitor-operator.v0.1.489-7d9fe90   Succeeded

OCP =  4.12.8



Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
=================================================================
Yes. The rook-ceph-operator pod should be in Running state at all times.

Is there any workaround available to the best of your knowledge?
=========================================================================
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
====================================================================================
3

Is this issue reproducible?
=================================
Always

Can this issue be reproduced from the UI?
========================================
Not Applicable

If this is a regression, please provide more details to justify this:
========================================================================
Not sure. This agent-based deployment and installing the offering (OCS) without ODF are new as part of Fusion aaS.

Steps to Reproduce:
==============================
1. Create an ODF-to-ODF cluster with ROSA 4.12 and, on one of the clusters, install the agent following the doc [1]
2. Open the ports for the ceph pods in AWS by setting the securityGroups for the worker nodes.
3. Create a namespace for the offering
		$oc create ns openshift-storage
4. Create a managedFusionOffering CR for the DF offering; you can get a sample managedFusionOffering CR from here [2] (change the namespace to openshift-storage after getting the file)
		$oc create -f <file.yaml>
5. After 2-5 mins, you will be able to see resources related to the DF offering such as ocs-operator, rook-operator, etc.

    

[1] https://docs.google.com/document/d/1Jdx8czlMjbumvilw8nZ6LtvWOMAx3H4TfwoVwiBs0nE/edit?hl=en&forcehl=1#
[2]  https://raw.githubusercontent.com/red-hat-storage/managed-fusion-agent/main/config/samples/misf_v1alpha1_managedfusionoffering.yaml
Actual results:
=================================
rook-ceph-operator pod is in CrashLoopBackOff state and is restarting continuously

Expected results:
========================
The rook operator pod should be healthy and in Running state


Additional info:
==========================
oc get pods -o wide -n openshift-storage              
NAME                                                              READY   STATUS             RESTARTS         AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
650dad71f7dd1f23353a1edfdd6bbb6ad36ba9267d88e2d9e8cecef3aesvz6s   0/1     Completed          0                52m   10.128.3.29   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
managed-fusion-offering-catalog-tcm2c                             1/1     Running            0                53m   10.128.3.15   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
ocs-metrics-exporter-74bfc6bc4c-5hxsx                             1/1     Running            0                52m   10.128.3.45   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
ocs-operator-7bf4645b4b-qtskf                                     1/1     Running            0                52m   10.128.3.43   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
ocs-provider-server-9c5d8c967-lk2wr                               1/1     Running            0                51m   10.128.3.53   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-crashcollector-4ffaac79e37614c6013f570e3b27f7ec-ndqvx   1/1     Running            0                48m   10.0.16.212   ip-10-0-16-212.us-east-2.compute.internal   <none>           <none>
rook-ceph-crashcollector-849dd162cdc549f9ab37aa11b170e238-zxszt   1/1     Running            0                48m   10.0.12.147   ip-10-0-12-147.us-east-2.compute.internal   <none>           <none>
rook-ceph-crashcollector-8666d5fa535dba82fde439b962fbffde-ks74j   1/1     Running            0                48m   10.0.20.107   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-64c88b96znkm9   2/2     Running            0                47m   10.0.16.212   ip-10-0-16-212.us-east-2.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6f8c4995nl4z6   2/2     Running            0                47m   10.0.20.107   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-mgr-a-7c989b5b78-ms7gs                                  2/2     Running            0                48m   10.0.12.147   ip-10-0-12-147.us-east-2.compute.internal   <none>           <none>
rook-ceph-mon-a-589b9446b7-5n9q6                                  2/2     Running            0                51m   10.0.12.147   ip-10-0-12-147.us-east-2.compute.internal   <none>           <none>
rook-ceph-mon-b-5f46dfcdd8-xm5m9                                  2/2     Running            0                49m   10.0.16.212   ip-10-0-16-212.us-east-2.compute.internal   <none>           <none>
rook-ceph-mon-c-655bf5d664-4vmpv                                  2/2     Running            0                48m   10.0.20.107   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-operator-546f964678-c86z9                               0/1     CrashLoopBackOff   10 (2m21s ago)   52m   10.128.3.44   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-0-7745c89c76-dvjw6                                  2/2     Running            0                47m   10.0.20.107   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-1-7778595d9c-dzql9                                  2/2     Running            0                47m   10.0.12.147   ip-10-0-12-147.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-2-7d565f6495-8kvqp                                  2/2     Running            0                47m   10.0.16.212   ip-10-0-16-212.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-prepare-default-0-data-08bl8z-j76gj                 0/1     Completed          0                48m   10.0.20.107   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-prepare-default-1-data-0w6d47-ss74g                 0/1     Completed          0                48m   10.0.16.212   ip-10-0-16-212.us-east-2.compute.internal   <none>           <none>
rook-ceph-osd-prepare-default-2-data-08rmgh-gfxdv                 0/1     Completed          0                48m   10.0.12.147   ip-10-0-12-147.us-east-2.compute.internal   <none>           <none>
rook-ceph-tools-565ffdb78c-28gzv                                  1/1     Running            0                51m   10.128.3.54   ip-10-0-20-107.us-east-2.compute.internal   <none>           <none>

--- Additional comment from Neha Berry on 2023-03-30 17:20:58 UTC ---

This issue is not seen on a general ODF cluster with 4.12 and other versions.

E.g. from ODF product

http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-270vuf1cs33-t1/j-270vuf1cs33-t1_20230327T062342/logs/deployment_1679900804/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-42e21aeadcaeff809b877661f236b482ca1dcd563bab345fe02c2e86945121eb/namespaces/openshift-storage/pods/rook-ceph-operator-6d74947d57-9m6rr/rook-ceph-operator/rook-ceph-operator/logs/current.log

--- Additional comment from RHEL Program Management on 2023-03-30 17:22:47 UTC ---

This bug, having no release flag set previously, now has the release flag 'odf-4.13.0' set to '?', and so is being proposed to be fixed in the ODF 4.13.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Neha Berry on 2023-03-30 17:35:25 UTC ---

Logs copied here 
http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-2183259/

zipped logs http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-2183259.zip

--- Additional comment from Travis Nielsen on 2023-03-30 17:37:07 UTC ---

From the end of the operator log:

W0330 17:31:16.801348       1 reflector.go:324] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: failed to list *v1alpha1.ObjectBucketClaim: the server could not find the requested resource (get objectbucketclaims.objectbucket.io)
E0330 17:31:16.801377       1 reflector.go:138] github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1alpha1.ObjectBucketClaim: failed to list *v1alpha1.ObjectBucketClaim: the server could not find the requested resource (get objectbucketclaims.objectbucket.io)
2023-03-30 17:31:19.547938 C | rookcmd: failed to run operator: gave up to run the operator manager: failed to run the controller-runtime manager: failed to wait for ceph-bucket-notification-controller caches to sync: timed out waiting for cache to be synced


It appears the CRDs for OBCs are missing. The new agent-based install must have missed them.

--- Additional comment from Subham Rai on 2023-03-31 01:50:43 UTC ---

These CRDs come from noobaa via ocs-operator IIRC. So, I think rook is not the right component here, we should move this to ocs-operator.

Tagging @nigoyal to correct me.

--- Additional comment from Subham Rai on 2023-03-31 04:26:36 UTC ---

(In reply to Subham Rai from comment #4)
> These CRDs come from noobaa via ocs-operator IIRC. So, I think rook is not
> the right component here, we should move this to ocs-operator.
> 
 I'll just correct myself: it's the odf-operator, not the ocs-operator, which installs the CRDs.

--- Additional comment from Subham Rai on 2023-03-31 04:45:20 UTC ---

Moving to odf-operator and removing the needinfo on Nitin and Neha, as I got confirmation in an offline chat.

--- Additional comment from Ohad on 2023-03-31 09:40:07 UTC ---

@srai This is not an ODF bug as we are not even installing ODF.
I think the bug here is that Rook cannot be installed without NooBaa. To me that does not make any sense because, to the best of my knowledge, as an upstream project, Rook is not coupled with NooBaa.

A similar problem happened with ocs-operator (being installed without the odf operator) around the NooBaa CR, but at least in that case it made sense. Nitin already has a patch for ocs-operator to remove that hard dependency.

Moving this back to rook

--- Additional comment from Subham Rai on 2023-03-31 11:35:35 UTC ---

(In reply to Ohad from comment #7)
> @srai This is not an ODF bug as we are not even installing ODF.
> I think the bug here is that Rook cannot be installed without noobaa, to me
> that does not make any sense because, to the best of my knowledge, as an
> upstream project, Rook is not coupled with NooBaa.
> 

In upstream rook, rook creates all the CRDs it is watching, even OB/OBCs: https://github.com/rook/rook/blob/master/pkg/operator/ceph/cr_manager.go#L87-L106 https://github.com/rook/rook/blob/master/deploy/examples/crds.yaml#L14486 https://github.com/rook/rook/blob/master/deploy/examples/crds.yaml#L11734. But in downstream, rook doesn't create the OB/OBC CRDs or include them in the rook CSV; it is the odf operator which creates these CRDs.

So, since you are not installing the odf operator, the OB/OBC CRDs that the rook operator is looking for are not present, and that is the issue.

I'm not sure what the bug here is for rook to fix.

--- Additional comment from Ohad on 2023-03-31 12:03:08 UTC ---

The issue I see here is that the downstream ocs-operator, as well as the downstream rook implementation, cannot be independently installed (without the ODF operator).
It is either a bug or a flaw in the design. In either case, action needs to be taken.

--- Additional comment from Travis Nielsen on 2023-03-31 19:51:40 UTC ---

To summarize, is this accurate?
- The OB/OBC CRDs are installed by the ODF operator
- The ODF operator is not installed with this offering
- The OBCs are not required in this offering?

Is the request to install the missing CRDs, or to allow Rook to run without them?

--- Additional comment from Ohad on 2023-04-02 19:25:54 UTC ---

Hi Travis,

Your summary is accurate. As for your question, the ask is to allow DS Rook to run without these CRDs on the cluster. 
This is a similar ask to the one we requested from ocs-operator where it was impossible to run the operator without the NooBaa CRD present on the cluster. But as per our request, the ocs-operator team is providing a patch to fix the situation.
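
For context, the linked upstream rook PR ("core: Skip OBC controllers based on env variable") indicates the fix gates the OBC controller registration on an operator setting. A minimal sketch of that idea, not the actual Rook code, with a hypothetical variable name:

package operator

import (
    "os"
    "strconv"
)

// skipOBCControllers reports whether the ObjectBucket/ObjectBucketClaim and
// bucket-notification controllers should be left unregistered. The env var
// name "SKIP_OBC_CONTROLLERS" is illustrative only, not the actual Rook setting.
func skipOBCControllers() bool {
    enabled, err := strconv.ParseBool(os.Getenv("SKIP_OBC_CONTROLLERS"))
    return err == nil && enabled
}

When this returns true, the operator manager would simply never register those controllers, so it no longer waits for caches of CRDs that are absent from the cluster.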

--- Additional comment from Nitin Goyal on 2023-04-03 04:53:56 UTC ---

(In reply to Subham Rai from comment #4)
> These CRDs come from noobaa via ocs-operator IIRC. So, I think rook is not
> the right component here, we should move this to ocs-operator.
> 
> Tagging @nigoyal to correct me.

Clearing my and Neha's need info as Ohad already cleared all the doubts in comment 7.

--- Additional comment from RHEL Program Management on 2023-04-04 15:47:07 UTC ---

This BZ is being approved for the ODF 4.13.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.13.0'.

--- Additional comment from RHEL Program Management on 2023-04-04 15:47:07 UTC ---

Since this bug has been approved for the ODF 4.13.0 release, through release flag 'odf-4.13.0+', the Target Release is being set to 'ODF 4.13.0'.

--- Additional comment from Santosh Pillai on 2023-04-07 07:52:35 UTC ---

So we had a discussion with the team about possible fixes:

- The OCS operator could pass details via an env variable/config map, telling rook which resources it should ignore and not start a watch on.
- Rook should itself check whether a CRD is available before starting a watch on the resource (a rough sketch of this check follows below).

Waiting on the OCS operator team to see how easy it is to pass this info to the rook config map.
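
A rough sketch of the second option (not the actual Rook code), using the client-go discovery API to test whether a resource is served before registering a watch; the helper name is illustrative:

package operator

import (
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/rest"
)

// resourceAvailable reports whether a resource (e.g. "objectbucketclaims")
// is served under the given group/version (e.g. "objectbucket.io/v1alpha1").
func resourceAvailable(cfg *rest.Config, groupVersion, resource string) (bool, error) {
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        return false, err
    }
    list, err := dc.ServerResourcesForGroupVersion(groupVersion)
    if err != nil {
        if apierrors.IsNotFound(err) {
            // The group/version (and hence the CRD) is not installed.
            return false, nil
        }
        return false, err
    }
    for _, r := range list.APIResources {
        if r.Name == resource {
            return true, nil
        }
    }
    return false, nil
}

The operator would then skip adding the OB/OBC and bucket-notification controllers when this check returns false for objectbucket.io/v1alpha1.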

--- Additional comment from Sunil Kumar Acharya on 2023-04-10 12:32:08 UTC ---

Please ensure the fix for this BZ is merged by 18-Apr-2023. If you foresee any risk to that, please update the flags before 17-Apr-2023 EOD.

--- Additional comment from Travis Nielsen on 2023-04-13 13:43:07 UTC ---

Neha, which CRDs exactly are being created in the cluster? We need to confirm if we are skipping all the necessary CRDs. 

What does this show?
oc get crd

--- Additional comment from Santosh Pillai on 2023-04-14 05:45:24 UTC ---

must gather does not collect CRD info.

--- Additional comment from Neha Berry on 2023-04-17 08:34:21 UTC ---

(In reply to Santosh Pillai from comment #18)
> must gather does not collect CRD info.

Ack. We will try to share the output

IIUC, currently we are working past this issue with a workaround in the agent build; will the output from such a cluster help?

--- Additional comment from Filip Balák on 2023-04-17 08:40:51 UTC ---

$ oc get crd
NAME                                                              CREATED AT
addoninstances.addons.managed.openshift.io                        2023-04-17T05:22:42Z
addonoperators.addons.managed.openshift.io                        2023-04-17T05:22:42Z
addons.addons.managed.openshift.io                                2023-04-17T05:22:42Z
alertmanagerconfigs.monitoring.coreos.com                         2023-04-17T04:53:21Z
alertmanagerconfigs.monitoring.rhobs                              2023-04-17T05:22:31Z
alertmanagers.monitoring.coreos.com                               2023-04-17T04:53:24Z
alertmanagers.monitoring.rhobs                                    2023-04-17T05:22:30Z
apirequestcounts.apiserver.openshift.io                           2023-04-17T04:53:04Z
apiservers.config.openshift.io                                    2023-04-17T04:52:39Z
authentications.config.openshift.io                               2023-04-17T04:52:40Z
authentications.operator.openshift.io                             2023-04-17T04:53:22Z
backups.velero.io                                                 2023-04-17T05:22:18Z
backupstoragelocations.velero.io                                  2023-04-17T05:22:18Z
baremetalhosts.metal3.io                                          2023-04-17T04:53:20Z
bmceventsubscriptions.metal3.io                                   2023-04-17T04:53:23Z
builds.config.openshift.io                                        2023-04-17T04:52:40Z
catalogsources.operators.coreos.com                               2023-04-17T04:53:22Z
cephblockpoolradosnamespaces.ceph.rook.io                         2023-04-17T05:25:22Z
cephblockpools.ceph.rook.io                                       2023-04-17T05:25:23Z
cephbucketnotifications.ceph.rook.io                              2023-04-17T05:25:23Z
cephbuckettopics.ceph.rook.io                                     2023-04-17T05:25:24Z
cephclients.ceph.rook.io                                          2023-04-17T05:25:24Z
cephclusters.ceph.rook.io                                         2023-04-17T05:25:23Z
cephfilesystemmirrors.ceph.rook.io                                2023-04-17T05:25:22Z
cephfilesystems.ceph.rook.io                                      2023-04-17T05:25:23Z
cephfilesystemsubvolumegroups.ceph.rook.io                        2023-04-17T05:25:24Z
cephnfses.ceph.rook.io                                            2023-04-17T05:25:22Z
cephobjectrealms.ceph.rook.io                                     2023-04-17T05:25:22Z
cephobjectstores.ceph.rook.io                                     2023-04-17T05:25:22Z
cephobjectstoreusers.ceph.rook.io                                 2023-04-17T05:25:22Z
cephobjectzonegroups.ceph.rook.io                                 2023-04-17T05:25:22Z
cephobjectzones.ceph.rook.io                                      2023-04-17T05:25:23Z
cephrbdmirrors.ceph.rook.io                                       2023-04-17T05:25:22Z
cloudcredentials.operator.openshift.io                            2023-04-17T04:53:04Z
cloudprivateipconfigs.cloud.network.openshift.io                  2023-04-17T04:55:04Z
clusterautoscalers.autoscaling.openshift.io                       2023-04-17T04:53:21Z
clustercsidrivers.operator.openshift.io                           2023-04-17T04:54:02Z
clusteroperators.config.openshift.io                              2023-04-17T04:52:28Z
clusterresourcequotas.quota.openshift.io                          2023-04-17T04:52:39Z
clusterserviceversions.operators.coreos.com                       2023-04-17T04:53:24Z
clusterurlmonitors.monitoring.openshift.io                        2023-04-17T05:21:20Z
clusterversions.config.openshift.io                               2023-04-17T04:52:28Z
configs.imageregistry.operator.openshift.io                       2023-04-17T04:53:19Z
configs.operator.openshift.io                                     2023-04-17T04:53:26Z
configs.samples.operator.openshift.io                             2023-04-17T04:53:17Z
consoleclidownloads.console.openshift.io                          2023-04-17T04:53:18Z
consoleexternalloglinks.console.openshift.io                      2023-04-17T04:53:18Z
consolelinks.console.openshift.io                                 2023-04-17T04:53:17Z
consolenotifications.console.openshift.io                         2023-04-17T04:53:17Z
consoleplugins.console.openshift.io                               2023-04-17T04:53:17Z
consolequickstarts.console.openshift.io                           2023-04-17T04:53:17Z
consoles.config.openshift.io                                      2023-04-17T04:52:40Z
consoles.operator.openshift.io                                    2023-04-17T04:53:18Z
consoleyamlsamples.console.openshift.io                           2023-04-17T04:53:17Z
containerruntimeconfigs.machineconfiguration.openshift.io         2023-04-17T04:53:42Z
controllerconfigs.machineconfiguration.openshift.io               2023-04-17T04:56:48Z
controlplanemachinesets.machine.openshift.io                      2023-04-17T04:53:19Z
credentialsrequests.cloudcredential.openshift.io                  2023-04-17T04:53:04Z
csisnapshotcontrollers.operator.openshift.io                      2023-04-17T04:53:21Z
customdomains.managed.openshift.io                                2023-04-17T05:21:50Z
deletebackuprequests.velero.io                                    2023-04-17T05:22:18Z
dnses.config.openshift.io                                         2023-04-17T04:52:41Z
dnses.operator.openshift.io                                       2023-04-17T04:53:26Z
dnsrecords.ingress.operator.openshift.io                          2023-04-17T04:53:22Z
downloadrequests.velero.io                                        2023-04-17T05:22:18Z
egressfirewalls.k8s.ovn.org                                       2023-04-17T04:55:11Z
egressips.k8s.ovn.org                                             2023-04-17T04:55:12Z
egressqoses.k8s.ovn.org                                           2023-04-17T04:55:12Z
egressrouters.network.operator.openshift.io                       2023-04-17T04:53:33Z
etcds.operator.openshift.io                                       2023-04-17T04:53:18Z
featuregates.config.openshift.io                                  2023-04-17T04:52:41Z
firmwareschemas.metal3.io                                         2023-04-17T04:53:26Z
hardwaredata.metal3.io                                            2023-04-17T04:53:27Z
helmchartrepositories.helm.openshift.io                           2023-04-17T04:53:18Z
hostfirmwaresettings.metal3.io                                    2023-04-17T04:53:30Z
imagecontentpolicies.config.openshift.io                          2023-04-17T04:52:42Z
imagecontentsourcepolicies.operator.openshift.io                  2023-04-17T04:52:42Z
imagepruners.imageregistry.operator.openshift.io                  2023-04-17T04:53:52Z
images.config.openshift.io                                        2023-04-17T04:52:42Z
infrastructures.config.openshift.io                               2023-04-17T04:52:43Z
ingresscontrollers.operator.openshift.io                          2023-04-17T04:53:07Z
ingresses.config.openshift.io                                     2023-04-17T04:52:43Z
insightsoperators.operator.openshift.io                           2023-04-17T05:03:57Z
installplans.operators.coreos.com                                 2023-04-17T04:53:26Z
ippools.whereabouts.cni.cncf.io                                   2023-04-17T04:55:05Z
kubeapiservers.operator.openshift.io                              2023-04-17T04:53:56Z
kubecontrollermanagers.operator.openshift.io                      2023-04-17T04:53:23Z
kubeletconfigs.machineconfiguration.openshift.io                  2023-04-17T04:53:43Z
kubeschedulers.operator.openshift.io                              2023-04-17T04:53:23Z
kubestorageversionmigrators.operator.openshift.io                 2023-04-17T04:53:17Z
machineautoscalers.autoscaling.openshift.io                       2023-04-17T04:53:23Z
machineconfigpools.machineconfiguration.openshift.io              2023-04-17T04:53:47Z
machineconfigs.machineconfiguration.openshift.io                  2023-04-17T04:53:46Z
machinehealthchecks.machine.openshift.io                          2023-04-17T04:54:01Z
machines.machine.openshift.io                                     2023-04-17T04:53:59Z
machinesets.machine.openshift.io                                  2023-04-17T04:54:01Z
managedfleetnotificationrecords.ocmagent.managed.openshift.io     2023-04-17T05:21:29Z
managedfleetnotifications.ocmagent.managed.openshift.io           2023-04-17T05:21:29Z
managedfusionofferings.misf.ibm.com                               2023-04-17T05:23:38Z
managednotifications.ocmagent.managed.openshift.io                2023-04-17T05:21:29Z
monitoringstacks.monitoring.rhobs                                 2023-04-17T05:22:27Z
mustgathers.managed.openshift.io                                  2023-04-17T05:21:13Z
network-attachment-definitions.k8s.cni.cncf.io                    2023-04-17T04:55:04Z
networks.config.openshift.io                                      2023-04-17T04:52:44Z
networks.operator.openshift.io                                    2023-04-17T04:53:23Z
nodes.config.openshift.io                                         2023-04-17T04:52:44Z
noobaas.noobaa.io                                                 2023-04-17T05:24:15Z
oauths.config.openshift.io                                        2023-04-17T04:52:44Z
objectbucketclaims.objectbucket.io                                2023-04-17T05:24:15Z
objectbuckets.objectbucket.io                                     2023-04-17T05:24:15Z
ocmagents.ocmagent.managed.openshift.io                           2023-04-17T05:21:29Z
ocsinitializations.ocs.openshift.io                               2023-04-17T05:24:15Z
olmconfigs.operators.coreos.com                                   2023-04-17T04:53:33Z
openshiftapiservers.operator.openshift.io                         2023-04-17T04:53:18Z
openshiftcontrollermanagers.operator.openshift.io                 2023-04-17T04:53:24Z
operatorconditions.operators.coreos.com                           2023-04-17T04:53:36Z
operatorgroups.operators.coreos.com                               2023-04-17T04:53:37Z
operatorhubs.config.openshift.io                                  2023-04-17T04:53:18Z
operatorpkis.network.operator.openshift.io                        2023-04-17T04:53:36Z
operators.operators.coreos.com                                    2023-04-17T04:53:40Z
overlappingrangeipreservations.whereabouts.cni.cncf.io            2023-04-17T04:55:05Z
performanceprofiles.performance.openshift.io                      2023-04-17T04:53:23Z
podmonitors.monitoring.coreos.com                                 2023-04-17T04:53:26Z
podmonitors.monitoring.rhobs                                      2023-04-17T05:22:30Z
podnetworkconnectivitychecks.controlplane.operator.openshift.io   2023-04-17T05:20:26Z
podvolumebackups.velero.io                                        2023-04-17T05:22:18Z
podvolumerestores.velero.io                                       2023-04-17T05:22:18Z
preprovisioningimages.metal3.io                                   2023-04-17T04:53:32Z
probes.monitoring.coreos.com                                      2023-04-17T04:53:29Z
probes.monitoring.rhobs                                           2023-04-17T05:22:30Z
profiles.tuned.openshift.io                                       2023-04-17T04:53:26Z
projecthelmchartrepositories.helm.openshift.io                    2023-04-17T04:53:17Z
projects.config.openshift.io                                      2023-04-17T04:52:45Z
prometheuses.monitoring.coreos.com                                2023-04-17T04:53:30Z
prometheuses.monitoring.rhobs                                     2023-04-17T05:22:30Z
prometheusrules.monitoring.coreos.com                             2023-04-17T04:53:33Z
prometheusrules.monitoring.rhobs                                  2023-04-17T05:22:30Z
provisionings.metal3.io                                           2023-04-17T04:53:39Z
proxies.config.openshift.io                                       2023-04-17T04:52:38Z
rangeallocations.security.internal.openshift.io                   2023-04-17T04:52:39Z
resticrepositories.velero.io                                      2023-04-17T05:22:18Z
restores.velero.io                                                2023-04-17T05:22:18Z
rolebindingrestrictions.authorization.openshift.io                2023-04-17T04:52:38Z
routemonitors.monitoring.openshift.io                             2023-04-17T05:21:20Z
schedulers.config.openshift.io                                    2023-04-17T04:52:45Z
schedules.velero.io                                               2023-04-17T05:22:18Z
securitycontextconstraints.security.openshift.io                  2023-04-17T04:52:39Z
serverstatusrequests.velero.io                                    2023-04-17T05:22:18Z
servicecas.operator.openshift.io                                  2023-04-17T04:53:26Z
servicemonitors.monitoring.coreos.com                             2023-04-17T04:53:36Z
servicemonitors.monitoring.rhobs                                  2023-04-17T05:22:31Z
splunkforwarders.splunkforwarder.managed.openshift.io             2023-04-17T05:21:47Z
storageclassclaims.ocs.openshift.io                               2023-04-17T05:25:22Z
storageclusters.ocs.openshift.io                                  2023-04-17T05:24:14Z
storageconsumers.ocs.openshift.io                                 2023-04-17T05:25:22Z
storages.operator.openshift.io                                    2023-04-17T04:54:02Z
storagestates.migration.k8s.io                                    2023-04-17T04:53:24Z
storageversionmigrations.migration.k8s.io                         2023-04-17T04:53:21Z
subjectpermissions.managed.openshift.io                           2023-04-17T05:22:34Z
subscriptions.operators.coreos.com                                2023-04-17T04:53:56Z
thanosqueriers.monitoring.rhobs                                   2023-04-17T05:22:31Z
thanosrulers.monitoring.coreos.com                                2023-04-17T04:53:37Z
thanosrulers.monitoring.rhobs                                     2023-04-17T05:22:31Z
tuneds.tuned.openshift.io                                         2023-04-17T04:53:29Z
upgradeconfigs.upgrade.managed.openshift.io                       2023-04-17T05:22:19Z
veleroinstalls.managed.openshift.io                               2023-04-17T05:21:58Z
volumesnapshotclasses.snapshot.storage.k8s.io                     2023-04-17T04:57:01Z
volumesnapshotcontents.snapshot.storage.k8s.io                    2023-04-17T04:57:01Z
volumesnapshotlocations.velero.io                                 2023-04-17T05:22:18Z
volumesnapshots.snapshot.storage.k8s.io                           2023-04-17T04:57:01Z

--- Additional comment from Dhruv Bindra on 2023-04-17 08:45:26 UTC ---

@tnielsen These are the CRDs that are not on the cluster when we install the ocs operator, but rook tries to watch these resources:
objectbuckets.objectbucket.io
objectbucketclaims.objectbucket.io

Comment 11 Elena Bondarenko 2023-05-03 16:00:28 UTC
I tested with ocs-operator.v4.12.3-rhodf. The rook-ceph-operator pod is Running.

Comment 17 errata-xmlrpc 2023-05-23 09:17:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.3 Security and Bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3265

