Bug 2182041 - OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator
Summary: OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Nitin Goyal
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On:
Blocks: 2203827 2203828 2185725
 
Reported: 2023-03-27 11:44 UTC by Dhruv Bindra
Modified: 2023-08-09 17:00 UTC
CC List: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2185725 2203827 2203828
Environment:
Last Closed: 2023-06-21 15:25:01 UTC
Embargoed:




Links
Github red-hat-storage ocs-operator pull 1972 (open): do not watch noobaa CRs if noobaa CRD is not present in the cluster (last updated 2023-03-28 11:03:18 UTC)
Github red-hat-storage ocs-operator pull 2001 (open): Bug 2182041: [release-4.13] do not watch noobaa CRs if SKIP_NOOBAA_CRD_WATCH is set to true (last updated 2023-04-10 07:22:28 UTC)
Red Hat Product Errata RHBA-2023:3742 (last updated 2023-06-21 15:25:26 UTC)

Description Dhruv Bindra 2023-03-27 11:44:58 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When the OCS-Operator is installed directly (not as a dependency of the ODF Operator), the OCS-Operator pod logs the error "no matches for kind \"NooBaa\" in version \"noobaa.io/v1alpha1\"".
This happens because the OCS-Operator does not install the NooBaa CRDs itself.

Version of all relevant components (if applicable):
OCS-Operator v4.12.z

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
Create a NooBaa CRD manually on the cluster

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create an OCS-Operator subscription.
2. Check the logs of the OCS-Operator pod.


Actual results:
The following error is shown in the pod logs:
{"level":"error","ts":1679650474.2685232,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"NooBaa.noobaa.io","error":"no matches for kind \"NooBaa\" in version \"noobaa.io/v1alpha1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:139\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.WaitForWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/remot... 


Expected results:
No error should be shown, and the OCS-Operator should reconcile the StorageCluster.

Additional info:

Comment 2 Nitin Goyal 2023-03-28 08:49:59 UTC
This is happening because we add NooBaa to the scheme. We create NooBaa CRs as part of the reconciliation, which is why we need those types registered. So I would say this is working as expected.
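
As a rough sketch of what this comment describes (not the actual ocs-operator code; the NooBaa import path and the helper name are assumptions), registering the NooBaa API group in the operator's scheme is what lets it build and send NooBaa CRs during reconciliation, but it does not install the NooBaa CRD on the cluster:

package example

import (
	"context"

	nbv1 "github.com/noobaa/noobaa-operator/v5/pkg/apis/noobaa/v1alpha1" // assumed import path
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

var scheme = runtime.NewScheme()

func init() {
	// Registering the NooBaa types only teaches the client how to encode/decode them;
	// it does not create the NooBaa CRD on the API server.
	utilruntime.Must(nbv1.AddToScheme(scheme))
}

// ensureNooBaa is a hypothetical reconcile step that creates a NooBaa CR.
// With the CRD absent, this Create fails with a "no matches for kind NooBaa" error.
func ensureNooBaa(ctx context.Context, c client.Client, ns string) error {
	nb := &nbv1.NooBaa{ObjectMeta: metav1.ObjectMeta{Name: "noobaa", Namespace: ns}}
	return c.Create(ctx, nb)
}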

Comment 3 Dhruv Bindra 2023-03-28 09:05:50 UTC
Adding the NooBaa package to the scheme alone won't throw the error. The OCS-Operator is probably watching NooBaa CRs, so when the cache is built at startup it cannot find the NooBaa CRD on the cluster and throws the error.
For this kind of scenario, we should only watch the CRs if the CRD is present in the cluster.
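
A minimal sketch of that approach (this is not the code from the linked PRs; the discovery-based check, the function name, and the type names in the trailing comment are assumptions): probe the API server for the noobaa.io/v1alpha1 group and only wire up the NooBaa watch when it is served.

package example

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// noobaaCRDExists reports whether the NooBaa kind is currently served by the API server.
func noobaaCRDExists(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	resources, err := dc.ServerResourcesForGroupVersion("noobaa.io/v1alpha1")
	if err != nil {
		if apierrors.IsNotFound(err) {
			// The group/version is not registered, i.e. the NooBaa CRD is absent.
			return false, nil
		}
		return false, err
	}
	for _, r := range resources.APIResources {
		if r.Kind == "NooBaa" {
			return true, nil
		}
	}
	return false, nil
}

// In a SetupWithManager-style function the watch would then be added conditionally,
// roughly (ocsv1/nbv1 type names are illustrative):
//
//	b := ctrl.NewControllerManagedBy(mgr).For(&ocsv1.StorageCluster{})
//	if ok, _ := noobaaCRDExists(mgr.GetConfig()); ok {
//		b = b.Owns(&nbv1.NooBaa{})
//	}
//	return b.Complete(r)

Per the PR titles linked above, the main-branch change keys off CRD presence, while the release-4.13 change keys off a SKIP_NOOBAA_CRD_WATCH environment variable.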

Comment 10 Filip Balák 2023-05-02 09:34:42 UTC
The issue is not observed with ocs-operator.v4.13.0-164.stable. --> VERIFIED

Comment 12 errata-xmlrpc 2023-06-21 15:25:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

