Description of problem (please be as detailed as possible and provide log snippets):
When OCS-Operator is installed directly (not as a dependency of ODF Operator), the OCS-Operator pod logs the error "no matches for kind \"NooBaa\" in version \"noobaa.io/v1alpha1\"". This happens because OCS-Operator does not install the NooBaa CRD.

Version of all relevant components (if applicable):
OCS-Operator v4.12.z

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
Create the NooBaa CRD manually on the cluster.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Can this issue be reproduced? Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Create an OCS-Operator subscription
2. Check the logs of the OCS-Operator pod

Actual results:
The following error is shown in the pod logs:

{"level":"error","ts":1679650474.2685232,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"NooBaa.noobaa.io","error":"no matches for kind \"NooBaa\" in version \"noobaa.io/v1alpha1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:139\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.WaitForWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/remot...

Expected results:
No error should be shown and OCS-Operator should reconcile the StorageCluster.

Additional info:
This is happening because we add NooBaa to the scheme. We create NooBaa CRs as part of the reconciliation, which is why we need it in the scheme. So I would say this is working as expected.
Adding the NooBaa package to the scheme by itself won't throw this error. OCS-Operator also sets up a watch on NooBaa CRs, so when the cache is built initially it cannot find the NooBaa CRD on the cluster and throws the error. For this kind of scenario, we should only watch the CR if the CRD is present in the cluster.
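One way to implement that check is to ask the API server's REST mapper whether the kind is known before registering the watch. Below is a minimal, self-contained sketch of the pattern; the `restMapper` interface, `noKindMatchError` type, and `fakeMapper` are illustrative stand-ins for k8s.io/apimachinery's meta.RESTMapper and meta.NoKindMatchError, not the actual ocs-operator code:

```go
package main

import (
	"errors"
	"fmt"
)

// noKindMatchError mimics meta.NoKindMatchError: the API server has no
// mapping for the requested kind, i.e. the backing CRD is not installed.
type noKindMatchError struct{ kind string }

func (e *noKindMatchError) Error() string {
	return fmt.Sprintf("no matches for kind %q", e.kind)
}

// restMapper is a cut-down stand-in for meta.RESTMapper.
type restMapper interface {
	RESTMapping(kind string) error
}

// fakeMapper knows only the kinds in its set, mimicking a cluster where
// some CRDs (e.g. NooBaa's) are not installed.
type fakeMapper struct{ kinds map[string]bool }

func (m *fakeMapper) RESTMapping(kind string) error {
	if m.kinds[kind] {
		return nil
	}
	return &noKindMatchError{kind: kind}
}

// crdPresent reports whether a kind is served by the API server. The
// operator would consult this before registering a watch on that kind;
// in the real code this check would use meta.IsNoMatchError.
func crdPresent(m restMapper, kind string) bool {
	err := m.RESTMapping(kind)
	var noMatch *noKindMatchError
	if errors.As(err, &noMatch) {
		// CRD backing this kind is not installed; skip the watch.
		return false
	}
	return err == nil
}

func main() {
	m := &fakeMapper{kinds: map[string]bool{"StorageCluster": true}}
	fmt.Println(crdPresent(m, "StorageCluster")) // true: safe to watch
	fmt.Println(crdPresent(m, "NooBaa"))         // false: skip the watch
}
```

In the real operator, the equivalent check would run the GVK through mgr.GetRESTMapper() during SetupWithManager and add the NooBaa watch only when the mapping succeeds, rather than unconditionally starting a source for a kind that may not exist yet.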
The issue is not observed with ocs-operator.v4.13.0-164.stable. --> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:3742