Description of problem (please be as detailed as possible and provide log snippets):

I am trying to deploy OCS 4.7 (quay.io/rhceph-dev/ocs-registry:4.7.0-190.ci), but the ocs-operator-fb78ff69b-2m6bm pod stays at 0/1 and never reports Ready.

Logs from the operator:

{"level":"error","ts":"2020-12-14T08:33:13.785Z","logger":"controller_storagecluster","msg":"failed to reconcile metrics exporter","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster","error":"failed to retrieve metrics exporter servicemonitor openshift-storage/ocs-metrics-exporter. no kind is registered for the type v1.ServiceMonitor in scheme \"pkg/runtime/scheme.go:101\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132\ngithub.com/openshift/ocs-operator/pkg/controller/storagecluster.(*ReconcileStorageCluster).Reconcile\n\t/remote-source/app/pkg/controller/storagecluster/reconcile.go:366\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:90"}

{"level":"error","ts":"2020-12-14T08:33:13.785Z","logger":"controller","msg":"Reconciler error","controller":"storagecluster-controller","name":"ocs-storagecluster","namespace":"openshift-storage","error":"failed to retrieve metrics exporter servicemonitor openshift-storage/ocs-metrics-exporter. no kind is registered for the type v1.ServiceMonitor in scheme \"pkg/runtime/scheme.go:101\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:246\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.3/pkg/util/wait/wait.go:90"}

Version of all relevant components (if applicable):
OCP version: 4.7.0-0.nightly-2020-12-13-175031
OCS version: 4.7.0-190.ci

Does this issue impact your ability to continue to work with the product?
Yes, OCS is not ready to use.

Is there any workaround available to the best of your knowledge?
I have no idea.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
100%

Can this issue be reproduced from the UI?
I don't know.

Steps to Reproduce:
1. Deploy OCS.
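Note: the "no kind is registered for the type v1.ServiceMonitor in scheme" error suggests the monitoring.coreos.com/v1 API types were never added to the operator's runtime scheme, so the controller-runtime client cannot resolve ServiceMonitor objects. Below is a minimal, hypothetical Go sketch of how such a registration typically looks; the import path and helper names are assumptions for illustration, not the actual ocs-operator code.

// Hypothetical sketch (not the actual ocs-operator code): register the
// monitoring.coreos.com/v1 kinds in the controller-runtime scheme so the
// reconciler's client can read and write ServiceMonitor objects.
package main

import (
	monitoringv1 "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1" // assumed import path
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func buildScheme() *runtime.Scheme {
	scheme := runtime.NewScheme()

	// Built-in Kubernetes kinds (Pods, Services, ...).
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	// monitoring.coreos.com/v1 kinds (ServiceMonitor, PrometheusRule, ...).
	// Without this registration, a client.Get on a ServiceMonitor fails with
	// "no kind is registered for the type v1.ServiceMonitor in scheme".
	utilruntime.Must(monitoringv1.AddToScheme(scheme))

	return scheme
}

A scheme built this way would then be passed to the controller-runtime manager (for example via the manager Options' Scheme field) so the client can resolve the ServiceMonitor group/version/kind during reconciliation.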
Created attachment 1738878 [details]: must-gather-output (must-gather logs)
The OCS 4.7 deployment completes successfully with v4.7.0-230.ci.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2041
A successful deployment will verify this bug. A specific test case is not needed.