Description of problem (please be as detailed as possible and provide log snippets):

Following the installation guide https://github.com/red-hat-storage/lvm-operator/blob/main/doc/usage/install.md, I get these errors in the vg-manager pods:

```
[root@fci1-installer ~]# oc logs vg-manager-c6fwc
I0420 19:34:16.037493 1916189 request.go:665] Waited for 1.036972575s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/template.openshift.io/v1?timeout=32s
{"level":"info","ts":1650483257.439126,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1650483257.439365,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1650483257.4394543,"msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1650483257.4396744,"logger":"controller.lvmvolumegroup","msg":"Starting EventSource","reconciler group":"lvm.topolvm.io","reconciler kind":"LVMVolumeGroup","source":"kind source: /, Kind="}
{"level":"info","ts":1650483257.4398167,"logger":"controller.lvmvolumegroup","msg":"Starting Controller","reconciler group":"lvm.topolvm.io","reconciler kind":"LVMVolumeGroup"}
E0420 19:34:17.442859 1916189 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:250: Failed to watch *v1alpha1.LVMVolumeGroup: failed to list *v1alpha1.LVMVolumeGroup: lvmvolumegroups.lvm.topolvm.io is forbidden: User "system:serviceaccount:openshift-odf-lvm:vg-manager" cannot list resource "lvmvolumegroups" in API group "lvm.topolvm.io" in the namespace "openshift-odf-lvm"
E0420 19:34:18.736224 1916189 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:250: Failed to watch *v1alpha1.LVMVolumeGroup: failed to list *v1alpha1.LVMVolumeGroup: lvmvolumegroups.lvm.topolvm.io is forbidden: User "system:serviceaccount:openshift-odf-lvm:vg-manager" cannot list resource "lvmvolumegroups" in API group "lvm.topolvm.io" in the namespace "openshift-odf-lvm"
E0420 19:34:21.053801 1916189 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:250: Failed to watch *v1alpha1.LVMVolumeGroup: failed to list *v1alpha1.LVMVolumeGroup: lvmvolumegroups.lvm.topolvm.io is forbidden: User "system:serviceaccount:openshift-odf-lvm:vg-manager" cannot list resource "lvmvolumegroups" in API group "lvm.topolvm.io" in the namespace "openshift-odf-lvm"
```

Version of all relevant components (if applicable):
4.10

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?
Cannot install and/or use lvm-operator.

Is there any workaround available to the best of your knowledge?
Yes, I was able to work around it by granting cluster-admin to the operator's service accounts:

```
oc adm policy add-cluster-role-to-user cluster-admin -z vg-manager
oc adm policy add-cluster-role-to-user cluster-admin -z topolvm-controller
oc adm policy add-cluster-role-to-user cluster-admin -z topolvm-node
```

but obviously a more fine-grained, permanent solution is needed.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
You mean the GUI? I don't know. Assuming the CLI is also a form of user interface, yes.

If this is a regression, please provide more details to justify this:
I don't know; I never used any prior version.

Steps to Reproduce:
Basically follow the guide (a rough manifest sketch for steps 2-4 is under Additional info below):
1. Install lvm-operator.
2. Create an LVMCluster.
3. Create a PVC.
4. Create a pod that uses that PVC.

Actual results:
vg-manager pods are failing.

Expected results:
The test pod runs successfully.

Additional info:
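The manifests for steps 2-4 were roughly along the following lines. This is only a sketch: the LVMCluster follows the sample from the install guide, while the storage class name, sizes, and image are illustrative (check `oc get sc` for the actual storage class name generated by the operator).

```
# Step 2: a minimal LVMCluster, as in the sample from the install guide
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster-sample
spec:
  storage:
    deviceClasses:
    - name: vg1
---
# Step 3: a PVC using the storage class created by the operator
# (storage class name assumed here; confirm with `oc get sc`)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: odf-lvm-vg1
---
# Step 4: a pod that mounts the PVC
apiVersion: v1
kind: Pod
metadata:
  name: lvm-test-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: lvm-test-pvc
```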
As discussed offline, the issue happens when using namespaces other than `openshift-storage`. Please use the `openshift-storage` namespace only until the issue is resolved.
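For example, an install into the suggested namespace would look roughly like the sketch below. The Subscription package and channel names are assumptions; take the exact values from the OperatorHub/catalog entry for your release.

```
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
  - openshift-storage
---
# Subscription sketch; package name and channel are assumptions,
# check the OperatorHub entry for the exact values
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-lvm-operator
  namespace: openshift-storage
spec:
  channel: stable-4.11
  name: odf-lvm-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```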
The CSV annotation `operatorframework.io/suggested-namespace: openshift-storage` has been added in release-4.11. This shows up in the console, so the user will know which namespace the operator should be installed in. We do not intend to support installing the operator in any other namespace as of now.
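For illustration, the annotation sits in the ClusterServiceVersion metadata like this (the CSV name below is only an example):

```
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: odf-lvm-operator.v4.11.0   # example name
  annotations:
    operatorframework.io/suggested-namespace: openshift-storage
```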
We haven't had a good build for some days now; I will move it once we have one.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156