Description of problem (please be as detailed as possible and provide log snippets):

Pod labels seem to be inconsistent between releases:

"410": {
    "controller_manager_label": "control-plane=controller-manager",
    "topolvm-controller_label": "app.kubernetes.io/name=topolvm-controller",
    "topolvm-node_label": "app=topolvm-node",
    "vg-manager_label": "app=vg-manager",
},
"411": {
    "controller_manager_label": "app.kubernetes.io/name=lvm-operator",
    "topolvm-controller_label": "app.lvm.openshift.io=topolvm-controller",
    "topolvm-node_label": "app.lvm.openshift.io=topolvm-node",
    "vg-manager_label": "app.lvm.openshift.io=vg-manager",
},

Would rather have something uniform.

Version of all relevant components (if applicable):
lvmo 4.10 and 4.11

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
yes

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
1

Can this issue reproduce from the UI?

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
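To illustrate the inconsistency, selecting the same pods requires a version-specific selector. A sketch using the label values from the maps above (the openshift-storage namespace is an assumption here; adjust -n to wherever LVMO is deployed):

# 4.10: topolvm-node pods are selected via the plain "app" label
oc get pods -n openshift-storage -l app=topolvm-node

# 4.11: the same pods need the "app.lvm.openshift.io" label instead
oc get pods -n openshift-storage -l app.lvm.openshift.io=topolvm-node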
I don't understand what the issue is:
- It's not GA'ed yet; we can rename as we please.
- What's the user impact, if any? What's the severity here?
This was raised and discussed with the lvmo devs, who want to follow the k8s guidelines. For example, a mon pod in an odf cluster has the following labels:

app.kubernetes.io/component=cephclusters.ceph.rook.io
app.kubernetes.io/created-by=rook-ceph-operator
app.kubernetes.io/instance=a
app.kubernetes.io/managed-by=rook-ceph-operator
app.kubernetes.io/name=ceph-mon
app.kubernetes.io/part-of=ocs-storagecluster-cephcluster
app=rook-ceph-mon
ceph_daemon_id=a
ceph_daemon_type=mon
mon=a
mon_cluster=openshift-storage
pod-template-hash=569dd9ccc9
pvc_name=rook-ceph-mon-a
pvc_size=50Gi
rook.io/operator-namespace=openshift-storage
rook_cluster=openshift-storage

The user impact, for instance, is not having the option to select all lvmo pods with a single label selector. Other than that, it is about following the k8s guidelines. There is already a proposal by the devs for new labels, but it is still being discussed (a selector example based on it follows below):

lvm-operator-controller-manager:
    app.kubernetes.io/name=lvm-operator
    app.kubernetes.io/component=operator
    app.kubernetes.io/part-of=odf-lvm-provisioner
    app.kubernetes.io/managed-by=odf-lvm-operator

topolvm-controller:
    app.kubernetes.io/name=topolvm-controller
    app.kubernetes.io/component=topolvm-csi-driver
    app.kubernetes.io/part-of=odf-lvm-provisioner
    app.kubernetes.io/managed-by=odf-lvm-operator

topolvm-node:
    app.kubernetes.io/name=topolvm-node
    app.kubernetes.io/component=topolvm-csi-driver
    app.kubernetes.io/part-of=odf-lvm-provisioner
    app.kubernetes.io/managed-by=odf-lvm-operator

vg-manager:
    app.kubernetes.io/name=vg-manager
    app.kubernetes.io/component=vg-manager
    app.kubernetes.io/part-of=odf-lvm-provisioner
    app.kubernetes.io/managed-by=odf-lvm-operator
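A sketch of how the proposal would address the user impact above: since all four workloads would share app.kubernetes.io/part-of=odf-lvm-provisioner, one selector matches them all (namespace again assumed to be openshift-storage):

oc get pods -n openshift-storage -l app.kubernetes.io/part-of=odf-lvm-provisioner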
The existing "app.lvm.openshift.io" label will be removed, and the recommended Kubernetes labels will be added to pods (actually to the deployments/daemonsets that create the pods) and services as a first step. The labels will be added to all resources created by LVMO in the future.
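Since the labels are set on the deployments/daemonsets rather than directly on pods, one way to verify them is to inspect the pod templates, which propagate to the pods they create. A sketch, assuming the resource names from the proposal and the openshift-storage namespace:

# Labels on the deployment's pod template
oc get deployment topolvm-controller -n openshift-storage \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'

# Same check for a daemonset-managed workload
oc get daemonset topolvm-node -n openshift-storage \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'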
oc get pods --show-labels
NAME                                               READY   STATUS    RESTARTS   AGE     LABELS
lvm-operator-controller-manager-6884967457-qw6h2   3/3     Running   0          3m26s   app.kubernetes.io/component=lvm-operator,app.kubernetes.io/name=lvm-operator,app.kubernetes.io/part-of=odf-lvm-provisioner,exporter=lvm-operator,pod-template-hash=6884967457
topolvm-controller-bcf9894b9-kbg7v                 5/5     Running   0          2m38s   app.kubernetes.io/component=topolvm-csi-driver,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=topolvm-controller,app.kubernetes.io/part-of=odf-lvm-provisioner,pod-template-hash=bcf9894b9
topolvm-node-f7fqx                                 4/4     Running   0          2m38s   app.kubernetes.io/component=topolvm-csi-driver,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=topolvm-node,app.kubernetes.io/part-of=odf-lvm-provisioner,controller-revision-hash=6576754786,pod-template-generation=1
vg-manager-r692p                                   1/1     Running   0          2m38s   app.kubernetes.io/component=vg-manager,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=vg-manager,app.kubernetes.io/part-of=odf-lvm-provisioner,controller-revision-hash=866b4c8b4f,pod-template-generation=1
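Per the labels shown in this output, the two CSI driver workloads now share a component label, so they can be listed together with a single selector (run in the same namespace as the listing above):

# Matches topolvm-controller and topolvm-node pods per the labels shown above
oc get pods -l app.kubernetes.io/component=topolvm-csi-driver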
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156