Bug 2100352 - Make lvmo pod labels more uniform
Summary: Make lvmo pod labels more uniform
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: lvm-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: N Balachandran
QA Contact: Shay Rozen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-23 07:37 UTC by Shay Rozen
Modified: 2023-08-09 16:46 UTC
CC List: 6 users

Fixed In Version: 4.11.0-105
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:55:09 UTC
Embargoed:




Links
Github red-hat-storage lvm-operator pull 219 (open): Bug 2100352: Adds recommended k8s labels - last updated 2022-06-24 13:35:56 UTC
Red Hat Product Errata RHSA-2022:6156 - last updated 2022-08-24 13:55:50 UTC

Description Shay Rozen 2022-06-23 07:37:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Pod labels appear to be inconsistent between 4.10 and 4.11:
    "410": {
        "controller_manager_label": "control-plane=controller-manager",
        "topolvm-controller_label": "app.kubernetes.io/name=topolvm-controller",
        "topolvm-node_label": "app=topolvm-node",
        "vg-manager_label": "app=vg-manager",
    },
    "411": {
        "controller_manager_label": "app.kubernetes.io/name=lvm-operator",
        "topolvm-controller_label": "app.lvm.openshift.io=topolvm-controller",
        "topolvm-node_label": "app.lvm.openshift.io=topolvm-node",
        "vg-manager_label": "app.lvm.openshift.io=vg-manager",
    },

We would rather have a uniform labelling scheme.
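
To make the impact concrete, here is a minimal Go sketch (an illustration, not LVMO code) showing that with the 4.11 labels above two different selectors are needed to cover all LVMO pods, because the controller-manager uses app.kubernetes.io/name while the other pods use the operator-specific app.lvm.openshift.io key:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Selector for the operator pod, which uses the recommended key.
	operatorSel := labels.SelectorFromSet(labels.Set{
		"app.kubernetes.io/name": "lvm-operator",
	})

	// A separate selector is needed for the topolvm/vg-manager pods, which
	// use the operator-specific key instead.
	otherSel, err := labels.Parse("app.lvm.openshift.io in (topolvm-controller, topolvm-node, vg-manager)")
	if err != nil {
		panic(err)
	}

	fmt.Println(operatorSel.String())
	fmt.Println(otherSel.String())
}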


Version of all relevant components (if applicable):
LVMO 4.10 and 4.11

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
yes

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 Yaniv Kaul 2022-06-23 08:22:14 UTC
I don't understand what the issue is:
- It's not GA'ed yet; we can rename as we please.
- What's the user impact, if any? What's the severity here?

Comment 3 Shay Rozen 2022-06-24 06:47:07 UTC
This came up after raising it with the LVMO developers, who want to follow the k8s labelling guidelines. For example, a mon pod in an ODF cluster has the following labels:
app.kubernetes.io/component=cephclusters.ceph.rook.io,
app.kubernetes.io/created-by=rook-ceph-operator,
app.kubernetes.io/instance=a,
app.kubernetes.io/managed-by=rook-ceph-operator,
app.kubernetes.io/name=ceph-mon,
app.kubernetes.io/part-of=ocs-storagecluster-cephcluster,
app=rook-ceph-mon,
ceph_daemon_id=a,
ceph_daemon_type=mon,
mon=a,
mon_cluster=openshift-storage,
pod-template-hash=569dd9ccc9,
pvc_name=rook-ceph-mon-a,
pvc_size=50Gi,
rook.io/operator-namespace=openshift-storage,
rook_cluster=openshift-storage

The user impact, for instance, is that there is no single label selector that matches all LVMO pods. Other than that, it is about following the k8s labelling guidelines.
There is already a proposal by the developers for new labels, but it is still being discussed (see the sketch after the proposed label sets below):

lvm-operator-controller-manager:
app.kubernetes.io/name=lvm-operator
app.kubernetes.io/component=operator
app.kubernetes.io/part-of=odf-lvm-provisioner
app.kubernetes.io/managed-by=odf-lvm-operator

topolvm-controller:
app.kubernetes.io/name=topolvm-controller
app.kubernetes.io/component=topolvm-csi-driver
app.kubernetes.io/part-of=odf-lvm-provisioner
app.kubernetes.io/managed-by=odf-lvm-operator

topolvm-node:
app.kubernetes.io/name=topolvm-node
app.kubernetes.io/component=topolvm-csi-driver
app.kubernetes.io/part-of=odf-lvm-provisioner
app.kubernetes.io/managed-by=odf-lvm-operator

vg-manager:
app.kubernetes.io/name=vg-manager
app.kubernetes.io/component=vg-manager
app.kubernetes.io/part-of=odf-lvm-provisioner
app.kubernetes.io/managed-by=odf-lvm-operator
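
With a uniform app.kubernetes.io/part-of label like the one proposed above, a single selector would match every LVMO pod. A minimal client-go sketch; the kubeconfig path and the openshift-storage namespace are assumptions, not taken from this bug:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One selector covers the operator, topolvm-controller, topolvm-node
	// and vg-manager pods once they all share the part-of label.
	pods, err := client.CoreV1().Pods("openshift-storage").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "app.kubernetes.io/part-of=odf-lvm-provisioner",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}

On the command line, the same selector can simply be passed to oc get pods with -l.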

Comment 4 N Balachandran 2022-06-24 11:36:02 UTC
The existing "app.lvm.openshift.io" label will be removed and the recommended Kubernetes labels will be added to the pods (actually to the deployments/daemonsets that create the pods) and to the services as a first step. The labels will be added to all resources created by LVMO in the future.
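
For illustration, a hedged Go sketch of how an operator could stamp the recommended labels onto a Deployment and its pod template; this is a hypothetical helper with names chosen to match the labels discussed here, not the code from the linked PR:

package resources

import (
	appsv1 "k8s.io/api/apps/v1"
)

// setRecommendedLabels stamps the recommended app.kubernetes.io/* labels on a
// Deployment and on its pod template, so pods created from it carry the same
// labels. Hypothetical helper; not the actual lvm-operator code.
func setRecommendedLabels(d *appsv1.Deployment, name, component string) {
	recommended := map[string]string{
		"app.kubernetes.io/name":       name,
		"app.kubernetes.io/component":  component,
		"app.kubernetes.io/part-of":    "odf-lvm-provisioner",
		"app.kubernetes.io/managed-by": "lvm-operator",
	}
	for _, target := range []*map[string]string{&d.Labels, &d.Spec.Template.Labels} {
		if *target == nil {
			*target = map[string]string{}
		}
		for k, v := range recommended {
			(*target)[k] = v
		}
	}
}

Note that spec.selector on an existing Deployment or DaemonSet is immutable, so changing which labels the selector matches generally means recreating the workload.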

Comment 7 Shay Rozen 2022-06-28 08:56:57 UTC
oc get pods --show-labels
NAME                                               READY   STATUS    RESTARTS   AGE     LABELS
lvm-operator-controller-manager-6884967457-qw6h2   3/3     Running   0          3m26s   app.kubernetes.io/component=lvm-operator,app.kubernetes.io/name=lvm-operator,app.kubernetes.io/part-of=odf-lvm-provisioner,exporter=lvm-operator,pod-template-hash=6884967457
topolvm-controller-bcf9894b9-kbg7v                 5/5     Running   0          2m38s   app.kubernetes.io/component=topolvm-csi-driver,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=topolvm-controller,app.kubernetes.io/part-of=odf-lvm-provisioner,pod-template-hash=bcf9894b9
topolvm-node-f7fqx                                 4/4     Running   0          2m38s   app.kubernetes.io/component=topolvm-csi-driver,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=topolvm-node,app.kubernetes.io/part-of=odf-lvm-provisioner,controller-revision-hash=6576754786,pod-template-generation=1
vg-manager-r692p                                   1/1     Running   0          2m38s   app.kubernetes.io/component=vg-manager,app.kubernetes.io/managed-by=lvm-operator,app.kubernetes.io/name=vg-manager,app.kubernetes.io/part-of=odf-lvm-provisioner,controller-revision-hash=866b4c8b4f,pod-template-generation=1

Comment 10 errata-xmlrpc 2022-08-24 13:55:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

