Description of problem:
enable user workload monitoring and check the pods' labels

# oc -n openshift-user-workload-monitoring get pod --show-labels
NAME                                   READY   STATUS    RESTARTS   AGE   LABELS
prometheus-operator-59564d65c5-4gmm6   2/2     Running   0          19m   app.kubernetes.io/component=controller,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.45.0,pod-template-hash=59564d65c5
prometheus-user-workload-0             5/5     Running   1          19m   app.kubernetes.io/component=prometheus,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.24.0,app=prometheus,controller-revision-hash=prometheus-user-workload-5b6d4764ff,operator.prometheus.io/name=user-workload,operator.prometheus.io/shard=0,prometheus=user-workload,statefulset.kubernetes.io/pod-name=prometheus-user-workload-0
prometheus-user-workload-1             5/5     Running   1          19m   app.kubernetes.io/component=prometheus,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.24.0,app=prometheus,controller-revision-hash=prometheus-user-workload-5b6d4764ff,operator.prometheus.io/name=user-workload,operator.prometheus.io/shard=0,prometheus=user-workload,statefulset.kubernetes.io/pod-name=prometheus-user-workload-1
thanos-ruler-user-workload-0           3/3     Running   0          19m   app=thanos-ruler,controller-revision-hash=thanos-ruler-user-workload-f8f8f9f54,statefulset.kubernetes.io/pod-name=thanos-ruler-user-workload-0,thanos-ruler=user-workload
thanos-ruler-user-workload-1           3/3     Running   0          19m   app=thanos-ruler,controller-revision-hash=thanos-ruler-user-workload-f8f8f9f54,statefulset.kubernetes.io/pod-name=thanos-ruler-user-workload-1,thanos-ruler=user-workload

*************************
The thanos-ruler pods are missing the following labels:
app.kubernetes.io/component
app.kubernetes.io/managed-by
app.kubernetes.io/name
app.kubernetes.io/part-of
app.kubernetes.io/version

and I did not find a label change for the thanos-ruler deployment in https://github.com/openshift/cluster-monitoring-operator/pull/1044/files

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-03-15-144314

How reproducible:
always

Steps to Reproduce:
1. enable user workload monitoring and check the pods' labels
2.
3.

Actual results:
thanos-ruler pods do not carry the app.kubernetes.io/* labels listed above

Expected results:
thanos-ruler pods carry the same app.kubernetes.io/* labels as the other pods in the namespace

Additional info:
if it is by design, we could close it
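To inspect only the thanos-ruler pods and their labels, a jsonpath query like the one below can be used (this query is an illustration and not part of the original report; it selects pods via the thanos-ruler=user-workload label shown in the output above):

oc -n openshift-user-workload-monitoring get pod -l thanos-ruler=user-workload \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'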
For UWM stack components (which are private to users spinning up user workloads), shared label prefixes like 'kubernetes.io' or 'app.kubernetes.io' would not be needed.

@
(In reply to SriKrishna from comment #1)
> For UWM stack components (which are private to users spinning up user
> workloads), shared label prefixes like 'kubernetes.io' or
> 'app.kubernetes.io' would not be needed.
>
> @

Why do other pods under openshift-user-workload-monitoring have those labels?
Assets deployed in the context of user workload monitoring (Prometheus, Thanos Ruler, etc.) are not modifiable by the user, hence they should get the aforementioned labels.
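For illustration, assuming the thanos-ruler pods should follow the same convention as the other pods in the namespace, the shared label set would look roughly like this (the values below are placeholders inferred from the labels visible in the description, not taken from the operator code):

app.kubernetes.io/component=thanos-ruler
app.kubernetes.io/managed-by=<managing operator>
app.kubernetes.io/name=thanos-ruler
app.kubernetes.io/part-of=openshift-monitoring
app.kubernetes.io/version=<thanos version>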
Test with payload 4.8.0-0.nightly-2021-05-06-162549

oc -n openshift-user-workload-monitoring get pod --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
prometheus-user-workload-0     5/5     Running   1          26s   app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=user-workload,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.26.0,app=prometheus,controller-revision-hash=prometheus-user-workload-684dc4bc,operator.prometheus.io/name=user-workload,operator.prometheus.io/shard=0,prometheus=user-workload,statefulset.kubernetes.io/pod-name=prometheus-user-workload-0
prometheus-user-workload-1     5/5     Running   1          26s   app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=user-workload,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.26.0,app=prometheus,controller-revision-hash=prometheus-user-workload-684dc4bc,operator.prometheus.io/name=user-workload,operator.prometheus.io/shard=0,prometheus=user-workload,statefulset.kubernetes.io/pod-name=prometheus-user-workload-1
thanos-ruler-user-workload-0   3/3     Running   0          21s   app.kubernetes.io/instance=user-workload,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=thanos-ruler,app=thanos-ruler,controller-revision-hash=thanos-ruler-user-workload-54969c8d7d,statefulset.kubernetes.io/pod-name=thanos-ruler-user-workload-0,thanos-ruler=user-workload
thanos-ruler-user-workload-1   3/3     Running   0          21s   app.kubernetes.io/instance=user-workload,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=thanos-ruler,app=thanos-ruler,controller-revision-hash=thanos-ruler-user-workload-54969c8d7d,statefulset.kubernetes.io/pod-name=thanos-ruler-user-workload-1,thanos-ruler=user-workload

The thanos-ruler pods are still missing the following labels:
app.kubernetes.io/component
app.kubernetes.io/part-of
app.kubernetes.io/version
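A quick way to spot-check one of the remaining labels on a single pod is a jsonpath lookup like the one below (an illustration, not part of the verification above; it prints nothing if the label is absent):

oc -n openshift-user-workload-monitoring get pod thanos-ruler-user-workload-0 \
  -o jsonpath='{.metadata.labels.app\.kubernetes\.io/component}'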
Confirmed with Dev that those 3 labels can be skipped. The most important ones were app.kubernetes.io/managed-by and app.kubernetes.io/name, which have been added, so changing the bug to VERIFIED. https://coreos.slack.com/archives/G79AW9Q7R/p1620375313047900
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438