Description of problem:
check pods' labels

# oc -n openshift-monitoring get pod --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE     LABELS
alertmanager-main-0                            5/5     Running   0          6h41m   alertmanager=main,app.kubernetes.io/component=alert-router,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.21.0,app=alertmanager,controller-revision-hash=alertmanager-main-f7c6966db,statefulset.kubernetes.io/pod-name=alertmanager-main-0
...
cluster-monitoring-operator-595c97cbdf-mtjwj   2/2     Running   0          6h39m   app=cluster-monitoring-operator,pod-template-hash=595c97cbdf
grafana-c89cf6765-hg28h                        2/2     Running   0          6h41m   app.kubernetes.io/component=grafana,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=7.3.5,pod-template-hash=c89cf6765
kube-state-metrics-d7f68ff5d-pm2wf             3/3     Running   0          6h39m   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=1.9.7,pod-template-hash=d7f68ff5d
node-exporter-4zhsq                            2/2     Running   0          7h15m   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=1.0.1,controller-revision-hash=6758f7cbff,pod-template-generation=1
...
openshift-state-metrics-5d5f954dd6-5cq25       3/3     Running   0          6h39m   k8s-app=openshift-state-metrics,pod-template-hash=5d5f954dd6
prometheus-adapter-68cb94f484-8gf7f            1/1     Running   0          92m     app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.8.2,pod-template-hash=68cb94f484
...
prometheus-k8s-0                               7/7     Running   1          6h41m   app.kubernetes.io/component=prometheus,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.24.0,app=prometheus,controller-revision-hash=prometheus-k8s-669877fb77,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-0
...
prometheus-operator-5bd788647f-mw78w           2/2     Running   0          6h41m   app.kubernetes.io/component=controller,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.45.0,pod-template-hash=5bd788647f
telemeter-client-659f78dd6-fpg4b               3/3     Running   0          6h41m   k8s-app=telemeter-client,pod-template-hash=659f78dd6
thanos-querier-65bf7cf788-b8mxg                5/5     Running   0          2m37s   app.kubernetes.io/component=query-layer,app.kubernetes.io/instance=thanos-querier,app.kubernetes.io/name=thanos-query,app.kubernetes.io/version=0.17.2,pod-template-hash=65bf7cf788

**********************************************************************************
The following pods:

cluster-monitoring-operator-595c97cbdf-mtjwj   2/2     Running   0          6h39m   app=cluster-monitoring-operator,pod-template-hash=595c97cbdf
openshift-state-metrics-5d5f954dd6-5cq25       3/3     Running   0          6h39m   k8s-app=openshift-state-metrics,pod-template-hash=5d5f954dd6
telemeter-client-659f78dd6-fpg4b               3/3     Running   0          6h41m   k8s-app=telemeter-client,pod-template-hash=659f78dd6

miss the labels:

app.kubernetes.io/component
app.kubernetes.io/managed-by
app.kubernetes.io/name
app.kubernetes.io/part-of
app.kubernetes.io/version

**********************************
The pod:

thanos-querier-65bf7cf788-b8mxg                5/5     Running   0          2m37s   app.kubernetes.io/component=query-layer,app.kubernetes.io/instance=thanos-querier,app.kubernetes.io/name=thanos-query,app.kubernetes.io/version=0.17.2,pod-template-hash=65bf7cf788

misses the labels:

app.kubernetes.io/managed-by
app.kubernetes.io/part-of

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-03-15-144314

How reproducible:
always

Steps to Reproduce:
1. check pods' labels
2.
3.

Actual results:
some monitoring pods are missing part or all of the recommended app.kubernetes.io/* labels

Expected results:
all monitoring pods carry the full set of app.kubernetes.io/* labels

Additional info:
if this is by design, we could close the bug
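As an aside, the per-pod check described above can be scripted. The following is a hypothetical helper (`check_labels` is not part of this report) that scans `oc -n openshift-monitoring get pod --show-labels` output for the five recommended app.kubernetes.io/* labels; the sample input line is taken from the listing above.

```shell
# Hypothetical helper (not from the bug report): read "NAME ... LABELS" lines
# as printed by `oc get pod --show-labels` (pod name first, labels in the last
# column) and report which recommended app.kubernetes.io/* labels are absent.
check_labels() {
  while read -r name rest; do
    labels=${rest##* }                       # labels are the last column
    for want in component managed-by name part-of version; do
      case ",$labels," in
        *",app.kubernetes.io/$want="*) ;;    # label present, nothing to do
        *) echo "$name missing app.kubernetes.io/$want" ;;
      esac
    done
  done
}

# Sample line from the listing above (skip the NAME header row when piping):
check_labels <<'EOF'
telemeter-client-659f78dd6-fpg4b 3/3 Running 0 6h41m k8s-app=telemeter-client,pod-template-hash=659f78dd6
EOF
```

For the telemeter-client sample line this prints one "missing" line per absent label, matching the list of five labels noted above.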
Created attachment 1763595 [details]
CMO/openshift-state-metrics/telemeter-client/thanos-querier deployment files
the fix is in 4.10.0-0.nightly-2021-09-17-190348

# oc -n openshift-monitoring get pod --show-labels
NAME                                           READY   STATUS    RESTARTS      AGE   LABELS
alertmanager-main-0                            5/5     Running   0             66m   alertmanager=main,app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.22.2,app=alertmanager,controller-revision-hash=alertmanager-main-6548c7866,statefulset.kubernetes.io/pod-name=alertmanager-main-0
cluster-monitoring-operator-745c46b676-f42rv   2/2     Running   3 (77m ago)   79m   app=cluster-monitoring-operator,pod-template-hash=745c46b676
grafana-744dfb5d65-zjdp4                       2/2     Running   0             66m   app.kubernetes.io/component=grafana,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=7.5.5,pod-template-hash=744dfb5d65
kube-state-metrics-6c45bcfd65-5r9vl            3/3     Running   0             77m   app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.0.0,pod-template-hash=6c45bcfd65
node-exporter-74v2k                            2/2     Running   0             77m   app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=1.1.2,controller-revision-hash=6f94d8dfc7,pod-template-generation=1
openshift-state-metrics-754768c4f9-95kp7       3/3     Running   0             77m   app.kubernetes.io/component=exporter,app.kubernetes.io/name=openshift-state-metrics,pod-template-hash=754768c4f9
prometheus-adapter-d5946c687-kp5mt             1/1     Running   0             66m   app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.9.0,pod-template-hash=d5946c687
prometheus-k8s-0                               7/7     Running   0             66m   app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.29.2,app=prometheus,controller-revision-hash=prometheus-k8s-79f8f59546,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-0
prometheus-operator-79f9cf5b94-rslkj           2/2     Running   1 (77m ago)   77m   app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.50.0,pod-template-hash=79f9cf5b94
telemeter-client-88896f597-7ffnj               3/3     Running   0             66m   app.kubernetes.io/component=telemetry-metrics-collector,app.kubernetes.io/name=telemeter-client,pod-template-hash=88896f597
thanos-querier-6488c5cd8f-9j5zl                5/5     Running   0             13m   app.kubernetes.io/component=query-layer,app.kubernetes.io/instance=thanos-querier,app.kubernetes.io/name=thanos-query,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.22.0,pod-template-hash=6488c5cd8f

there is no "app.kubernetes.io/managed-by: cluster-monitoring-operator" label on the grafana/kube-state-metrics/node-exporter/openshift-state-metrics/prometheus-adapter/prometheus-operator/telemeter-client/thanos-querier pods. Checked the deployment files: the label is under metadata.labels, not under spec.template.metadata.labels. Example:

# oc -n openshift-monitoring get deploy kube-state-metrics -oyaml
metadata:
  ...
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/managed-by: cluster-monitoring-operator
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/part-of: openshift-monitoring
    app.kubernetes.io/version: 2.0.0
  ...
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/part-of: openshift-monitoring
        app.kubernetes.io/version: 2.0.0

*****************************
if we add the label under spec.template.metadata.labels, we do see it on the pod:

# oc -n openshift-monitoring get pod --show-labels | grep kube-state-metrics
kube-state-metrics-df8df7758-wsbx7   3/3   Running   0   4m7s   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.0.0,pod-template-hash=df8df7758
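As a sketch of that workaround (hypothetical; the actual fix belongs in the cluster-monitoring-operator manifests, not a live patch), a merge patch like the following adds the label under the pod template so that newly created pods inherit it:

```yaml
# Illustrative merge patch (not from the report): applied to the Deployment,
# e.g. with `oc -n openshift-monitoring patch deploy kube-state-metrics
# --type merge --patch-file <file>` or an equivalent JSON -p argument.
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: cluster-monitoring-operator
```

Changing the pod template triggers a rollout, which is why the grep above shows a new pod (kube-state-metrics-df8df7758-wsbx7) carrying the label.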
tested with https://github.com/openshift/cluster-monitoring-operator/pull/1442; the issue in Comment 6 is fixed, but "app.kubernetes.io/part-of" is still missing for openshift-state-metrics/telemeter-client, so that label also needs to be added under spec.template.metadata.labels

# oc -n openshift-monitoring get pod --show-labels
NAME                                           READY   STATUS    RESTARTS      AGE   LABELS
alertmanager-main-0                            6/6     Running   0             16m   alertmanager=main,app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.22.2,app=alertmanager,controller-revision-hash=alertmanager-main-8858b84fb,statefulset.kubernetes.io/pod-name=alertmanager-main-0
cluster-monitoring-operator-d494cc9db-264d4    2/2     Running   4 (24m ago)   30m   app.kubernetes.io/name=cluster-monitoring-operator,app=cluster-monitoring-operator,pod-template-hash=d494cc9db
grafana-58b8d6847d-r9q28                       3/3     Running   0             16m   app.kubernetes.io/component=grafana,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=7.5.11,pod-template-hash=58b8d6847d
kube-state-metrics-5c455fc849-w9fzd            3/3     Running   0             24m   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.2.3,pod-template-hash=5c455fc849
node-exporter-8js9w                            2/2     Running   0             24m   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=1.2.2,controller-revision-hash=56cdc74cfc,pod-template-generation=1
openshift-state-metrics-7df45cfd8c-dvp6q       3/3     Running   0             24m   app.kubernetes.io/component=exporter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=openshift-state-metrics,pod-template-hash=7df45cfd8c
prometheus-adapter-68d4d8ff66-j8bl2            1/1     Running   0             18m   app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.9.0,pod-template-hash=68d4d8ff66
prometheus-k8s-0                               6/6     Running   0             16m   app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.30.3,app=prometheus,controller-revision-hash=prometheus-k8s-546cdbcfdb,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-0
prometheus-operator-5d577847dc-jm72j           2/2     Running   1 (24m ago)   25m   app.kubernetes.io/component=controller,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.51.2,pod-template-hash=5d577847dc
telemeter-client-7589698b-764cr                3/3     Running   0             24m   app.kubernetes.io/component=telemetry-metrics-collector,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=telemeter-client,pod-template-hash=7589698b
thanos-querier-dd9dcdb5-jtd4b                  6/6     Running   0             16m   app.kubernetes.io/component=query-layer,app.kubernetes.io/instance=thanos-querier,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=thanos-query,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=0.22.0,pod-template-hash=dd9dcdb5

# oc -n openshift-monitoring get deploy openshift-state-metrics -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-10-26T02:32:20Z"
  generation: 1
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/managed-by: cluster-monitoring-operator
    app.kubernetes.io/name: openshift-state-metrics
    app.kubernetes.io/part-of: openshift-monitoring
...
spec:
...
  template:
    metadata:
      annotations:
        target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/managed-by: cluster-monitoring-operator
        app.kubernetes.io/name: openshift-state-metrics
    spec:
      containers:
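This kind of mismatch can also be spotted mechanically. As a sketch (hypothetical, not part of the report, and assuming `jq` is available), the following filter lists Deployment labels that are absent from the pod template; the inlined input is a trimmed sample modeled on the openshift-state-metrics manifest above.

```shell
# Hypothetical check (requires jq): given a Deployment in JSON form, e.g. from
# `oc get deploy <name> -o json`, print each key that appears under
# metadata.labels but not under spec.template.metadata.labels.
labels_not_propagated() {
  jq -r '
    .spec.template.metadata.labels as $tpl
    | .metadata.labels
    | to_entries[]
    | select($tpl[.key] == null)
    | .key'
}

# Trimmed sample modeled on the openshift-state-metrics Deployment above:
labels_not_propagated <<'EOF'
{"metadata": {"labels": {
   "app.kubernetes.io/component": "exporter",
   "app.kubernetes.io/managed-by": "cluster-monitoring-operator",
   "app.kubernetes.io/name": "openshift-state-metrics",
   "app.kubernetes.io/part-of": "openshift-monitoring"}},
 "spec": {"template": {"metadata": {"labels": {
   "app.kubernetes.io/component": "exporter",
   "app.kubernetes.io/managed-by": "cluster-monitoring-operator",
   "app.kubernetes.io/name": "openshift-state-metrics"}}}}}
EOF
# prints: app.kubernetes.io/part-of
```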
I updated the PR, hence changing the status to POST.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056