Bug 1950908 - kube_pod_labels metric does not contain k8s labels
Summary: kube_pod_labels metric does not contain k8s labels
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Pawel Krupa
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-19 06:44 UTC by Erkan Erol
Modified: 2021-07-27 23:01 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 23:01:42 UTC
Target Upstream Version:
Embargoed:


Attachments
Screenshot from Openshift 4.8 (312.35 KB, image/png), 2021-04-19 06:44 UTC, Erkan Erol
Screenshot from Openshift 4.6.24 (246.72 KB, image/png), 2021-04-19 06:45 UTC, Erkan Erol


Links
Github openshift cluster-monitoring-operator pull 1145 (open): Bug 1950908: Allow all pod labels in metric labels (2021-05-04 11:52:48 UTC)
Github openshift kube-state-metrics pull 51 (open): Bug 1950908: Add wildcard option to labels-metric-allow-list (2021-05-05 07:38:03 UTC)
Red Hat Product Errata RHSA-2021:2438 (2021-07-27 23:01:58 UTC)

Description Erkan Erol 2021-04-19 06:44:09 UTC
Created attachment 1773199 [details]
Screenshot from Openshift 4.8

Description of problem:


Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-09-101800


Steps to Reproduce:
1. Install a 4.8 cluster
2. Check "kube_pod_labels" metric

Actual results:
The metric has these labels: container, endpoint, job, namespace, pod, service

Expected results:
The metric should include the pod's Kubernetes labels as metric labels.

Comment 1 Erkan Erol 2021-04-19 06:45:34 UTC
Created attachment 1773200 [details]
Screenshot from Openshift 4.6.24

Comment 2 Pawel Krupa 2021-04-19 07:27:42 UTC
We moved to kube-state-metrics v2, which is responsible for creating the `kube_pod_labels` metric. In this version there is no option to expose all labels on this metric; we can only set an allow-list of labels. This is done to prevent cardinality explosion and reduce Prometheus memory consumption.

Which labels do you want set on the kube_pod_labels metric?
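
For context, the restriction comes from kube-state-metrics v2's per-resource allow-list flag. A sketch of the invocation (flag name as in the v2 CLI docs; the label choices here are illustrative, and the exact syntax should be checked against your version):

```shell
# kube-state-metrics v2 only emits labels explicitly allow-listed per resource.
# Illustrative invocation, not the operator's actual configuration:
kube-state-metrics \
  --metric-labels-allowlist=pods=[app.kubernetes.io/component,app.kubernetes.io/name]
```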

Comment 3 Erkan Erol 2021-04-19 08:03:29 UTC
Hi Pawel. Thanks for the explanation. It seems this is a breaking change to the metric.

I need `app.kubernetes.io/component` for my current task to compute resource consumption per component. I am using this metric for aggregation since the other metrics don't include k8s labels. Here is my query: https://github.com/erkanerol/visualize-k8s-labels/blob/main/metrics.sh#L5


I may also need more queries based on the labels recommended by the Kubernetes community. See https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/

You might consider adding these labels to the allow-list.

Is there any alternative metric that contains pod-k8s labels info?
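
A common pattern for this kind of aggregation is to join kube_pod_labels onto a resource metric in PromQL. A sketch of a per-component CPU-request aggregation (assumes `app.kubernetes.io/component` is on the allow-list; the metric names are the usual kube-state-metrics ones):

```promql
# Per-component CPU requests, attaching the pod's k8s label via a many-to-one join:
sum by (label_app_kubernetes_io_component) (
    kube_pod_container_resource_requests{resource="cpu"}
  * on (namespace, pod) group_left (label_app_kubernetes_io_component)
    max by (namespace, pod, label_app_kubernetes_io_component) (kube_pod_labels)
)
```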

Comment 4 Erkan Erol 2021-04-21 19:12:13 UTC
Hi,

Is there a target release for this issue?

Best,
Erkan

Comment 11 Junqi Zhao 2021-05-08 03:11:45 UTC
The issue is fixed with 4.8.0-0.nightly-2021-05-07-075528. Example:
# oc -n openshift-monitoring get pod prometheus-k8s-0 -oyaml | grep labels -A14
  labels:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/managed-by: prometheus-operator
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: openshift-monitoring
    app.kubernetes.io/version: 2.26.0
    controller-revision-hash: prometheus-k8s-5bbbffd649
    operator.prometheus.io/name: k8s
    operator.prometheus.io/shard: "0"
    prometheus: k8s
    statefulset.kubernetes.io/pod-name: prometheus-k8s-0
  name: prometheus-k8s-0
  namespace: openshift-monitoring

Search with kube_pod_labels{pod="prometheus-k8s-0"}:
# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_pod_labels%7Bpod%3D%22prometheus-k8s-0%22%7D' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kube_pod_labels",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "job": "kube-state-metrics",
          "label_app": "prometheus",
          "label_app_kubernetes_io_component": "prometheus",
          "label_app_kubernetes_io_instance": "k8s",
          "label_app_kubernetes_io_managed_by": "prometheus-operator",
          "label_app_kubernetes_io_name": "prometheus",
          "label_app_kubernetes_io_part_of": "openshift-monitoring",
          "label_app_kubernetes_io_version": "2.26.0",
          "label_controller_revision_hash": "prometheus-k8s-5bbbffd649",
          "label_operator_prometheus_io_name": "k8s",
          "label_operator_prometheus_io_shard": "0",
          "label_prometheus": "k8s",
          "label_statefulset_kubernetes_io_pod_name": "prometheus-k8s-0",
          "namespace": "openshift-monitoring",
          "pod": "prometheus-k8s-0",
          "service": "kube-state-metrics"
        },
        "value": [
          1620439161.148,
          "1"
        ]
      }
    ]
  }
}
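
As an aside, curl can do the URL-encoding itself, which avoids hand-encoding the PromQL expression in the query string (same endpoint and token as above; a sketch):

```shell
# --get with --data-urlencode lets curl build the encoded query string:
oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
  curl -k -H "Authorization: Bearer $token" --get \
  --data-urlencode 'query=kube_pod_labels{pod="prometheus-k8s-0"}' \
  'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query' | jq
```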

Comment 12 Erkan Erol 2021-05-13 11:20:15 UTC
I verified the fix on 4.8.0-0.nightly-2021-05-13-002125 as well. Thanks!

Comment 15 errata-xmlrpc 2021-07-27 23:01:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

