Created attachment 1773199 [details]
Screenshot from Openshift 4.8

Description of problem:
On a 4.8 cluster, the kube_pod_labels metric no longer carries the pods' Kubernetes labels; only the default series labels are present.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-09-101800

Steps to Reproduce:
1. Install a 4.8 cluster
2. Check the "kube_pod_labels" metric

Actual results:
The metric has only these labels: container, endpoint, job, namespace, pod, service

Expected results:
It must have the k8s labels of the pod as metric labels (as in 4.6.24; see the attached screenshots).
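A quick way to check step 2 from the CLI (a sketch; it assumes query access via the prometheus-k8s service account token, the same approach used in the verification further down):

# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_pod_labels' | jq '.data.result[0].metric'

On the affected nightly, only the container, endpoint, job, namespace, pod and service labels come back.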
Created attachment 1773200 [details] Screenshot from Openshift 4.6.24
We moved to kube-state-metrics v2, which is responsible for creating the `kube_pod_labels` metric. In this version there is no option to set all labels on this metric; we can only set an allow-list of labels. This is done to prevent cardinality explosion and to reduce Prometheus memory consumption. Which labels do you want to be set on the `kube_pod_labels` metric?
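For reference, kube-state-metrics v2 exposes the allow-list through its --metric-labels-allowlist flag. A minimal sketch of enabling two pod label keys (illustrative values only, not the actual cluster-monitoring-operator configuration):

# Only the listed label keys are turned into label_* labels on kube_pod_labels:
kube-state-metrics --metric-labels-allowlist=pods=[app.kubernetes.io/component,app.kubernetes.io/name]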
Hi Pawel. Thanks for the explanation. It seems there is a breaking change in this metric.

I need `app.kubernetes.io/component` for my current task, which is to compute resource consumption per component. I am using this metric for aggregation since the other metrics don't include k8s labels. Here is my query (see the sketch below for the general shape of the join): https://github.com/erkanerol/visualize-k8s-labels/blob/main/metrics.sh#L5

I may need more queries based on the labels recommended by the k8s community. See https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ You may want to consider adding these labels to the allow-list.

Is there any alternative metric that contains the pods' k8s labels?
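Not the exact query from the link above, but the join pattern such an aggregation relies on looks roughly like this (container_memory_working_set_bytes is a stand-in for whatever resource metric is being aggregated):

# Sum a resource metric per app.kubernetes.io/component by matching on
# (namespace, pod) and pulling the label_* label over from kube_pod_labels:
sum by (label_app_kubernetes_io_component) (
    container_memory_working_set_bytes{container!=""}
  * on (namespace, pod) group_left (label_app_kubernetes_io_component)
    kube_pod_labels{label_app_kubernetes_io_component!=""}
)

Since kube_pod_labels always has the value 1, the multiplication leaves the resource metric's values untouched and only attaches the label.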
Hi,

Is there a target release for this issue?

Best,
Erkan
Issue is fixed with 4.8.0-0.nightly-2021-05-07-075528, example:

# oc -n openshift-monitoring get pod prometheus-k8s-0 -oyaml | grep labels -A14
  labels:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/managed-by: prometheus-operator
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: openshift-monitoring
    app.kubernetes.io/version: 2.26.0
    controller-revision-hash: prometheus-k8s-5bbbffd649
    operator.prometheus.io/name: k8s
    operator.prometheus.io/shard: "0"
    prometheus: k8s
    statefulset.kubernetes.io/pod-name: prometheus-k8s-0
  name: prometheus-k8s-0
  namespace: openshift-monitoring

Search with kube_pod_labels{pod="prometheus-k8s-0"}:

# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_pod_labels%7Bpod%3D%22prometheus-k8s-0%22%7D' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kube_pod_labels",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "job": "kube-state-metrics",
          "label_app": "prometheus",
          "label_app_kubernetes_io_component": "prometheus",
          "label_app_kubernetes_io_instance": "k8s",
          "label_app_kubernetes_io_managed_by": "prometheus-operator",
          "label_app_kubernetes_io_name": "prometheus",
          "label_app_kubernetes_io_part_of": "openshift-monitoring",
          "label_app_kubernetes_io_version": "2.26.0",
          "label_controller_revision_hash": "prometheus-k8s-5bbbffd649",
          "label_operator_prometheus_io_name": "k8s",
          "label_operator_prometheus_io_shard": "0",
          "label_prometheus": "k8s",
          "label_statefulset_kubernetes_io_pod_name": "prometheus-k8s-0",
          "namespace": "openshift-monitoring",
          "pod": "prometheus-k8s-0",
          "service": "kube-state-metrics"
        },
        "value": [
          1620439161.148,
          "1"
        ]
      }
    ]
  }
}
I verified the fix on 4.8.0-0.nightly-2021-05-13-002125 as well. Thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438