Created attachment 1563660 [details]
prometheus-k8s pod logs

Description of problem:
After installing a fresh environment, the prometheus container logs the error "many-to-many matching not allowed: matching labels must be unique on one side". It does not appear to affect functionality, but it is not user friendly; a fresh environment should not show such an error.

Version-Release number of selected component (if applicable):
4.1.0-0.nightly-2019-05-04-210601

How reproducible:
Always

Steps to Reproduce:
1. oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0

Actual results:
The error "many-to-many matching not allowed: matching labels must be unique on one side" appears in the prometheus container logs.

Expected results:
A fresh environment should not show such an error.

Additional info:
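For quick verification, the warning can be filtered out of the prometheus container logs with the same command as step 1, e.g.:

# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0 | grep "many-to-many matching not allowed"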
Created attachment 1568328 [details]
error in Kubernetes / USE Method / Cluster page

"many-to-many matching not allowed: matching labels must be unique on one side" also shows in the Grafana "Kubernetes / USE Method / Cluster" page if there is no PV.
I've made a PR that most likely fixes the issue upstream: https://github.com/kubernetes-monitoring/kubernetes-mixin/pull/203

Once it's merged, we need to trickle it down through kube-prometheus into the cluster-monitoring-operator.
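As a generic illustration only (this sketches the usual deduplication pattern and is not necessarily what the PR above does; the kube_pod_status_phase filter and the pod_name relabel are dropped for brevity), the error goes away once the right-hand side of the join is collapsed to a single series per (namespace, pod):

  sum by(namespace, label_name) (
      sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job="kube-state-metrics"})
    * on(namespace, pod) group_left(label_name)
      max by(namespace, pod, label_name) (kube_pod_labels{job="kube-state-metrics"})
  )

The max by(...) is a no-op when kube-state-metrics exports exactly one kube_pod_labels series per pod, but it deduplicates series that differ only in bookkeeping labels.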
Still can see the warn message.

payload: 4.2.0-0.ci-2019-06-19-224917

# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0 | grep "many-to-many"

eg:

level=warn ts=2019-06-20T01:14:25.758Z caller=manager.go:512 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"}\n * on(endpoint, instance, job, namespace, pod, service) group_left(phase) (kube_pod_status_phase{phase=~\"^(Pending|Running)$\"}\n == 1)) * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="found duplicate series for the match group {namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\"} on the right hand-side of the operation: [{__name__=\"kube_pod_labels\", endpoint=\"https-main\", instance=\"10.131.0.5:8443\", job=\"kube-state-metrics\", label_apiserver=\"true\", label_app=\"openshift-kube-apiserver\", label_revision=\"7\", namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", pod_name=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", service=\"kube-state-metrics\"}, {__name__=\"kube_pod_labels\", endpoint=\"https-main\", instance=\"10.131.0.5:8443\", job=\"kube-state-metrics\", label_apiserver=\"true\", label_app=\"openshift-kube-apiserver\", label_revision=\"5\", namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", pod_name=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", service=\"kube-state-metrics\"}];many-to-many matching not allowed: matching labels must be unique on one side"

level=warn ts=2019-06-20T01:14:25.764Z caller=manager.go:512 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n * on(endpoint, instance, job, namespace, pod, service) group_left(phase) (kube_pod_status_phase{phase=~\"^(Pending|Running)$\"}\n == 1)) * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="found duplicate series for the match group {namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\"} on the right hand-side of the operation: [{__name__=\"kube_pod_labels\", endpoint=\"https-main\", instance=\"10.131.0.5:8443\", job=\"kube-state-metrics\", label_apiserver=\"true\", label_app=\"openshift-kube-apiserver\", label_revision=\"7\", namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", pod_name=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", service=\"kube-state-metrics\"}, {__name__=\"kube_pod_labels\", endpoint=\"https-main\", instance=\"10.131.0.5:8443\", job=\"kube-state-metrics\", label_apiserver=\"true\", label_app=\"openshift-kube-apiserver\", label_revision=\"5\", namespace=\"openshift-kube-apiserver\", pod=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", pod_name=\"kube-apiserver-ip-10-0-158-229.us-east-2.compute.internal\", service=\"kube-state-metrics\"}];many-to-many matching not allowed: matching labels must be unique on one side"
Created attachment 1603236 [details]
other rules with "many-to-many matching not allowed: matching labels must be unique on one side" error

Tested on 4.1.1. Besides the rules in Comment 2, there are other rules with the "many-to-many matching not allowed: matching labels must be unique on one side" error, eg:

cluster:cpu_usage_cores:sum
mixin_pod_workload
node:node_cpu_utilisation:avg1m

Did not find these errors on 4.2.
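To enumerate the affected recording rules on a given cluster, the rule names can be pulled out of the same warnings, for example (assumes GNU grep on the client side; illustration only):

# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0 | grep "many-to-many" | grep -o 'record: [a-zA-Z0-9_:]*' | sort -u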
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days