Description of problem:

# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0
...
level=warn ts=2020-12-02T00:16:36.716Z caller=manager.go:598 component="rule manager" group=kubernetes.rules msg="Evaluating rule failed" rule="record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\nexpr: sum by(provisioner) (kube_persistentvolumeclaim_resource_requests_storage_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))\n" err="found duplicate series for the match group {storageclass=\"gp2-csi\"} on the right hand-side of the operation: [{__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.129.2.14:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-z78cj\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}, {__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.128.2.10:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-qjl75\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2020-12-02T00:16:36.716Z caller=manager.go:598 component="rule manager" group=kubernetes.rules msg="Evaluating rule failed" rule="record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\nexpr: sum by(provisioner) (kubelet_volume_stats_used_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))\n" err="found duplicate series for the match group {storageclass=\"gp2-csi\"} on the right hand-side of the operation: [{__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.129.2.14:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-z78cj\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}, {__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.128.2.10:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-qjl75\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}];many-to-many matching not allowed: matching labels must be unique on one side"
...

# oc get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   7h2m
gp2-csi         ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   7h2m

# oc get pvc --all-namespaces
No resources found

# oc get pv --all-namespaces
No resources found

*************************************************************************
record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum
expr: sum by(provisioner) (kube_persistentvolumeclaim_resource_requests_storage_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))

# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_resource_requests_storage_bytes' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_storageclass_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kube_storageclass_info",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "instance": "10.128.2.10:8443",
          "job": "kube-state-metrics",
          "namespace": "openshift-monitoring",
          "pod": "kube-state-metrics-bb6546c66-qjl75",
          "provisioner": "ebs.csi.aws.com",
          "reclaimPolicy": "Delete",
          "service": "kube-state-metrics",
          "storageclass": "gp2-csi",
          "volumeBindingMode": "WaitForFirstConsumer"
        },
        "value": [
          1606892763.624,
          "1"
        ]
      },
      {
        "metric": {
          "__name__": "kube_storageclass_info",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "instance": "10.128.2.10:8443",
          "job": "kube-state-metrics",
          "namespace": "openshift-monitoring",
          "pod": "kube-state-metrics-bb6546c66-qjl75",
          "provisioner": "kubernetes.io/aws-ebs",
          "reclaimPolicy": "Delete",
          "service": "kube-state-metrics",
          "storageclass": "gp2",
          "volumeBindingMode": "WaitForFirstConsumer"
        },
        "value": [
          1606892763.624,
          "1"
        ]
      }
    ]
  }
}

*************************************************************************
record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum
expr: sum by(provisioner) (kubelet_volume_stats_used_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster:kubelet_volume_stats_used_bytes:provisioner:sum' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kubelet_volume_stats_used_bytes' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2020-11-30-172451

How reproducible:
frequently

Steps to Reproduce:
1. see from the description
2.
3.

Actual results:

Expected results:

Additional info:
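For context on the failure mode: the error message shows two kube_storageclass_info series for the same storageclass ("gp2-csi"), differing only in instance/pod labels because they come from two kube-state-metrics replicas (pods z78cj and qjl75). Since the join is `on(storageclass)`, Prometheus sees duplicate match-group members on the right-hand side and rejects the many-to-many match. One way to make such a rule tolerant of duplicate exporters (a sketch for illustration only, not necessarily the fix that shipped in the errata) is to aggregate the info metric down to the join labels before matching:

```promql
# Illustrative rewrite of the failing recording rule: collapse duplicate
# kube_storageclass_info series (one per kube-state-metrics replica) to a
# single series per (storageclass, provisioner) before the group_left join.
sum by(provisioner) (
  kube_persistentvolumeclaim_resource_requests_storage_bytes
    * on(namespace, persistentvolumeclaim) group_right()
  (
    kube_persistentvolumeclaim_info
      * on(storageclass) group_left(provisioner)
    max by(storageclass, provisioner) (kube_storageclass_info)
  )
)
```

With the `max by(storageclass, provisioner)` aggregation, the right-hand side of the `on(storageclass)` match is unique per storageclass regardless of how many kube-state-metrics pods are scraped, so the evaluation no longer fails during replica overlap (e.g. a rolling update).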
*** Bug 1879520 has been marked as a duplicate of this bug. ***
*** Bug 1920569 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633