Bug 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.7.0
Assignee: Pawel Krupa
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 1879520 1920569 (view as bug list)
Depends On:
Blocks: 1897674 1907830 1908566
 
Reported: 2020-12-02 07:12 UTC by Junqi Zhao
Modified: 2021-02-24 15:37 UTC (History)
CC: 16 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1907830 1908566 (view as bug list)
Environment:
Last Closed: 2021-02-24 15:37:21 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 965 0 None closed Bug 1903464: jsonnet: fix recording rules with many-to-many matching errors 2021-02-21 08:25:52 UTC
Red Hat Knowledge Base (Solution) 5584091 0 None None None 2020-12-04 13:16:46 UTC
Red Hat Knowledge Base (Solution) 5594541 0 None None None 2020-12-04 13:16:46 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:37:55 UTC

Description Junqi Zhao 2020-12-02 07:12:22 UTC
Description of problem:
# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0
...
level=warn ts=2020-12-02T00:16:36.716Z caller=manager.go:598 component="rule manager" group=kubernetes.rules msg="Evaluating rule failed" rule="record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\nexpr: sum by(provisioner) (kube_persistentvolumeclaim_resource_requests_storage_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))\n" err="found duplicate series for the match group {storageclass=\"gp2-csi\"} on the right hand-side of the operation: [{__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.129.2.14:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-z78cj\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}, {__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.128.2.10:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-qjl75\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2020-12-02T00:16:36.716Z caller=manager.go:598 component="rule manager" group=kubernetes.rules msg="Evaluating rule failed" rule="record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\nexpr: sum by(provisioner) (kubelet_volume_stats_used_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))\n" err="found duplicate series for the match group {storageclass=\"gp2-csi\"} on the right hand-side of the operation: [{__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.129.2.14:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-z78cj\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}, {__name__=\"kube_storageclass_info\", container=\"kube-rbac-proxy-main\", endpoint=\"https-main\", instance=\"10.128.2.10:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-bb6546c66-qjl75\", provisioner=\"ebs.csi.aws.com\", reclaimPolicy=\"Delete\", service=\"kube-state-metrics\", storageclass=\"gp2-csi\", volumeBindingMode=\"WaitForFirstConsumer\"}];many-to-many matching not allowed: matching labels must be unique on one side"
...
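
The errors above come from the inner `group_left(provisioner)` join: `on(storageclass)` requires the right-hand side (kube_storageclass_info) to have exactly one series per storageclass, but two kube-state-metrics pods (z78cj and qjl75) each exported a series for storageclass="gp2-csi". A diagnostic sketch (an illustrative query, not part of the shipped rules) to surface the ambiguous match groups:

```promql
# Any storageclass with more than one kube_storageclass_info series
# makes the "on(storageclass) group_left(provisioner)" match group
# ambiguous and triggers "many-to-many matching not allowed".
count by (storageclass) (kube_storageclass_info) > 1
```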
# oc get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   7h2m
gp2-csi         ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   7h2m

# oc get pvc --all-namespaces
No resources found

# oc get pv --all-namespaces
No resources found
*************************************************************************
record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum
expr: sum by(provisioner) (kube_persistentvolumeclaim_resource_requests_storage_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))

# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_resource_requests_storage_bytes' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_storageclass_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kube_storageclass_info",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "instance": "10.128.2.10:8443",
          "job": "kube-state-metrics",
          "namespace": "openshift-monitoring",
          "pod": "kube-state-metrics-bb6546c66-qjl75",
          "provisioner": "ebs.csi.aws.com",
          "reclaimPolicy": "Delete",
          "service": "kube-state-metrics",
          "storageclass": "gp2-csi",
          "volumeBindingMode": "WaitForFirstConsumer"
        },
        "value": [
          1606892763.624,
          "1"
        ]
      },
      {
        "metric": {
          "__name__": "kube_storageclass_info",
          "container": "kube-rbac-proxy-main",
          "endpoint": "https-main",
          "instance": "10.128.2.10:8443",
          "job": "kube-state-metrics",
          "namespace": "openshift-monitoring",
          "pod": "kube-state-metrics-bb6546c66-qjl75",
          "provisioner": "kubernetes.io/aws-ebs",
          "reclaimPolicy": "Delete",
          "service": "kube-state-metrics",
          "storageclass": "gp2",
          "volumeBindingMode": "WaitForFirstConsumer"
        },
        "value": [
          1606892763.624,
          "1"
        ]
      }
    ]
  }
}
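
Note that this later query returns only one kube_storageclass_info series per storageclass, while the error log shows two for gp2-csi, one per kube-state-metrics pod; the duplicates are transient, e.g. while both pods are scraped during a rollout. One hypothetical way to make the rule robust (a sketch only; the actual change landed in cluster-monitoring-operator PR 965) is to collapse the right-hand side to one series per (storageclass, provisioner) before joining:

```promql
# Sketch of a deduplicated variant of the failing recording rule:
# max by (...) drops the per-pod labels (pod, instance, ...), so a second
# kube-state-metrics replica cannot duplicate the match group.
sum by (provisioner) (
  kube_persistentvolumeclaim_resource_requests_storage_bytes
    * on (namespace, persistentvolumeclaim) group_right ()
  (
    kube_persistentvolumeclaim_info
      * on (storageclass) group_left (provisioner)
    max by (storageclass, provisioner) (kube_storageclass_info)
  )
)
```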
*************************************************************************
record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum
expr: sum by(provisioner) (kubelet_volume_stats_used_bytes * on(namespace, persistentvolumeclaim) group_right() (kube_persistentvolumeclaim_info * on(storageclass) group_left(provisioner) kube_storageclass_info))


# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster:kubelet_volume_stats_used_bytes:provisioner:sum' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kubelet_volume_stats_used_bytes' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -g -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=kube_persistentvolumeclaim_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2020-11-30-172451

How reproducible:
frequently

Steps to Reproduce:
1. See the description above: install a cluster with both the gp2 and gp2-csi storage classes and check the prometheus-k8s container logs.

Actual results:
Prometheus repeatedly logs "Evaluating rule failed ... many-to-many matching not allowed" for both recording rules, and querying the recorded metrics returns empty results.

Expected results:
Both recording rules evaluate without errors.

Additional info:

Comment 2 Pawel Krupa 2020-12-04 13:16:46 UTC
*** Bug 1879520 has been marked as a duplicate of this bug. ***

Comment 11 Damien Grisonnet 2021-01-26 16:19:47 UTC
*** Bug 1920569 has been marked as a duplicate of this bug. ***

Comment 14 errata-xmlrpc 2021-02-24 15:37:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

