Bug 2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
Summary: KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.10.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.11.0
Assignee: Igor Bezukh
QA Contact: Denys Shchedrivyi
URL:
Whiteboard:
Duplicates: 2029357
Depends On:
Blocks:
 
Reported: 2022-01-25 15:18 UTC by Kedar Bidarkar
Modified: 2023-11-13 08:16 UTC
CC List: 4 users

Fixed In Version: virt-operator-container-v4.11.0-40
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-14 19:28:30 UTC
Target Upstream Version:
Embargoed:




Links
Github kubevirt/kubevirt pull 7052 (open): Fix kubevirt_vm_container_free_memory_bytes (last updated 2022-01-26 14:27:29 UTC)
Github kubevirt/kubevirt pull 7146 (open): [release-0.49] Fix kubevirt_vm_container_free_memory_bytes (last updated 2022-01-27 07:55:15 UTC)
Red Hat Issue Tracker CNV-16000 (last updated 2023-11-13 08:16:05 UTC)
Red Hat Product Errata RHSA-2022:6526 (last updated 2022-09-14 19:28:56 UTC)

Description Kedar Bidarkar 2022-01-25 15:18:26 UTC
This bug was initially created as a copy of Bug #2033077

I am copying this bug because: 



Description of problem:

Received alerts from the two prometheus pods:
openshift-monitoring/prometheus-k8s-0 has failed to evaluate 10 rules in the last 5m.

openshift-monitoring/prometheus-k8s-1 has failed to evaluate 10 rules in the last 5m.



Version-Release number of selected component (if applicable):
OpenShift 4.9.10
CNV 4.9.1

How reproducible:
Unsure, but the error occurs continually. This is on an upgraded cluster (4.8 -> 4.9). Not sure if it can be reproduced on a fresh cluster.

Steps to Reproduce:
1. Have a cluster running the latest CNV and OpenShift v4.8.22
2. Upgrade the cluster to 4.9.10


Actual results:
Cluster begins firing alerts about failing to evaluate a Prometheus rule.

Expected results:
Prometheus happily evaluates all the CNV alerting rules

Additional info:
The alert that is specifically failing is KubeVirtComponentExceedsRequestedMemory.

The error is:
found duplicate series for the match group {pod="bridge-marker-dv592"} on the right hand-side of the operation: [{__name__="container_memory_usage_bytes", container="bridge-marker", endpoint="https-metrics", id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68aaef0e_d95a_47d0_a898_d45d4d613f58.slice/crio-bca86b7bfe14679147f29a0a806f04ee9f8ceb6008f5b6bd58e9be4b2f5e35e8.scope", image="registry.redhat.io/container-native-virtualization/bridge-marker@sha256:83d6f2fbf4118162aed2d2b0153b4ad39cfe3b97a3ef06e9c4fbb5e2a3aae915", instance="10.42.0.102:10250", job="kubelet", metrics_path="/metrics/cadvisor", name="k8s_bridge-marker_bridge-marker-dv592_openshift-cnv_68aaef0e-d95a-47d0-a898-d45d4d613f58_0", namespace="openshift-cnv", node="node1.cloud.xana.du", pod="bridge-marker-dv592", prometheus="openshift-monitoring/k8s", service="kubelet"}, {__name__="container_memory_usage_bytes", container="POD", endpoint="https-metrics", id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68aaef0e_d95a_47d0_a898_d45d4d613f58.slice/crio-e346225c7c5270220cb6b2cce4de9f528c63603b2ba2c87be1e5642f0ac57b0f.scope", instance="10.42.0.102:10250", job="kubelet", metrics_path="/metrics/cadvisor", name="k8s_POD_bridge-marker-dv592_openshift-cnv_68aaef0e-d95a-47d0-a898-d45d4d613f58_0", namespace="openshift-cnv", node="node1.cloud.xana.du", pod="bridge-marker-dv592", prometheus="openshift-monitoring/k8s", service="kubelet"}];many-to-many matching not allowed: matching labels must be unique on one side
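
For reference, the duplicate series behind this error can be listed straight from the Prometheus console with a diagnostic query along these lines (a sketch for inspection only, not part of any shipped rule):

    # pods in openshift-cnv with more than one container_memory_usage_bytes series,
    # i.e. the pods that make an on(pod) match many-to-many
    count by (pod) (container_memory_usage_bytes{namespace="openshift-cnv"}) > 1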

The contents of the rule:
Expression

    ((kube_pod_container_resource_requests{container=~"virt-controller|virt-api|virt-handler|virt-operator",namespace="openshift-cnv",resource="memory"}) - on(pod) group_left(node) container_memory_usage_bytes{namespace="openshift-cnv"}) < 0

Testing that rule in the alerting dashboard also returns the error.
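
The root cause is visible in the error above: "on(pod)" requires the right-hand side of the subtraction to have exactly one series per pod, but cAdvisor exports container_memory_usage_bytes once per container in the pod, including the pause container (container="POD") and a pod-level cgroup series (container=""), so the match is many-to-many. As a sketch only (not the shipped fix), the right-hand side could be collapsed to one series per pod by excluding those extra series and aggregating:

    ((kube_pod_container_resource_requests{container=~"virt-controller|virt-api|virt-handler|virt-operator",namespace="openshift-cnv",resource="memory"})
      - on(pod) group_left(node)
        # drop the pause (container="POD") and pod-level (container="") series,
        # then collapse to a single series per pod and node
        sum by (pod, node) (container_memory_usage_bytes{namespace="openshift-cnv",container!="",container!="POD"})) < 0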

NOTE: the similarly named KubeVirtComponentExceedsRequestedCPU does not appear to be failing, and is slightly different:

((kube_pod_container_resource_requests{container=~"virt-controller|virt-api|virt-handler|virt-operator",namespace="openshift-cnv",resource="cpu"}) - on(pod) group_left(node) node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{namespace="openshift-cnv"}) < 0

Noting the difference after 'group_left(node)...', I tried replacing `container_memory_usage_bytes{namespace="openshift-cnv"}` with `node_namespace_pod_container:container_memory_working_set_bytes:sum_rate{namespace="openshift-cnv"}` in the rule; testing in the alerting console then returns no error. So

((kube_pod_container_resource_requests{container=~"virt-controller|virt-api|virt-handler|virt-operator",namespace="openshift-cnv",resource="memory"}) - on(pod) group_left(node) node_namespace_pod_container:container_memory_working_set_bytes:sum_rate{namespace="openshift-cnv"}) < 0

seems to work as expected.
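
One caveat about the "no error" result (Igor makes the same point in comment 5 below): a query against a record rule that does not exist on the cluster also produces no error, just an empty vector. A quick sanity check in the console, assuming the record-rule name as written above:

    # an empty result means the record rule is not defined on this cluster,
    # so the rewritten expression above would be vacuously quiet
    count(node_namespace_pod_container:container_memory_working_set_bytes:sum_rate{namespace="openshift-cnv"})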

Comment 1 Denys Shchedrivyi 2022-01-31 20:10:38 UTC
The issue is still present on a fresh installation with virt-operator-container-v4.10.0-203 and hco-bundle-v4.10.0-636:

When I open the KubeVirtComponentExceedsRequestedMemory alert in the UI, I see this:

An error occurred
found duplicate series for the match group {pod="bridge-marker-fj4pl"} on the right hand-side of the operation: [{__name__="container_memory_usage_bytes", container="bridge-marker", endpoint="https-metrics", id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333c8eb0_22dd_4d11_bc21_69b94b085d28.slice/crio-fe078919ccac0a41a41cd08f097f623b594a4bbb5b5c08a251d02de88b82c7b0.scope", image="registry.redhat.io/container-native-virtualization/bridge-marker@sha256:23d3e1b923ed0196997c6c2f4206d514b83a78b2771075d9afcc6473c44c1c97", instance="192.168.1.183:10250", job="kubelet", metrics_path="/metrics/cadvisor", name="k8s_bridge-marker_bridge-marker-fj4pl_openshift-cnv_333c8eb0-22dd-4d11-bc21-69b94b085d28_1", namespace="openshift-cnv", node="virt-den-410-88lfr-worker-0-m5d4p", pod="bridge-marker-fj4pl", prometheus="openshift-monitoring/k8s", service="kubelet"}, {__name__="container_memory_usage_bytes", container="POD", endpoint="https-metrics", id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333c8eb0_22dd_4d11_bc21_69b94b085d28.slice/crio-dd8d1aff4453865c08b1a99cc6e245e0aef2d31c21053e9d1b38138f26493687.scope", instance="192.168.1.183:10250", job="kubelet", metrics_path="/metrics/cadvisor", name="k8s_POD_bridge-marker-fj4pl_openshift-cnv_333c8eb0-22dd-4d11-bc21-69b94b085d28_0", namespace="openshift-cnv", node="virt-den-410-88lfr-worker-0-m5d4p", pod="bridge-marker-fj4pl", prometheus="openshift-monitoring/k8s", service="kubelet"}];many-to-many matching not allowed: matching labels must be unique on one side


Moving this bz back to "Assigned"

Comment 5 Igor Bezukh 2022-02-01 12:38:19 UTC
My apologies, the PR indeed isn't related to the bug scope.

However, the suggested query cannot be used, since it evaluates a record rule that is defined by openshift-monitoring.

Thus, it may work downstream on OCP, but it is useless upstream.

Moreover, I can tell that KubeVirtComponentExceedsRequestedCPU has a faulty query as well, again because it uses downstream record rules.

I also think we need to revisit the logic of these alerts: the queries return a vector rather than a scalar value, so I am not sure how the "< 0" could be evaluated.

Even if you don't see errors when evaluating the alert query, that doesn't mean the query is correct, since it can return an empty vector, and then the "< 0" expression may appear to work.
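
For what it's worth, a PromQL comparison between an instant vector and a scalar acts as a filter rather than producing a boolean: "expr < 0" keeps only the series whose value is negative, and an alert on it fires exactly when that filtered vector is non-empty. A minimal illustration with a hypothetical metric:

    # given demo_metric{pod="x"} = -5 and demo_metric{pod="y"} = 3,
    # this returns only demo_metric{pod="x"} = -5; an alert on this
    # expression fires iff the filtered result is non-empty
    demo_metric < 0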

Comment 6 Igor Bezukh 2022-02-01 12:43:48 UTC
Also, it looks like with this PR https://github.com/openshift/cluster-monitoring-operator/pull/1214, openshift-monitoring has deprecated the node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate record rule.

So KubeVirtComponentExceedsRequestedMemory seems unreliable even on downstream.

Comment 7 Igor Bezukh 2022-02-01 12:44:44 UTC
I meant to say that KubeVirtComponentExceedsRequestedCPU is unreliable

Comment 9 oshoval 2022-03-17 13:08:14 UTC
*** Bug 2029357 has been marked as a duplicate of this bug. ***

Comment 10 Denys Shchedrivyi 2022-04-25 14:51:47 UTC
Verified on CNV-v4.11.0-244

The issue is fixed: no messages about evaluation failures.

Comment 13 errata-xmlrpc 2022-09-14 19:28:30 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.11.0 Images security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6526

