Bug 2223654 - VMI CPU metrics are counters not gauges
Summary: VMI CPU metrics are counters not gauges
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Metrics
Version: 4.13.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.14.0
Assignee: João Vilaça
QA Contact: Ahmad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-18 12:51 UTC by Shirly Radco
Modified: 2023-11-08 14:06 UTC
CC List: 4 users

Fixed In Version: hco-bundle-registry-container-v4.14.0.rhel9-1744
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:05:58 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub kubevirt/kubevirt pull 10138 (Merged): Change kubevirt_vmi_*_usage_seconds from Gauges to Counters (last updated 2023-08-22 12:47:49 UTC)
- GitHub kubevirt/kubevirt pull 10273 (Merged): [release-1.0] Change kubevirt_vmi_*_usage_seconds from Gauges to Counters (last updated 2023-08-22 12:47:40 UTC)
- GitHub kubevirt/kubevirt pull 10276 (Merged): [release-0.59] Change kubevirt_vmi_*_usage_seconds from Gauges to Counters (last updated 2023-08-22 13:08:13 UTC)
- Red Hat Issue Tracker CNV-31131 (last updated 2023-07-18 12:53:02 UTC)
- Red Hat Product Errata RHSA-2023:6817 (last updated 2023-11-08 14:06:08 UTC)

Description Shirly Radco 2023-07-18 12:51:18 UTC
Description of problem:
The metrics added in https://github.com/kubevirt/kubevirt/pull/8774
- kubevirt_vmi_cpu_system_seconds
- kubevirt_vmi_cpu_user_seconds
- kubevirt_vmi_cpu_usage_seconds

are cumulative CPU-time values, i.e. counters, but they are currently registered as gauges. We need to change their metric type (and naming) to counters and update their documentation accordingly.
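
For consumers, the practical difference is that a counter's raw value is the cumulative CPU time since the VMI started, and per-interval usage has to be derived with rate() or increase(). A purely illustrative query, assuming the renamed *_total counters that the linked pull requests introduce and the standard openshift-monitoring Prometheus pod used in the verification below:

# Illustrative only: average CPU cores (CPU seconds per second) used per VMI
# over the last five minutes. This rate() query is only meaningful once the
# metric is exposed as a counter.
oc exec -n openshift-monitoring prometheus-k8s-0 -c prometheus -- \
  curl -s http://127.0.0.1:9090/api/v1/query \
  --data-urlencode 'query=rate(kubevirt_vmi_cpu_usage_seconds_total[5m])' | jq .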

Version-Release number of selected component (if applicable):
4.13.0

How reproducible:
100%

Steps to Reproduce (a command-level sketch follows the list):
1. Create a VM.
2. Run load on the VM's CPU.
3. Check the CPU metric value.
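
A minimal sketch of these steps, assuming an illustrative VM named fedora-vm and a placeholder virt-handler pod; the port-forward/curl pattern is the same one used in the verification in comment 1:

# 1. Create and start a VM (manifest and name are placeholders).
oc create -f fedora-vm.yaml
virtctl start fedora-vm

# 2. Generate CPU load inside the guest, e.g. from the serial console:
virtctl console fedora-vm
#    (inside the guest) dd if=/dev/zero of=/dev/null &

# 3. Check the CPU metrics exposed by the virt-handler pod on the VMI's node.
oc port-forward -n openshift-cnv pod/<virt-handler-pod> 8443 &
curl --insecure https://localhost:8443/metrics | grep kubevirt_vmi_cpu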

Actual results:
The metrics are registered as Prometheus gauges.

Expected results:
The metrics are registered as Prometheus counters.

Additional info:

Comment 1 Ahmad 2023-09-11 11:58:38 UTC
QA: verified on 4.14

The metrics have been renamed to the following:

kubevirt_vmi_cpu_system_usage_seconds_total
kubevirt_vmi_cpu_usage_seconds_total
kubevirt_vmi_cpu_user_usage_seconds_total


oc port-forward pod/virt-handler-vndv9 -n openshift-cnv 8443 &
curl --insecure https://localhost:8443/metrics

[cloud-user@ocp-psi-executor ~]$ curl --insecure https://localhost:8443/metrics | grep '_usage_seconds_total'
Handling connection for 8443
# HELP kubevirt_vmi_cpu_system_usage_seconds_total Total CPU time spent in system mode.
# TYPE kubevirt_vmi_cpu_system_usage_seconds_total counter
kubevirt_vmi_cpu_system_usage_seconds_total{kubernetes_vmi_label_kubevirt_io_domain="fedora-artificial-vulture",kubernetes_vmi_label_kubevirt_io_nodeName="c01-ahmad414l-d6rf2-worker-0-8h89t",kubernetes_vmi_label_kubevirt_io_size="small",name="fedora-artificial-vulture",namespace="default",node="c01-ahmad414l-d6rf2-worker-0-8h89t"} 24.43
# HELP kubevirt_vmi_cpu_usage_seconds_total Total CPU time spent in all modes (sum of both vcpu and hypervisor usage).
# TYPE kubevirt_vmi_cpu_usage_seconds_total counter
kubevirt_vmi_cpu_usage_seconds_total{kubernetes_vmi_label_kubevirt_io_domain="fedora-artificial-vulture",kubernetes_vmi_label_kubevirt_io_nodeName="c01-ahmad414l-d6rf2-worker-0-8h89t",kubernetes_vmi_label_kubevirt_io_size="small",name="fedora-artificial-vulture",namespace="default",node="c01-ahmad414l-d6rf2-worker-0-8h89t"} 160.81
# HELP kubevirt_vmi_cpu_user_usage_seconds_total Total CPU time spent in user mode.
# TYPE kubevirt_vmi_cpu_user_usage_seconds_total counter
kubevirt_vmi_cpu_user_usage_seconds_total{kubernetes_vmi_label_kubevirt_io_domain="fedora-artificial-vulture",kubernetes_vmi_label_kubevirt_io_nodeName="c01-ahmad414l-d6rf2-worker-0-8h89t",kubernetes_vmi_label_kubevirt_io_size="small",name="fedora-artificial-vulture",namespace="default",node="c01-ahmad414l-d6rf2-worker-0-8h89t"} 136.38


Prometheus query outputs:


[cloud-user@ocp-psi-executor ~]$ oc exec -n openshift-monitoring prometheus-k8s-0 -c prometheus -- curl -s http://127.0.0.1:9090/api/v1/query?query=kubevirt_vmi_cpu_system_usage_seconds_total | jq .
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kubevirt_vmi_cpu_system_usage_seconds_total",
          "container": "virt-handler",
          "endpoint": "metrics",
          "instance": "10.129.2.78:8443",
          "job": "kubevirt-prometheus-metrics",
          "kubernetes_vmi_label_kubevirt_io_domain": "fedora-artificial-vulture",
          "kubernetes_vmi_label_kubevirt_io_nodeName": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "kubernetes_vmi_label_kubevirt_io_size": "small",
          "name": "fedora-artificial-vulture",
          "namespace": "default",
          "node": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "pod": "virt-handler-vndv9",
          "service": "kubevirt-prometheus-metrics"
        },
        "value": [
          1694433421.024,
          "24.74"
        ]
      }
    ]
  }
}



[cloud-user@ocp-psi-executor ~]$ oc exec -n openshift-monitoring prometheus-k8s-0 -c prometheus -- curl -s http://127.0.0.1:9090/api/v1/query?query=kubevirt_vmi_cpu_usage_seconds_total | jq .
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kubevirt_vmi_cpu_usage_seconds_total",
          "container": "virt-handler",
          "endpoint": "metrics",
          "instance": "10.129.2.78:8443",
          "job": "kubevirt-prometheus-metrics",
          "kubernetes_vmi_label_kubevirt_io_domain": "fedora-artificial-vulture",
          "kubernetes_vmi_label_kubevirt_io_nodeName": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "kubernetes_vmi_label_kubevirt_io_size": "small",
          "name": "fedora-artificial-vulture",
          "namespace": "default",
          "node": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "pod": "virt-handler-vndv9",
          "service": "kubevirt-prometheus-metrics"
        },
        "value": [
          1694433449.470,
          "163.06"
        ]
      }
    ]
  }
}
[cloud-user@ocp-psi-executor ~]$ oc exec -n openshift-monitoring prometheus-k8s-0 -c prometheus -- curl -s http://127.0.0.1:9090/api/v1/query?query=kubevirt_vmi_cpu_user_usage_seconds_total | jq .
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "kubevirt_vmi_cpu_user_usage_seconds_total",
          "container": "virt-handler",
          "endpoint": "metrics",
          "instance": "10.129.2.78:8443",
          "job": "kubevirt-prometheus-metrics",
          "kubernetes_vmi_label_kubevirt_io_domain": "fedora-artificial-vulture",
          "kubernetes_vmi_label_kubevirt_io_nodeName": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "kubernetes_vmi_label_kubevirt_io_size": "small",
          "name": "fedora-artificial-vulture",
          "namespace": "default",
          "node": "c01-ahmad414l-d6rf2-worker-0-8h89t",
          "pod": "virt-handler-vndv9",
          "service": "kubevirt-prometheus-metrics"
        },
        "value": [
          1694433475.577,
          "138.43"
        ]
      }
    ]
  }
}
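
As an additional, hypothetical sanity check (not part of the verification above), counter semantics mean a VMI's value should only increase between scrapes while the VMI is running. With the port-forward from the earlier commands still active:

# Sample the same series twice; the second value should be >= the first.
curl --insecure -s https://localhost:8443/metrics | grep '^kubevirt_vmi_cpu_usage_seconds_total'
sleep 30
curl --insecure -s https://localhost:8443/metrics | grep '^kubevirt_vmi_cpu_usage_seconds_total'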

Comment 3 errata-xmlrpc 2023-11-08 14:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817

