Bug 1566495 - CNS Prometheus metrics wrong
Summary: CNS Prometheus metrics wrong
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Oved Ourfali
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-12 12:31 UTC by Mangirdas Judeikis
Modified: 2018-04-12 16:00 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-12 15:42:41 UTC
Target Upstream Version:
Embargoed:



Description Mangirdas Judeikis 2018-04-12 12:31:00 UTC
1. Provision the registry PV with 30G of storage.

2. Get the metric: kubelet_volume_stats_capacity_bytes{persistentvolumeclaim="registry-claim"}
3. It shows 30G, as expected.

4. Delete the registry PVC and create a new PVC with 10G of storage.
5. Check the PVC:
[root@console-REPL summit]# oc get pvc -n default
NAME             STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
registry-claim   Bound     registry-volume   10Gi       RWX                           53m
[root@console-REPL summit]# oc get pvc -n default
NAME             STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
registry-claim   Bound     registry-volume   10Gi       RWX                           54m
[root@console-REPL summit]# 
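
As an extra sanity check (my own addition, not part of the original report), the capacity recorded on the PV object can be compared with what the PVC reports; this assumes the PV is named registry-volume, as in the listing above:

# Show the capacity declared on the PV object (expected to match the 10Gi shown by the PVC)
oc get pv registry-volume -o jsonpath='{.spec.capacity.storage}'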

6. Check the metric again:
kubelet_volume_stats_capacity_bytes{persistentvolumeclaim="registry-claim"}

7. It still shows 30G:
Element	Value
kubelet_volume_stats_capacity_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",glusterfs="storage-host",instance="infra1.example.com",job="kubernetes-nodes",kubernetes_io_hostname="infra1.example.com",logging_infra_fluentd="true",namespace="default",persistentvolumeclaim="registry-claim",prometheus="true",region="infra",zone="az1"}	32192331776
kubelet_volume_stats_capacity_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",glusterfs="storage-host",instance="infra2.example.com",job="kubernetes-nodes",kubernetes_io_hostname="infra2.example.com",logging_infra_fluentd="true",namespace="default",persistentvolumeclaim="registry-claim",prometheus="true",region="infra",zone="az2"}	32192331776
kubelet_volume_stats_capacity_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",glusterfs="storage-host",instance="infra3.example.com",job="kubernetes-nodes",kubernetes_io_hostname="infra3.example.com",logging_infra_fluentd="true",namespace="default",persistentvolumeclaim="registry-claim",prometheus="true",region="infra",zone="az3"}	32192331776
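
For reference (my own conversion, not from the original report), the raw value corresponds to the old 30G volume rather than the new 10Gi claim:

# 32192331776 bytes is roughly 30 GiB
awk 'BEGIN { printf "%.2f GiB\n", 32192331776 / 2^30 }'
# -> 29.98 GiB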


8. Restart the node service on all nodes:
ansible all -m shell -a "systemctl restart atomic-openshift-node"

9. Same result:
kubelet_volume_stats_capacity_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",glusterfs="storage-host",instance="infra2.example.com",job="kubernetes-nodes",kubernetes_io_hostname="infra2.example.com",logging_infra_fluentd="true",namespace="default",persistentvolumeclaim="registry-claim",prometheus="true",region="infra",zone="az2"}	32192331776
kubelet_volume_stats_capacity_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",glusterfs="storage-host",instance="infra3.example.com",job="kubernetes-nodes",kubernetes_io_hostname="infra3.example.com",logging_infra_fluentd="true",namespace="default",persistentvolumeclaim="registry-claim",prometheus="true",region="infra",zone="az3"}	32192331776
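
The kubelet derives these metrics from filesystem statistics of the mounted volume, so restarting the node does not change them as long as the same 30G volume stays mounted. A way to see the same figure from inside the registry pod (my own sketch; the pod name below is a placeholder, and /registry is assumed to be the mount path):

# Hypothetical check from inside the registry pod
oc -n default rsh <docker-registry-pod> df -B1 /registry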

Comment 1 Mangirdas Judeikis 2018-04-12 15:42:41 UTC
So, when recreating (done via Ansible), the underlying PV in Gluster was not deleted. So:

[root@console-repl ~]# oc get pvc
NAME             STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
registry-claim   Bound     registry-volume   10Gi       RWX                           4h


sh-4.2$ df -h /registry/
Filesystem                              Size  Used Avail Use% Mounted on
192.168.0.21:glusterfs-registry-volume   30G   30G     0 100% /registry


So I think this is an Ansible bug (it does not clean up the storage), not a kubelet one.
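
One possible way to confirm that the old backing volume is still being served (my own suggestion, not from the original report; the namespace and pod names below are placeholders) is to ask Gluster directly about the volume seen in the df output:

# Hypothetical check from one of the CNS Gluster pods
oc rsh -n <glusterfs-namespace> <glusterfs-pod> gluster volume info glusterfs-registry-volume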

