Description of problem:
We use Red Hat OpenShift version 3.11.16 and would like to monitor PersistentVolumeClaim utilization. The KubePersistentVolumeFullInFourDays and KubePersistentVolumeUsageCritical alerts are configured in Prometheus, but they do not appear to work, because the underlying metrics return no data:

1. alert: KubePersistentVolumeUsageCritical
   expr: 100 * kubelet_volume_stats_available_bytes{job="kubelet",namespace=~"(openshift.*|kube.*|default|logging)"} / kubelet_volume_stats_capacity_bytes{job="kubelet",namespace=~"(openshift.*|kube.*|default|logging)"} < 3

2. alert: KubePersistentVolumeFullInFourDays
   expr: kubelet_volume_stats_available_bytes{job="kubelet",namespace=~"(openshift.*|kube.*|default|logging)"} and predict_linear(kubelet_volume_stats_available_bytes{job="kubelet",namespace=~"(openshift.*|kube.*|default|logging)"}[6h], 4 * 24 * 3600) < 0

How can we expose kubelet_volume_stats_available_bytes and kubelet_volume_stats_capacity_bytes to Prometheus?

Version-Release number of selected component (if applicable):
Red Hat OpenShift version 3.11.16
Backend storage for persistent volumes: NFS exports in IBM Spectrum Scale

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
The following commands reveal no metrics on any node:

export TOKEN=`oc whoami -t`
for node in $nodes; do
  echo "========= $node =========="
  curl -sk -H "Authorization: Bearer ${TOKEN}" https://${node}:10250/metrics | grep -E "kubelet_volume_stats_available_bytes|kubelet_volume_stats_capacity_bytes" | grep -v "# "
done

The cluster uses the following StorageClass:

$ oc get sc
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   104d

Expected results:
We have not come across any requirement on the type of storage backend so far (static or dynamic provisioning, or support only for Gluster, Cinder, etc.), so if we have understood correctly, any bound PVC should expose these metrics.
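For reference, a rough Python sketch of what the two alert expressions compute, using made-up sample byte values (the real evaluation of course happens inside Prometheus against the kubelet metrics):

```python
# Sketch of the two alert conditions. All numbers below are invented
# sample values, not data from the affected cluster.

def usage_critical(available_bytes, capacity_bytes):
    """KubePersistentVolumeUsageCritical: less than 3% of the volume is free."""
    return 100 * available_bytes / capacity_bytes < 3

def full_in_four_days(samples):
    """KubePersistentVolumeFullInFourDays: a least-squares line fitted to
    (timestamp, available_bytes) samples predicts < 0 bytes free 4 days out,
    mimicking predict_linear(...[6h], 4 * 24 * 3600)."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    intercept = mean_v - slope * mean_t
    horizon = samples[-1][0] + 4 * 24 * 3600  # 4 days past the last sample
    return slope * horizon + intercept < 0

# 10 GiB volume with only 200 MiB free -> below the 3% threshold
print(usage_critical(200 * 2**20, 10 * 2**30))  # True

# Losing 1 GiB/hour from 10 GiB free -> full well within 4 days
hourly = [(h * 3600, (10 - h) * 2**30) for h in range(7)]
print(full_in_four_days(hourly))  # True
```

Both conditions depend entirely on kubelet_volume_stats_available_bytes (and _capacity_bytes), which is why neither alert can ever fire while the kubelet exposes no volume stats.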
Additional info:
Reference: https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/metrics/collectors/volume_stats.go#L81-L119

These details may have limited scope at the Prometheus level; we need to check why the metrics are not being exposed by the kubelet in the first place.
Hi Chance, Can you please let us know the status of this case? Regards, Deepak
I have not been able to reproduce this issue. Junqi, can you verify this bug? Specifically we want to create a PersistentVolume on a 3.11 cluster and verify that we can query Prometheus and see a metric named "kubelet_volume_stats_available_bytes". -Lucas
*** Bug 1729438 has been marked as a duplicate of this bug. ***
https://github.com/openshift/origin/pull/23474
Tested with 4.2.0-0.nightly-2019-08-06-195545, I can get those metrics for NFS:

kubelet_volume_stats_capacity_bytes
Element  Value
kubelet_volume_stats_capacity_bytes{endpoint="https-metrics",instance="10.0.32.4:10250",job="kubelet",namespace="i7h18",node="qe-lxia-0806-195545-8g87r-worker-centralus2-d5dvg",persistentvolumeclaim="nfsc",service="kubelet"}  136352628736
kubelet_volume_stats_capacity_bytes{endpoint="https-metrics",instance="10.0.32.5:10250",job="kubelet",namespace="i7h18",node="qe-lxia-0806-195545-8g87r-worker-centralus1-kr9d7",persistentvolumeclaim="nfsc",service="kubelet"}  136352628736

kubelet_volume_stats_available_bytes
Element  Value
kubelet_volume_stats_available_bytes{endpoint="https-metrics",instance="10.0.32.4:10250",job="kubelet",namespace="i7h18",node="qe-lxia-0806-195545-8g87r-worker-centralus2-d5dvg",persistentvolumeclaim="nfsc",service="kubelet"}  128094044160
kubelet_volume_stats_available_bytes{endpoint="https-metrics",instance="10.0.32.5:10250",job="kubelet",namespace="i7h18",node="qe-lxia-0806-195545-8g87r-worker-centralus1-kr9d7",persistentvolumeclaim="nfsc",service="kubelet"}  128253427712
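As a quick sanity check on the values reported above, neither PVC is anywhere near the 3%-free threshold of KubePersistentVolumeUsageCritical (roughly 94% of each volume is still available), so it is expected that the alert stays silent here:

```python
# Percent-free calculation using the capacity/available values from the
# verification run above; "critical" mirrors the < 3% alert condition.
capacity = 136352628736
for available in (128094044160, 128253427712):
    pct_free = 100 * available / capacity
    print(f"{pct_free:.1f}% free, critical: {pct_free < 3}")
```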
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922