Bug 1910303 - Confusing node's filesystem utilization metric in UI console
Keywords:
Status: CLOSED DUPLICATE of bug 1893601
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.6.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Jakub Hadvig
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-23 11:22 UTC by Filip Brychta
Modified: 2021-01-06 06:59 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-06 06:59:35 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Filip Brychta 2020-12-23 11:22:25 UTC
Description of problem:
The UI console shows a node's filesystem usage as larger than the node's disk actually is. The VMs running in OpenStack use a flavor with a 45 GB disk, but the console shows the node's filesystem utilization as e.g.:
61.23 GiB / 133.9 GiB

Version-Release number of selected component (if applicable):
4.6.8

How reproducible:
Always

Steps to Reproduce:
1. Install a cluster on OpenStack using a flavor with a 45 GB disk
2. Open the console and navigate to Compute -> Nodes (an optional CLI cross-check is sketched below)
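
For an optional cross-check of what the kubelet itself reports for the node (a sketch; <node-name> is a placeholder):

oc get node <node-name> -o jsonpath='{.status.capacity.ephemeral-storage}'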

Actual results:
When adding some apps, the Filesystem metric of some nodes shows e.g. 60.56 GiB / 133.9 GiB, which is very confusing as the root disk of that VM is only 45 GB and there are no volumes attached.
Also, running oc describe node shows that the ephemeral-storage capacity is 46633964Ki:
Capacity:
  attachable-volumes-cinder:  256
  cpu:                        4
  ephemeral-storage:          46633964Ki
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     16419132Ki
  pods:                       250
Allocatable:
  attachable-volumes-cinder:  256
  cpu:                        3500m
  ephemeral-storage:          41904119328
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     15268156Ki
  pods:                       250
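
For reference, converting the reported capacity (Ki = kibibytes in Kubernetes resource quantities) confirms it matches the flavor's disk:

46633964 Ki * 1024 = 47,753,179,136 bytes ≈ 44.47 GiB (≈ 47.75 GB)

So the kubelet's view is consistent with the 45 GB flavor; only the console's 133.9 GiB figure is out of line.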


Expected results:
I would expect the max Filesystem value for a given node to be 45 GB, not 133.9 GiB, and that the actual usage can't exceed 45 GB.

Additional info:
If this is expected, is there any documentation describing what is actually shown and where the number 60.56 GiB / 133.9 GiB comes from? Without documentation it's very confusing.
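
One possible explanation (an assumption, not confirmed anywhere in this report): if the console's Prometheus query sums node_exporter's node_filesystem_size_bytes across every mounted filesystem rather than just the root device, the same disk (plus memory-backed tmpfs mounts) gets counted several times and the total is inflated. Illustrative PromQL only, not necessarily the console's actual query:

sum by (instance) (node_filesystem_size_bytes)                  # counts /, /boot, overlay, tmpfs, ... together
sum by (instance) (node_filesystem_size_bytes{mountpoint="/"})  # root filesystem only, ~45 GB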

Comment 1 Yaacov Zamir 2021-01-06 06:59:35 UTC

*** This bug has been marked as a duplicate of bug 1893601 ***

