Bug 1910303

Summary: Confusing node's filesystem utilization metric in UI console
Product: OpenShift Container Platform
Reporter: Filip Brychta <fbrychta>
Component: Management Console
Assignee: Jakub Hadvig <jhadvig>
Status: CLOSED DUPLICATE
QA Contact: Yadan Pei <yapei>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.6.z
CC: aballant, aos-bugs, jokerman, nmukherj, yzamir
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-01-06 06:59:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Filip Brychta 2020-12-23 11:22:25 UTC
Description of problem:
The UI console shows a node's filesystem usage as larger than the node's disk actually is. The VMs running in OpenStack use a flavor with a 45GB disk, but the console shows the node's filesystem utilization as e.g.:
61.23 GiB / 133.9 GiB

Version-Release number of selected component (if applicable):
4.6.8

How reproducible:
Always

Steps to Reproduce:
1. Install a cluster on OpenStack using a flavor with 45GB disk
2. Open console and navigate to Compute->Nodes

Actual results:
When adding some apps, the Filesystem metric for some nodes shows e.g. 60.56 GiB / 133.9 GiB, which is very confusing as the root disk of that VM is only 45GB and there are no volumes attached.
Also, running oc describe node shows that ephemeral-storage is 46633964Ki:
Capacity:
  attachable-volumes-cinder:  256
  cpu:                        4
  ephemeral-storage:          46633964Ki
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     16419132Ki
  pods:                       250
Allocatable:
  attachable-volumes-cinder:  256
  cpu:                        3500m
  ephemeral-storage:          41904119328
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     15268156Ki
  pods:                       250
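For reference, the kubelet-reported values above can be converted to GiB to confirm they match the 45GB flavor disk. A minimal sketch of the conversion (Capacity is reported in KiB with the Ki suffix; Allocatable here is in plain bytes):

```python
# Convert the kubelet-reported ephemeral-storage values to GiB.
KIB = 1024
GIB = 1024 ** 3

capacity_kib = 46633964          # "ephemeral-storage: 46633964Ki" (Capacity)
allocatable_bytes = 41904119328  # Allocatable value, reported in plain bytes

capacity_gib = capacity_kib * KIB / GIB
allocatable_gib = allocatable_bytes / GIB

print(f"capacity:    {capacity_gib:.2f} GiB")     # ~44.47 GiB, i.e. the 45GB flavor disk
print(f"allocatable: {allocatable_gib:.2f} GiB")  # ~39.03 GiB after system reservations
```

So the node itself reports a filesystem of roughly 44.5 GiB, consistent with the 45GB flavor and well below the 133.9 GiB shown in the console.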


Expected results:
I would expect the max Filesystem value for the given node to be 45GB, not 133.9 GiB, and the actual usage can't be bigger than 45GB.

Additional info:
If this is expected, is there any documentation describing what is actually shown and where the number 60.56 GiB / 133.9 GiB comes from? Without documentation it's very confusing.
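One plausible explanation (an assumption for illustration, not confirmed anywhere in this report) is that the console's filesystem query sums sizes across all mount points, so the same ~44.6 GiB root device backing several mounts (e.g. container/overlay mounts) gets counted roughly three times. A rough sanity check of that assumption:

```python
# Hypothetical illustration only: if the single ~44.6 GiB root filesystem
# is reported once per mount point and three overlapping mounts are summed,
# the total lands near the 133.9 GiB shown in the console.
root_fs_gib = 44.63    # approximate size of the one real root filesystem (assumed)
mounts_counted = 3     # assumed number of overlapping mounts included in the sum

inflated_total = root_fs_gib * mounts_counted
print(f"{inflated_total:.1f} GiB")  # ~133.9 GiB, matching the console value
```

This would also explain why utilization can exceed the 45GB disk: the usage side of the ratio would be inflated by the same double counting.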

Comment 1 Yaacov Zamir 2021-01-06 06:59:35 UTC

*** This bug has been marked as a duplicate of bug 1893601 ***