Bug 1911465 - IOPS display wrong unit
Summary: IOPS display wrong unit
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Storage Plugin
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.7.0
Assignee: Ankush Behl
QA Contact: Avi Liani
Depends On:
Reported: 2020-12-29 15:26 UTC by Avi Liani
Modified: 2021-02-24 15:49 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2021-02-24 15:49:11 UTC
Target Upstream Version:

Attachments (Terms of Use)
UI IOPS output (143.58 KB, image/png)
2020-12-29 15:31 UTC, Avi Liani

System ID Private Priority Status Summary Last Updated
Github openshift console pull 7784 0 None closed Bug 1911465: Fix humanize IOPS function 2021-02-18 08:32:23 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:49:25 UTC

Description Avi Liani 2020-12-29 15:26:44 UTC
Description of problem (please be as detailed as possible and provide log snippets):

While running I/O on an OCS volume, the UI shows the IOPS number without units.

Version of all relevant components (if applicable):

OCP : 4.6.9
OCS : 4.7.0-222.ci

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy OCS.
2. Create a PVC and attach it to a pod.
3. Run I/O on the PVC.
4. Monitor via the UI, and compare with the output of "ceph status" from the
   rook-ceph-tools pod.

Actual results:

The CLI output shows 3.5k IOPS while the UI shows 3.5 IOPS.

Expected results:

The UI should show the same value as the CLI.

Additional info:
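For illustration, the fix in the linked PR ("Fix humanize IOPS function") amounts to appending the correct SI suffix when scaling the raw op/s value. The sketch below is a hypothetical stand-alone version of such a humanizer; the names (humanizeIOPS, SI_SUFFIXES) are illustrative and are not the actual identifiers from openshift/console.

```typescript
// Illustrative SI-suffix humanizer for op/s values. A raw reading of
// 3580 op/s should render as "3.58k IOPS", not "3.58 IOPS".
const SI_SUFFIXES = ['', 'k', 'M', 'G', 'T'];

function humanizeIOPS(value: number): string {
  let scaled = value;
  let i = 0;
  // Divide by 1000 until the value fits under the next SI step,
  // keeping track of which suffix applies.
  while (scaled >= 1000 && i < SI_SUFFIXES.length - 1) {
    scaled /= 1000;
    i += 1;
  }
  // Round to two decimals and append the suffix plus the unit.
  const rounded = Math.round(scaled * 100) / 100;
  return `${rounded}${SI_SUFFIXES[i]} IOPS`;
}
```

The reported symptom matches dropping the suffix step: the value was scaled (or left raw) but rendered without the "k"/"M" marker, so 3.58k op/s appeared as 3.5 IOPS in the dashboard.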

Comment 2 Avi Liani 2020-12-29 15:31:16 UTC
Created attachment 1742925 [details]
UI IOPS output

Example of the UI IOPS report, while in the CLI I see:

# ceph status
    id:     34045fdb-26bd-408e-879d-da4edc7a3ddf
    health: HEALTH_OK
    mon: 3 daemons, quorum a,b,c (age 2h)
    mgr: a(active, since 2h)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
    rgw: 1 daemon active (ocs.storagecluster.cephobjectstore.a)
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
    pools:   11 pools, 208 pgs
    objects: 97.22k objects, 375 GiB
    usage:   1.1 TiB used, 3.3 TiB / 4.4 TiB avail
    pgs:     208 active+clean
    client:   1.3 KiB/s rd, 223 MiB/s wr, 2 op/s rd, 3.58k op/s wr

Comment 4 Avi Liani 2021-01-13 07:49:50 UTC
Now it displays the IOPS unit as well.

Verified on builds:

OCP : 4.7.0-0.nightly-2021-01-12-203716
OCS : 4.7.0-230.ci

# oc get csv,clusterversion -n openshift-storage
NAME                                                                    DISPLAY                       VERSION        REPLACES   PHASE
clusterserviceversion.operators.coreos.com/ocs-operator.v4.7.0-230.ci   OpenShift Container Storage   4.7.0-230.ci              Succeeded

NAME                                         VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
clusterversion.config.openshift.io/version   4.7.0-0.nightly-2021-01-12-203716   True        False         54m     Cluster version is 4.7.0-0.nightly-2021-01-12-203716

Comment 7 errata-xmlrpc 2021-02-24 15:49:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

