Bug 2359188 - QoS values configured via CLI in gigabytes are displayed in bytes on the Dashboard, which reduces clarity and makes the information less user-friendly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.1z3
Assignee: naman munet
QA Contact: Manisha Saini
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-04-11 21:59 UTC by Manisha Saini
Modified: 2025-09-30 09:22 UTC (History)
CC List: 9 users

Fixed In Version: ceph-19.2.1-249.el9cp
Doc Type: No Doc Update
Doc Text:
No doc text needed
Clone Of:
Environment:
Last Closed: 2025-09-30 09:21:58 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-11156 0 None None None 2025-04-11 21:59:46 UTC
Red Hat Issue Tracker RHCSDASH-2023 0 None None None 2025-04-11 21:59:49 UTC
Red Hat Product Errata RHBA-2025:17047 0 None None None 2025-09-30 09:22:02 UTC

Description Manisha Saini 2025-04-11 21:59:16 UTC
Description of problem:
=========

Set the cluster-level QoS limits via the CLI (in GB) and verify the values on the Dashboard. Initially, the Dashboard shows the values as raw byte counts, which are hard to interpret. Only after clicking a specific field (e.g., export read bandwidth) does the value switch to a human-readable unit (GiB/s), which only approximately matches what was configured via the CLI.

# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 2GB --max_export_read_bw 3GB

[ceph: root@ceph-nfsclusterlive-vbrwai-node1-installer /]# ceph nfs cluster qos get nfsganesha
{
  "combined_rw_bw_control": false,
  "enable_bw_control": true,
  "enable_iops_control": false,
  "enable_qos": true,
  "max_export_read_bw": "3.0GB",
  "max_export_write_bw": "2.0GB",
  "qos_type": "PerShare"
}

Dashboard
----


Bandwidth QOS Type

Per Share
Allows individual per share setting of export and client bandwidth
Export read bandwidth (required)
3000000000
Limits the maximum bandwidth that can be used for export read per second
Export write bandwidth (required)
2000000000
Limits the maximum bandwidth that can be used for export write per second

------
Click on the fields on the Dashboard --> The values are now shown in a human-readable unit (GiB/s)
------

Bandwidth QOS Type

Per Share
Allows individual per share setting of export and client bandwidth
Export read bandwidth (required)
2.8 GiB/s
Limits the maximum bandwidth that can be used for export read per second
Export write bandwidth (required)
1.9 GiB/s
Limits the maximum bandwidth that can be used for export write per second
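
For context, the mismatch between the CLI value (3 GB) and the clicked-field value (2.8 GiB/s) is a decimal-versus-binary unit conversion: the CLI parses "3GB" as decimal gigabytes and stores bytes, while the Dashboard's human-readable formatter uses binary gibibytes. A minimal Python sketch of the arithmetic (illustration only, not Ceph or Dashboard code; helper names are made up):

def decimal_gb_to_bytes(value_gb):
    # Decimal gigabytes to bytes, as the CLI stores the limit.
    return int(value_gb * 1000**3)

def bytes_to_gib(num_bytes):
    # Bytes to binary gibibytes, as the Dashboard formats the value.
    return num_bytes / 1024**3

read_bw = decimal_gb_to_bytes(3)             # --max_export_read_bw 3GB
print(read_bw)                               # 3000000000 (raw value in the form field)
print(f"{bytes_to_gib(read_bw):.1f} GiB/s")  # 2.8 GiB/s (value shown after clicking)

The same arithmetic gives 1.9 GiB/s for the 2 GB write limit.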



Version-Release number of selected component (if applicable):
----------
# ceph --version
ceph version 19.2.1-126.el9cp (cfd2907537ba633f7d638895efd70ef5d0f1c99b) squid (stable)

How reproducible:
-----
2/2


Steps to Reproduce:
-------
1. Set the QoS values at the cluster level via the CLI

# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 2GB --max_export_read_bw 3GB

# ceph nfs cluster qos get nfsganesha
{
  "combined_rw_bw_control": false,
  "enable_bw_control": true,
  "enable_iops_control": false,
  "enable_qos": true,
  "max_export_read_bw": "3.0GB",
  "max_export_write_bw": "2.0GB",
  "qos_type": "PerShare"
}

2. Check the values set at the cluster level on the Dashboard


Actual Results:
--------
The values appear in bytes on the Dashboard even though they were set in GB via the CLI. Upon clicking the respective fields on the Dashboard, the values update and are displayed in GiB/s.

Expected Results:
--------
When QoS limits are set in GB via the CLI, the Dashboard should consistently display them in GB for better readability and user clarity (see the formatting sketch under Additional info).


Additional info:
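
As a rough illustration of the expected behaviour (not the actual fix), the stored byte value could be formatted back into decimal units so that a limit set as "3GB" on the CLI reads as "3 GB" on the Dashboard. Hypothetical Python sketch:

def format_decimal_bytes(num_bytes):
    # Format a byte count using decimal (SI) units.
    value = float(num_bytes)
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if value < 1000 or unit == "TB":
            return f"{value:g} {unit}"
        value /= 1000

print(format_decimal_bytes(3_000_000_000))  # 3 GB
print(format_decimal_bytes(2_000_000_000))  # 2 GB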

Comment 5 errata-xmlrpc 2025-09-30 09:21:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.1 security, bug fix and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:17047

