Bug 2359194 - [NFS-Ganesha] [Dashboard][QoS] Mismatch in bandwidth values: 3 GB set via Dashboard is shown as 3.2 GB in CLI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.1
Assignee: Shweta Bhosale
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 2359770
 
Reported: 2025-04-11 22:17 UTC by Manisha Saini
Modified: 2025-06-26 12:24 UTC
CC List: 6 users

Fixed In Version: ceph-19.2.1-132.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 2359770
Environment:
Last Closed: 2025-06-26 12:23:53 UTC
Embargoed:




Links:
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHCEPH-11157    0        None      None    None     2025-04-11 22:19:47 UTC
Red Hat Product Errata  RHSA-2025:9775  0        None      None    None     2025-06-26 12:23:59 UTC

Description Manisha Saini 2025-04-11 22:17:46 UTC
Description of problem:
===========

When setting QoS bandwidth limits through the Ceph Dashboard (e.g., client read/write bandwidth), entering a value of 3 GB results in the CLI displaying a slightly higher value, 3.2 GB.

This inconsistency can confuse users and looks like a rounding or conversion discrepancy between the Dashboard and the CLI.

The issue is likely due to a unit mismatch between the Dashboard UI and the CLI backend: the Dashboard field is denominated in binary units (GiB, 2^30 bytes), while the CLI appears to render the stored byte count in decimal units (GB, 10^9 bytes).
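
The arithmetic reproduces the mismatch exactly. A minimal Python check, assuming the limit is stored internally as raw bytes (the variable names are illustrative, not taken from the Ceph code):

# 3 GiB as entered in the Dashboard form (binary units: 1 GiB = 2**30 bytes)
dashboard_bytes = 3 * 2**30               # 3221225472 bytes

# The same byte count expressed in decimal GB (1 GB = 10**9 bytes),
# rounded to one decimal place as in the CLI output
print(round(dashboard_bytes / 10**9, 1))  # 3.2 -> displayed as "3.2GB"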


Version-Release number of selected component (if applicable):
-------

# ceph --version
ceph version 19.2.1-126.el9cp (cfd2907537ba633f7d638895efd70ef5d0f1c99b) squid (stable)


How reproducible:
-----
2/2


Steps to Reproduce:
------
1. Create NFS Ganesha cluster

2. Go to the Ceph Dashboard → NFS → select the cluster → Set QoS → set the client read and write bandwidth to 3 GB

3. Use the CLI to retrieve the configured QoS values:

# ceph nfs cluster qos get nfsganesha
{
  "combined_rw_bw_control": false,
  "enable_bw_control": true,
  "enable_iops_control": true,
  "enable_qos": true,
  "max_client_iops": 5000,
  "max_client_read_bw": "3.2GB",
  "max_client_write_bw": "3.2GB",
  "qos_type": "PerClient"
}

Actual results:
=============

The CLI returns the bandwidth value as 3.2 GB, which is slightly higher than the 3 GB set via the Dashboard.
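
This is consistent with the stored byte count being rendered with decimal (SI) prefixes. A minimal sketch of such a formatter, assuming raw bytes are stored internally (format_si is a hypothetical name, not the actual Ceph function):

def format_si(n_bytes: float) -> str:
    """Render a byte count with decimal (SI) prefixes, e.g. 3221225472 -> '3.2GB'."""
    for unit in ("B", "KB", "MB", "GB"):
        if abs(n_bytes) < 1000:
            break
        n_bytes /= 1000
    else:
        unit = "TB"
    return f"{round(n_bytes, 1):g}{unit}"

print(format_si(3 * 2**30))  # '3.2GB' -- the value reported by the CLI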


Expected results:
=============

The CLI should return the value as 3.0 GB, matching what was set in the Dashboard. Both interfaces should use consistent unit representations and conversions to avoid confusion.
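
One way to achieve that consistency (a sketch of the general approach, not necessarily the fix shipped in ceph-19.2.1-132.el9cp) is to render the stored byte count with the same binary (IEC) prefixes the Dashboard uses for input, so values round-trip exactly:

def format_iec(n_bytes: float) -> str:
    """Render a byte count with binary (IEC) prefixes, e.g. 3221225472 -> '3GiB'."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if abs(n_bytes) < 1024:
            break
        n_bytes /= 1024
    else:
        unit = "TiB"
    return f"{round(n_bytes, 1):g}{unit}"

print(format_iec(3 * 2**30))  # '3GiB' -- matches the 3 GiB entered in the Dashboard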


Additional info:

======
Dashboard output snippet
---

Set Cluster Quality of Service
Client ID: nfsganesha
Bandwidth QOS Type: Per Client (allows individual per-client setting of export and client bandwidth)
Client read bandwidth (required): 3 GiB/s (limits the maximum bandwidth the client can use for reads per second)
Client write bandwidth (required): 3 GiB/s (limits the maximum bandwidth the client can use for writes per second)
IOPS QOS Type: Per Client (allows individual per-client setting of export and client IOPS)
Client IOPS (required): 5000 (limits the maximum IOPS)

Comment 8 errata-xmlrpc 2025-06-26 12:23:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

