Bug 2359194

Summary: [NFS-Ganesha] [Dashboard][QoS] Mismatch in bandwidth values: 3 GB set via Dashboard is shown as 3.2 GB in CLI
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: Shweta Bhosale <shbhosal>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 8.1
CC: afrahman, akane, ceph-eng-bugs, cephqe-warriors, dtalweka, tserlin
Target Milestone: ---
Keywords: Reopened
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.1-132.el9cp
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Clones: 2359770 (view as bug list)
Environment:
Last Closed: 2025-06-26 12:23:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2359770

Description Manisha Saini 2025-04-11 22:17:46 UTC
Description of problem:
===========

When setting QoS bandwidth limits through the Ceph Dashboard (e.g., client read/write bandwidth), entering a value of 3 GiB results in the CLI displaying a slightly higher value, 3.2 GB.

This inconsistency can confuse users and appears to be a rounding or unit-conversion discrepancy between the Dashboard and the CLI.

The issue is likely due to a mismatch in how units are interpreted: the Dashboard field is labeled in binary units (GiB), while the CLI renders the stored byte count in decimal units (GB). 3 GiB = 3,221,225,472 bytes, which displays as approximately 3.2 GB.
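The suspected conversion mismatch can be sketched with a few lines of arithmetic (an assumption about the internals, not the actual Dashboard/CLI code: the Dashboard submits the value as GiB, the backend stores bytes, and the CLI formats those bytes as decimal GB):

```python
# Assumed unit mismatch: Dashboard input in binary GiB, CLI output in decimal GB.
GIB = 1024 ** 3   # binary gigabyte (gibibyte)
GB = 1000 ** 3    # decimal gigabyte

dashboard_value_gib = 3
stored_bytes = dashboard_value_gib * GIB   # 3221225472 bytes
cli_display_gb = stored_bytes / GB         # ~3.221

print(f"{cli_display_gb:.1f}GB")  # -> 3.2GB, matching the CLI output above
```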


Version-Release number of selected component (if applicable):
-------

# ceph --version
ceph version 19.2.1-126.el9cp (cfd2907537ba633f7d638895efd70ef5d0f1c99b) squid (stable)


How reproducible:
-----
2/2


Steps to Reproduce:
------
1. Create NFS Ganesha cluster

2. Go to the Ceph Dashboard → NFS → Select cluster → Set QoS → Set the read and write bandwidth to 3 GiB for client

3. Use the CLI to retrieve the configured QoS values:

# ceph nfs cluster qos get nfsganesha
{
  "combined_rw_bw_control": false,
  "enable_bw_control": true,
  "enable_iops_control": true,
  "enable_qos": true,
  "max_client_iops": 5000,
  "max_client_read_bw": "3.2GB",
  "max_client_write_bw": "3.2GB",
  "qos_type": "PerClient"
}

Actual results:
=============

The CLI returns the bandwidth value as 3.2 GB, which is slightly higher than the 3 GB set via the Dashboard.


Expected results:
=============

The CLI should report the value as 3 GiB (or the equivalent in the same unit the Dashboard accepts), matching what was set in the Dashboard. Both interfaces should use consistent unit representations and conversions to avoid confusion.
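One way to achieve the expected consistency would be for the CLI to render the stored byte count back in binary units. A hypothetical helper (the function name and approach are illustrative, not the actual fix shipped in ceph-19.2.1-132.el9cp):

```python
# Hypothetical formatter: render a byte count in binary (IEC) units so a
# value entered as "3 GiB" in the Dashboard round-trips as "3GiB" in the CLI.
def format_binary(num_bytes: int) -> str:
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    value = float(num_bytes)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            return f"{value:g}{unit}"
        value /= 1024

print(format_binary(3 * 1024 ** 3))  # -> 3GiB
```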


Additional info:

======
Dashboard output snippet
---

Set Cluster Quality of Service
Client ID
nfsganesha

Bandwidth QOS Type

Per Client
Allows individual per client setting of export and client bandwidth
Client read bandwidth (required)
3 GiB/s
Limits the maximum bandwidth that client can use for read per second
Client write bandwidth (required)
3 GiB/s
Limits the maximum bandwidth that client can use for write per second

IOPS QOS Type

Per Client
Allows individual per client setting of export and client IOPS
Client IOPS (required)
5000
Limits the maximum IOPS

Comment 8 errata-xmlrpc 2025-06-26 12:23:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775