Bug 2348670

Summary: [NFS-Ganesha][Ceph-Mgr] Correct the min and max supported values for bandwidth and iops limits in rate limiting
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: Shweta Bhosale <shbhosal>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: high
Docs Contact:
Priority: unspecified
Version: 8.0
CC: akane, cephqe-warriors, shbhosal, tserlin, vdas
Target Milestone: ---
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.1-33.el9cp
Doc Type: No Doc Update
Doc Text: This is a defect for a new 8.1 feature.
Story Points: ---
Clone Of:
: 2351205 (view as bug list)
Environment:
Last Closed: 2025-06-26 12:26:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2351205

Description Manisha Saini 2025-02-27 07:13:04 UTC
Description of problem:
==========

Currently, the maximum and minimum values allowed for Bandwidth and IOPS via ceph-mgr commands are incorrect.

Per the design, the allowed ranges are:

For all bandwidth values: minimum 1 Mbps, maximum 4 Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS

Test logs
========
#  ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 1
Error EINVAL: Provided IOS count value is not in range, Please enter a value between 1000 (1K) and 500000 (5L) bytes

# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 5GB --max_export_read_bw 5GB
Error EINVAL: Invalid bandwidth value. Provided bandwidth value is not in range, Please enter a value between 1000000 (1MB) and 2000000000 (2GB) bytes


Version-Release number of selected component (if applicable):
===================

[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# rpm -qa | grep nfs
libnfsidmap-2.5.4-27.el9.x86_64
nfs-utils-2.5.4-27.el9.x86_64
nfs-ganesha-selinux-6.5-1.7.el9cp.noarch
nfs-ganesha-6.5-1.7.el9cp.x86_64
nfs-ganesha-ceph-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-grace-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-urls-6.5-1.7.el9cp.x86_64
nfs-ganesha-rgw-6.5-1.7.el9cp.x86_64
nfs-ganesha-utils-6.5-1.7.el9cp.x86_64


[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# ceph --version
ceph version 19.2.0-80.2.TEST.ganeshafeatures002.el9cp (ffc22ab18dc4b177c9aa46f98447068155679ff0) squid (stable)


How reproducible:
==============
Every time


Steps to Reproduce:
============
1. Create NFS Ganesha cluster
2. Enable the "bandwidth_control" and "ops_control" on the cluster level


Actual results:
===========
The max and min values currently allowed do not match the QoS design. The ceph-mgr commands need to enforce the designed limits.

Expected results:
==========
The max and min values should match the design:

For all bandwidth values: minimum 1 Mbps, maximum 4 Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS
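The expected range checks can be sketched as below. This is an illustrative sketch only, not the actual ceph-mgr validation code; the constant names and helper functions are hypothetical, and the byte-based constants assume the same unit convention the current error messages use (e.g. "1000000 (1MB) ... bytes").

```python
# Hypothetical sketch of the range validation the QoS design calls for.
# Constant names and functions are illustrative, not the real ceph-mgr API.

BW_MIN = 1_000_000          # design minimum bandwidth: 1 Mbps
BW_MAX = 4_000_000_000      # design maximum bandwidth: 4 Gbps
IOPS_MIN = 10               # design minimum IOPS
IOPS_MAX = 16384            # design maximum IOPS

def bandwidth_in_range(bw: int) -> bool:
    """Return True if the bandwidth value is within the design limits."""
    return BW_MIN <= bw <= BW_MAX

def iops_in_range(iops: int) -> bool:
    """Return True if the IOPS value is within the design limits."""
    return IOPS_MIN <= iops <= IOPS_MAX
```

Under these checks, the values from the test logs above would be rejected for the right reason: `--max_export_iops 1` falls below the 10 IOPS minimum, and a 5 GB bandwidth limit exceeds the 4 Gbps maximum.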


Additional info:

Comment 6 errata-xmlrpc 2025-06-26 12:26:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775