Bug 2348670 - [NFS-Ganesha][Ceph-Mgr] Correct the min and max supported values for bandwidth and iops limits in rate limiting
Summary: [NFS-Ganesha][Ceph-Mgr] Correct the min and max supported values for bandwidth and iops limits in rate limiting
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.1
Assignee: Shweta Bhosale
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 2351205
 
Reported: 2025-02-27 07:13 UTC by Manisha Saini
Modified: 2025-06-26 12:26 UTC
CC List: 5 users

Fixed In Version: ceph-19.2.1-33.el9cp
Doc Type: No Doc Update
Doc Text:
This is a defect for the new 8.1 feature.
Clone Of:
Cloned to: 2351205
Environment:
Last Closed: 2025-06-26 12:26:26 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-10673 0 None None None 2025-02-27 07:14:40 UTC
Red Hat Product Errata RHSA-2025:9775 0 None None None 2025-06-26 12:26:33 UTC

Description Manisha Saini 2025-02-27 07:13:04 UTC
Description of problem:
==========

Currently, the minimum and maximum values allowed for the bandwidth and IOPS limits via the ceph-mgr QoS commands are incorrect.

Per the design, the allowed minimum and maximum values are as follows (boundary examples after this list):

For all bandwidth values: minimum 1 Mbps, maximum 4 Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS
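
As an illustration only (a sketch, not output from an actual run: it reuses the CLI syntax and the nfsganesha test cluster from the logs below, and assumes the byte-style value suffixes such as 1MB and 4GB correspond to the design's 1 Mbps and 4 Gbps limits), commands at the design boundaries would be expected to succeed once the limits are corrected:

# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 10
# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 16384
# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 1MB --max_export_read_bw 1MB
# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 4GB --max_export_read_bw 4GB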

Test logs
========
#  ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 1
Error EINVAL: Provided IOS count value is not in range, Please enter a value between 1000 (1K) and 500000 (5L) bytes

# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 5GB --max_export_read_bw 5GB
Error EINVAL: Invalid bandwidth value. Provided bandwidth value is not in range, Please enter a value between 1000000 (1MB) and 2000000000 (2GB) bytes


Version-Release number of selected component (if applicable):
===================

[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# rpm -qa | grep nfs
libnfsidmap-2.5.4-27.el9.x86_64
nfs-utils-2.5.4-27.el9.x86_64
nfs-ganesha-selinux-6.5-1.7.el9cp.noarch
nfs-ganesha-6.5-1.7.el9cp.x86_64
nfs-ganesha-ceph-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-grace-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-urls-6.5-1.7.el9cp.x86_64
nfs-ganesha-rgw-6.5-1.7.el9cp.x86_64
nfs-ganesha-utils-6.5-1.7.el9cp.x86_64


[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# ceph --version
ceph version 19.2.0-80.2.TEST.ganeshafeatures002.el9cp (ffc22ab18dc4b177c9aa46f98447068155679ff0) squid (stable)


How reproducible:
==============
Every time


Steps to Reproduce:
============
1. Create an NFS Ganesha cluster
2. Enable "bandwidth_control" and "ops_control" at the cluster level
3. Set bandwidth and IOPS limits and compare the accepted range against the design (example commands below)
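
For example, the commands from the test logs above trigger the incorrect range checks:

# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 1
# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 5GB --max_export_read_bw 5GB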


Actual results:
===========
The minimum and maximum values allowed do not match the QoS design; the limits enforced by the ceph-mgr commands need to be set accordingly.

Expected results:
==========
The minimum and maximum values should match the design:

For all bandwidth values: minimum 1 Mbps, maximum 4 Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS
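
Conversely (again a sketch under the same assumptions as the boundary examples above, not output from an actual run), values just outside the design range should be rejected, with the corrected bounds reported in the error message:

# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 9
# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 16385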


Additional info:

Comment 6 errata-xmlrpc 2025-06-26 12:26:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

