Description of problem:
==========
Currently, the maximum and minimum values allowed for bandwidth and IOPS via the ceph-mgr commands are incorrect. Per the QoS design, the allowed ranges are:

For all bandwidth values: minimum 1Mbps, maximum 4Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS

Test logs
========
# ceph nfs cluster qos enable ops_control nfsganesha PerShare --max_export_iops 1
Error EINVAL: Provided IOS count value is not in range, Please enter a value between 1000 (1K) and 500000 (5L) bytes

# ceph nfs cluster qos enable bandwidth_control nfsganesha PerShare --max_export_write_bw 5GB --max_export_read_bw 5GB
Error EINVAL: Invalid bandwidth value. Provided bandwidth value is not in range, Please enter a value between 1000000 (1MB) and 2000000000 (2GB) bytes

Version-Release number of selected component (if applicable):
===================
[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# rpm -qa | grep nfs
libnfsidmap-2.5.4-27.el9.x86_64
nfs-utils-2.5.4-27.el9.x86_64
nfs-ganesha-selinux-6.5-1.7.el9cp.noarch
nfs-ganesha-6.5-1.7.el9cp.x86_64
nfs-ganesha-ceph-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-grace-6.5-1.7.el9cp.x86_64
nfs-ganesha-rados-urls-6.5-1.7.el9cp.x86_64
nfs-ganesha-rgw-6.5-1.7.el9cp.x86_64
nfs-ganesha-utils-6.5-1.7.el9cp.x86_64

[ceph: root@ceph-manisaini-yuvs96-node1-installer /]# ceph --version
ceph version 19.2.0-80.2.TEST.ganeshafeatures002.el9cp (ffc22ab18dc4b177c9aa46f98447068155679ff0) squid (stable)

How reproducible:
==============
Every time

Steps to Reproduce:
============
1. Create an NFS Ganesha cluster.
2. Enable "bandwidth_control" and "ops_control" at the cluster level (the commands used are shown in the test logs above).

Actual results:
===========
The maximum and minimum values allowed do not match the QoS design; the limits enforced by the ceph-mgr commands need to be adjusted accordingly.

Expected results:
==========
The maximum and minimum values should match the design:

For all bandwidth values: minimum 1Mbps, maximum 4Gbps
For all IOPS values: minimum 10 IOPS, maximum 16384 IOPS

Additional info:
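For reference, below is a minimal sketch of the range validation the ceph-mgr NFS module would need in order to match the design. This is not the actual mgr code: the function and constant names are hypothetical, and interpreting "1Mbps"/"4Gbps" as bytes per second (to stay consistent with the byte-based error messages in the test logs above) is an assumption that would need to be confirmed against the design.

# Hypothetical validation sketch, assuming the limits stated in the design
# notes above. Names (validate_qos_bandwidth, validate_qos_iops, the
# MIN_*/MAX_* constants) are illustrative, not the actual ceph-mgr API.

# Bandwidth limits from the design, interpreted here as bytes per second
# (assumption; the design text says "Mbps"/"Gbps").
MIN_BW = 1_000_000          # 1M
MAX_BW = 4_000_000_000      # 4G

# IOPS limits from the design.
MIN_IOPS = 10
MAX_IOPS = 16384

def validate_qos_bandwidth(value: int) -> None:
    """Reject bandwidth values outside the designed 1M..4G range."""
    if not MIN_BW <= value <= MAX_BW:
        raise ValueError(
            f"Provided bandwidth value is not in range, please enter a "
            f"value between {MIN_BW} (1M) and {MAX_BW} (4G)"
        )

def validate_qos_iops(value: int) -> None:
    """Reject IOPS values outside the designed 10..16384 range."""
    if not MIN_IOPS <= value <= MAX_IOPS:
        raise ValueError(
            f"Provided IOPS count is not in range, please enter a "
            f"value between {MIN_IOPS} and {MAX_IOPS}"
        )

if __name__ == "__main__":
    # The failing case from the test log: --max_export_iops 1 should still
    # be rejected, but because it is below the designed minimum of 10,
    # not because it is below 1000.
    try:
        validate_qos_iops(1)
    except ValueError as e:
        print(e)

With limits like these, the "--max_export_iops 1" call from the test log is still rejected, but for the reason the design intends (below the minimum of 10) rather than against the incorrect 1000-500000 range currently enforced.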
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2025:9775