Description of problem:
=======================
In the current design, the storage.reserve option takes a numeric percentage value as input and reserves that percentage of disk space.

Input in percentage:
====================
Say we have 100TB of backend bricks and enable the storage.reserve option. The default minimum value is 1%, and that share of the disk is held back. 1% of 100TB is 1TB, so with this option enabled we end up leaving 1TB of brick space unused, which can be worrying. The larger the brick, the more space goes unutilized.

Input in size:
==============
If we instead take a size as input, an admin who wants to reserve 200GB on 1TB bricks frees up 800GB of disk space that can be utilized.

Version-Release number of selected component (if applicable):
3.12.2-8.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a gluster volume and start it.
2) Mount it on a client.
3) Set storage.reserve limits using the command below:
   gluster volume set <volname> storage.reserve <number>
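The arithmetic behind the two input modes can be sketched as below. This is an illustration only, not the GlusterFS implementation; the helper names and the decimal TB/GB units (matching the 1TB = 1000GB arithmetic used in this report) are assumptions.

```python
# Decimal units, as used in the figures above (1TB = 1000GB).
TB = 10 ** 12
GB = 10 ** 9

def reserved_by_percent(brick_size: int, percent: int) -> int:
    """Bytes held back when storage.reserve is a percentage."""
    return brick_size * percent // 100

def reserved_by_size(reserve_bytes: int) -> int:
    """Bytes held back when storage.reserve is an absolute size."""
    return reserve_bytes

# Percentage mode: the default 1% of a 100TB backend is 1TB unused.
print(reserved_by_percent(100 * TB, 1) // TB)          # 1 (TB held back)

# Size mode: a fixed 200GB reserve on a 1TB brick leaves 800GB usable.
usable = 1 * TB - reserved_by_size(200 * GB)
print(usable // GB)                                    # 800 (GB usable)
```

The point of the size-based mode is that the reservation stays constant as bricks grow, instead of scaling with brick size.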
Build Version: glusterfs-6.0-7.el7rhgs.x86_64

The storage.reserve volume option now accepts values in bytes (KB, MB, GB, etc.), and the reserve value is enforced on the bricks. Performed all the test cases in the Polarion link attached to this bug; all test cases passed. Hence, moving this bug to VERIFIED.
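A sketch of the verified usage follows. The volume name and values are placeholders, not taken from the actual test run recorded in this bug.

```
# Size-based reserve, as verified in this build: suffix the value
# with a unit to reserve an absolute amount per brick.
gluster volume set <volname> storage.reserve 200GB

# Plain numeric value, as in the original design: interpreted as a
# percentage of the brick.
gluster volume set <volname> storage.reserve 1
```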
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249