Description of problem:
=======================
A data set created with fallocate crosses the storage.reserve space limit, resulting in the brick becoming 100% full.

Note: when I created the files using dd, the storage.reserve space limits were respected as expected.

Version-Release number of selected component (if applicable):
3.12.2-4.el7rhgs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1) Create a x3 volume and start it.
2) FUSE mount it on a client and make a note of the df -h output.
3) Set the storage.reserve volume option to 50:
   gluster volume set distrepx3 storage.reserve 50
4) Create data on the mount point using fallocate:
   for i in {1..10000}; do fallocate -l 1G test_file$i.img; done

Actual results:
===============
The fallocate-created data set crosses the storage.reserve space limit, leaving the brick 100% full.
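The dd/fallocate difference above likely comes down to the syscall path: fallocate(2) reserves blocks in one call without issuing any write(2)s, so a server-side check that only guards the write path can be bypassed. A minimal local sketch of the two allocation paths (file names and the temp directory are illustrative, not from the report):

```shell
# Compare block allocation via fallocate(2) vs. the write(2) path used by dd.
tmpdir=$(mktemp -d)

# fallocate(2): blocks are reserved immediately, no data is written.
fallocate -l 1M "$tmpdir/prealloc.img"

# dd: data flows through write(2), the path a reserve check typically guards.
dd if=/dev/zero of="$tmpdir/written.img" bs=1M count=1 status=none

# Both files report the same apparent size and allocated blocks.
stat -c '%n: %s bytes, %b blocks' "$tmpdir"/*.img

rm -rf "$tmpdir"
```

Both files end up 1 MiB with blocks actually allocated on disk, which is why a loop of fallocate calls can fill a brick past the reserve even though no write(2) ever fires.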
Verified this BZ on glusterfs version 3.12.2-8.el7rhgs.x86_64. fallocate now respects the storage.reserve limit. Hence, moving this BZ to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607