Description of problem: The difference arises because FUSE reports a block size and fragment size of 128 KiB instead of the backend filesystem's block size. When a statvfs call is made, the buffer contents are:

        blocks   bfree    bavail
gfapi   259584   248433   251028
brick   259584   251028   251028
fuse    8112     7763     7844

As you can see, gfapi and brick match in total blocks and bavail. bfree differs because of the posix xlator, which deducts 1% of the total blocks from the free blocks. The numbers in the fuse row are the gfapi numbers divided by 32 (the counts obtained from the brick are in 4 KiB blocks, while fuse communicates in 128 KiB blocks). Converting to a larger unit truncates the values. df on a mount point gets its data from fuse, hence the discrepancy.
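The arithmetic above can be sketched as follows (values copied from the report; the 4 KiB backend block size and 128 KiB fuse block size are as stated, the variable names are illustrative):

```python
backend_bsize = 4096                   # brick's block size (4 KiB)
fuse_bsize = 128 * 1024                # block size fuse used to advertise
factor = fuse_bsize // backend_bsize   # 32

blocks, bfree, bavail = 259584, 251028, 251028  # brick row

# The posix xlator deducts 1% of the total blocks from the free blocks:
gfapi_bfree = bfree - blocks // 100             # 248433

# fuse rescales the counts to 128 KiB units, truncating the remainder:
fuse_blocks = blocks // factor                  # 8112
fuse_bfree = gfapi_bfree // factor              # 7763
fuse_bavail = bavail // factor                  # 7844

# The truncation is the discrepancy df sees: 251028 % 32 == 20, so the
# fuse mount under-reports available space by 20 backend blocks.
```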
*** Bug 1566823 has been marked as a duplicate of this bug. ***
REVIEW: https://review.gluster.org/19873 (fuse: retire statvfs tweak) posted (#1) for review on master by Csaba Henk
COMMIT: https://review.gluster.org/19873 committed in master by "Raghavendra G" <rgowdapp> with a commit message:

fuse: retire statvfs tweak

The fuse xlator used to override the filesystem block size of the storage backend to indicate its preferences. Now we retire this tweak and pass on what we get from the backend. This fixes the anomaly reported in the referred BUG.

For more background, see the following emails, which were sent out to the gluster-devel and gluster-users mailing lists to gauge whether anyone sees any use of this tweak:

http://lists.gluster.org/pipermail/gluster-devel/2018-March/054660.html
http://lists.gluster.org/pipermail/gluster-users/2018-March/033775.html

No one vetoed the removal, and it got endorsement:

http://lists.gluster.org/pipermail/gluster-devel/2018-March/054686.html

BUG: 1523219
Change-Id: I3b7111d3037a1b91a288c1589f407b2c48d81bfa
Signed-off-by: Csaba Henk <csaba>
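The effect of the change can be illustrated with a small sketch. This is Python pseudocode of the logic, not the actual C xlator code; the function names are hypothetical:

```python
FUSE_BSIZE = 128 * 1024  # the hard-coded block size the tweak imposed

def tweaked_statvfs(bsize, blocks, bfree, bavail):
    """Old behavior: rescale the backend's counts to 128 KiB units,
    losing up to factor-1 backend blocks per field to truncation."""
    factor = FUSE_BSIZE // bsize
    return (FUSE_BSIZE, blocks // factor, bfree // factor, bavail // factor)

def passthrough_statvfs(bsize, blocks, bfree, bavail):
    """New behavior: report exactly what the backend returned."""
    return (bsize, blocks, bfree, bavail)

# With the tweak, the gfapi row from the report collapses to the fuse row:
# tweaked_statvfs(4096, 259584, 248433, 251028) -> (131072, 8112, 7763, 7844)
```

After the fix, df on a fuse mount and a statvfs call through gfapi report the same counts, since no rescaling takes place.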
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report. glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html [2] https://www.gluster.org/pipermail/gluster-users/