Bug 1560411 - fallocate-created data set is crossing storage reserve space limits, resulting in 100% brick full
Summary: fallocate-created data set is crossing storage reserve space limits, resulting in 100% brick full
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: mainline
Hardware: x86_64
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On: 1550991
Blocks:
 
Reported: 2018-03-26 05:50 UTC by Mohit Agrawal
Modified: 2018-06-20 18:02 UTC
CC List: 6 users

Fixed In Version: glusterfs-v4.1.0
Clone Of: 1550991
Environment:
Last Closed: 2018-06-20 18:02:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2018-03-26 07:41:39 UTC
REVIEW: https://review.gluster.org/19771 (posix: reserve option behavior is not correct while using fallocate) posted (#1) for review on master by MOHIT AGRAWAL

Comment 2 Worker Ant 2018-04-11 07:03:56 UTC
COMMIT: https://review.gluster.org/19771 committed in master by "Amar Tumballi" <amarts> with a commit message: posix: reserve option behavior is not correct while using fallocate

Problem: The storage.reserve option is not enforced correctly when
         disk space is allocated through fallocate.

Solution: posix_disk_space_check_thread_proc calls posix_disk_space_check
          every 5 seconds to monitor disk space and set a flag in the posix
          priv structure. Within that 5-second window a user can create a
          file with fallocate that is large enough to reach the posix reserve
          limit, yet no error is reported even though the limit has been hit.
          To resolve this, call posix_disk_space_check on every fallocate fop
          instead of relying only on the thread that runs every 5 seconds
          (see the sketch after this commit message).

BUG: 1560411
Signed-off-by: Mohit Agrawal <moagrawa>
Change-Id: I39ba9390e2e6d084eedbf3bcf45cd6d708591577
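[Editor's note] For readers unfamiliar with the mechanism, below is a minimal C sketch of the idea behind the patch, not the actual posix xlator code: query the filesystem with statvfs() at the time of each fallocate and refuse the operation with ENOSPC once the storage.reserve percentage would be crossed, instead of relying on a flag that posix_disk_space_check_thread_proc refreshes only every 5 seconds. The struct layout, field names, and helper names here are illustrative assumptions.

/*
 * Minimal sketch (assumed names, not the GlusterFS posix xlator code):
 * enforce the storage.reserve threshold at fallocate time rather than
 * via a flag updated by a 5-second background thread.
 */
#include <sys/statvfs.h>
#include <sys/types.h>
#include <fcntl.h>
#include <errno.h>
#include <stdint.h>

struct posix_priv_sketch {
    const char *base_path;    /* brick export directory               */
    uint32_t    disk_reserve; /* storage.reserve, percent of capacity */
};

/* Returns 0 while enough free space remains, -ENOSPC once the free
 * space on the brick falls to or below the configured reserve. */
static int
posix_reserve_check_sketch(const struct posix_priv_sketch *priv)
{
    struct statvfs buf;

    if (statvfs(priv->base_path, &buf) != 0)
        return -errno;

    double free_pct = (100.0 * buf.f_bavail) / buf.f_blocks;

    return (free_pct <= priv->disk_reserve) ? -ENOSPC : 0;
}

/* Called at the start of a fallocate fop: fail fast with ENOSPC
 * instead of waiting up to 5 seconds for the background thread to
 * notice that the brick has crossed the reserve. */
static int
posix_do_fallocate_sketch(const struct posix_priv_sketch *priv, int fd,
                          off_t offset, off_t len)
{
    int ret = posix_reserve_check_sketch(priv);
    if (ret < 0)
        return ret;

    /* posix_fallocate() is the libc call; GlusterFS wraps it in its own
     * fop, but the placement of the reserve check is the point here. */
    return -posix_fallocate(fd, offset, len);
}

The trade-off is one extra statvfs() call per fallocate in exchange for an enforcement window that no longer depends on the background thread's 5-second interval.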

Comment 3 Shyamsundar 2018-06-20 18:02:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

