Bug 1476158

Summary: Posix xlator needs to reserve disk space to prevent the brick from getting full.
Product: [Community] GlusterFS
Component: posix
Version: 3.12
Status: CLOSED CURRENTRELEASE
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
CC: bugs, nbalacha, rhinduja, rhs-bugs, storage-qa-internal
Keywords: ZStream
Fixed In Version: glusterfs-4.1.3 (or later)
Clone Of: 1464350
Bug Depends On: 1464350
Type: Bug
Last Closed: 2018-08-29 03:34:52 UTC

Description Mohit Agrawal 2017-07-28 07:41:32 UTC
+++ This bug was initially created as a clone of Bug #1464350 +++

Description of problem:

Once a brick becomes completely (100%) full, the Gluster filesystem becomes inconsistent because xattr operations fail, and recovery is very difficult.

A rebalance cannot be used to migrate data off the full brick as xattr operations will fail.

DHT uses its min-free-disk option to try to keep some reserve space on all bricks. However, that approach is not foolproof; the reservation should ideally be enforced by the posix xlator itself, as sketched below.
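For illustration only: the posix-level check this report asks for amounts to comparing the brick filesystem's free space against a configured reserve. A minimal sketch in C, assuming a hypothetical helper name and brick path (brick_below_reserve and /bricks/brick1 are illustrative, not taken from the actual patch):

#include <stdio.h>
#include <sys/statvfs.h>

/* Return 1 when free space on the brick has fallen below reserve_pct
 * percent of the filesystem, 0 otherwise. */
static int
brick_below_reserve(const char *brick_path, double reserve_pct)
{
    struct statvfs vfs;

    if (statvfs(brick_path, &vfs) != 0)
        return 0; /* cannot tell; do not block writes on a stat failure */

    /* f_bavail = blocks available to unprivileged callers */
    double free_pct = 100.0 * (double)vfs.f_bavail / (double)vfs.f_blocks;

    return free_pct < reserve_pct;
}

int
main(void)
{
    const char *brick = "/bricks/brick1"; /* hypothetical brick path */

    if (brick_below_reserve(brick, 1.0))
        printf("below reserve: fail new user writes with ENOSPC\n");
    else
        printf("above reserve: writes proceed normally\n");
    return 0;
}

The upstream fix exposes a threshold of this kind as the storage.reserve volume option (default 1%), so the check runs in the brick process itself rather than relying on DHT's client-side min-free-disk heuristic.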


--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-06-23 03:48:47 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.3.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-06-28 05:09:20 EDT ---

This BZ was considered, but not approved to be fixed in the RHGS 3.3.0 release, and is being proposed for the next minor release of RHGS.

--- Additional comment from Mohit Agrawal on 2017-07-16 21:57:36 EDT ---

The patch has been posted upstream:

https://review.gluster.org/#/c/17780/
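For context, the enforcement side of such a patch can be pictured as a flag that a periodic disk-space checker sets and that write fops consult. The sketch below is illustrative, not the patch's actual code (out_of_space, posix_write_sketch, and is_internal_fop are hypothetical names); it assumes internal operations are exempted so that a rebalance can still drain a full brick, which is precisely the recovery path the description says breaks today:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Assumed to be refreshed periodically by a disk-space checker thread. */
static volatile bool out_of_space = false;

/* Sketch of a guarded write fop: ordinary client writes are refused with
 * ENOSPC once the reserve is breached, while internal operations (e.g. a
 * rebalance migrating data off the brick) are still allowed through. */
static int
posix_write_sketch(bool is_internal_fop)
{
    if (out_of_space && !is_internal_fop) {
        errno = ENOSPC;
        return -1;
    }
    /* ... perform the actual write here ... */
    return 0;
}

int
main(void)
{
    out_of_space = true; /* simulate the checker thread tripping the flag */

    if (posix_write_sketch(false) < 0 && errno == ENOSPC)
        printf("client write refused: reserve breached\n");
    if (posix_write_sketch(true) == 0)
        printf("internal fop allowed: rebalance can still drain the brick\n");
    return 0;
}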

Comment 1 Worker Ant 2017-07-28 07:51:19 UTC
REVIEW: https://review.gluster.org/17904 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#1) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 2 Worker Ant 2017-07-28 07:54:13 UTC
REVIEW: https://review.gluster.org/17904 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#2) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 3 Worker Ant 2017-08-09 09:02:06 UTC
REVIEW: https://review.gluster.org/18008 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#1) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 4 Worker Ant 2017-08-09 09:17:13 UTC
REVIEW: https://review.gluster.org/18008 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#2) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 5 Worker Ant 2017-08-09 09:25:33 UTC
REVIEW: https://review.gluster.org/18008 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#3) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 6 Worker Ant 2017-08-10 02:02:28 UTC
REVIEW: https://review.gluster.org/18008 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#4) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 7 Worker Ant 2017-08-29 08:56:08 UTC
REVIEW: https://review.gluster.org/18008 (posix: Needs to reserve disk space to prevent the brick from getting full) posted (#5) for review on release-3.12 by MOHIT AGRAWAL (moagrawa)

Comment 8 Amar Tumballi 2018-08-29 03:34:52 UTC
This update is done in bulk based on the state of the patch and the time since last activity. If the issue is still seen, please reopen the bug.