Bug 1561129 - When storage reserve limit is reached, appending data to an existing file throws EROFS error
Summary: When storage reserve limit is reached, appending data to an existing file throws EROFS error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1554291
Blocks: 1597154
 
Reported: 2018-03-27 16:37 UTC by Ravishankar N
Modified: 2018-07-02 07:17 UTC
CC List: 1 user

Fixed In Version: glusterfs-v4.1.0
Clone Of: 1554291
: 1597154 (view as bug list)
Environment:
Last Closed: 2018-06-20 18:03:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-03-27 16:37:14 UTC
Description of problem:
=======================
When the storage reserve limit is reached, appending data to an existing file throws an EROFS error.

Version-Release number of selected component (if applicable):
3.12.2-5.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a replica 3 (x3) volume and start it.
2) Set the storage.reserve volume option to 90:
gluster volume set distrepx3 storage.reserve 90
3) Create data using dd until the backend bricks reach the reserve limit.
4) Now, try appending data to an existing file (a consolidated command sketch follows these steps).
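
A minimal shell sketch of the steps above; the server names, brick paths, mount point, and file name are illustrative assumptions (only the volume name and the storage.reserve value are taken from step 2):

# Create and start a replica 3 volume (server names and brick paths are hypothetical).
gluster volume create distrepx3 replica 3 server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start distrepx3

# Reserve 90% of the brick space so the reserve limit is reached quickly.
gluster volume set distrepx3 storage.reserve 90

# Mount the volume and write data until the bricks hit the reserve limit.
mount -t glusterfs server1:/distrepx3 /mnt/distrepx3
dd if=/dev/zero of=/mnt/distrepx3/testfile bs=1M count=25   # repeat until writes start failing

# Append to the existing file; before the fix this failed with EROFS instead of ENOSPC.
cat /etc/redhat-release >> /mnt/distrepx3/testfile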

Actual results:
===============
Appending data to an existing file throws an EROFS error:
cat /etc/redhat-release  > 25MB_997
-bash: 25MB_997: Read-only file system

Expected results:
=================
It should throw the appropriate error, ENOSPC.
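
For reference, the message shown under "Actual results" is the strerror text for EROFS, while the expected error is ENOSPC. A quick way to see both error strings on a Linux box (an aside for clarity, not part of the reproduction steps):

python3 -c 'import errno, os; print(errno.EROFS, os.strerror(errno.EROFS)); print(errno.ENOSPC, os.strerror(errno.ENOSPC))'
# on Linux this prints:
# 30 Read-only file system
# 28 No space left on device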


Comment 1 Worker Ant 2018-03-28 00:13:23 UTC
REVIEW: https://review.gluster.org/19781 (afr: add quorum checks in pre-op) posted (#1) for review on master by Ravishankar N

Comment 2 Worker Ant 2018-04-05 12:22:41 UTC
COMMIT: https://review.gluster.org/19781 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: add quorum checks in pre-op

Problem:
We seem to be winding the FOP even when the pre-op did not succeed on a quorum
of bricks, and then failing the FOP with EROFS because it did not meet quorum.
This essentially masks the actual error that caused the pre-op to fail (see
the BZ).

Fix:
Skip the FOP phase if pre-op quorum is not met and go to post-op.

Fixes: 1561129

Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar>

Comment 3 Ravishankar N 2018-04-16 10:28:26 UTC
Moving it back to POST since one more patch for eager lock fixes is needed to verify the steps in the bug description without causing the mount to crash.

Comment 4 Worker Ant 2018-04-16 10:29:43 UTC
REVIEW: https://review.gluster.org/19879 (afr: fixes to afr-eager locking) posted (#1) for review on master by Ravishankar N

Comment 5 Worker Ant 2018-04-18 07:49:38 UTC
COMMIT: https://review.gluster.org/19879 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: fixes to afr-eager locking

1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure so that the GF_ASSERT in afr_unlock() does not
crash.

2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.

Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar>

Comment 6 Shyamsundar 2018-06-20 18:03:13 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

