Bug 1597154 - When storage reserve limit is reached, appending data to an existing file throws EROFS error
Summary: When storage reserve limit is reached, appending data to an existing file throws EROFS error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1554291 1561129
Blocks:
 
Reported: 2018-07-02 07:17 UTC by Ravishankar N
Modified: 2018-08-20 07:01 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-3.12.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1561129
Environment:
Last Closed: 2018-08-20 07:01:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
backported patch (3.75 KB, application/mbox)
2018-07-02 07:33 UTC, Ravishankar N

Description Ravishankar N 2018-07-02 07:17:28 UTC
+++ This bug was initially created as a clone of Bug #1561129 +++

Description of problem:
=======================
When storage reserve limit is reached, appending data to an existing file throws EROFS error.

Version-Release number of selected component (if applicable):
3.12.2-5.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a replica-3 (x3) volume and start it.
2) Set the storage.reserve volume option to 90:
gluster volume set distrepx3 storage.reserve 90
3) Create data using dd until the backend bricks reach the reserve limit.
4) Now, try appending data to an existing file (a consolidated script is sketched below).
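
A consolidated reproduction sketch in shell follows; the server names, brick paths and mount point are assumptions for illustration, and only the volume name and reserve value come from the report.

# Reproduction sketch -- server names, brick paths and mount point are placeholders.
gluster volume create distrepx3 replica 3 \
    server1:/bricks/b1/distrepx3 server2:/bricks/b1/distrepx3 server3:/bricks/b1/distrepx3
gluster volume start distrepx3
gluster volume set distrepx3 storage.reserve 90

mount -t glusterfs server1:/distrepx3 /mnt/distrepx3

# Fill the volume with 25 MB files until dd fails, i.e. the bricks hit the
# reserve limit.
i=0
while dd if=/dev/zero of=/mnt/distrepx3/25MB_$i bs=1M count=25 2>/dev/null; do
    i=$((i + 1))
done

# Writing to an existing file should now fail; before the fix the error
# surfaced as EROFS instead of the expected ENOSPC.
cat /etc/redhat-release > /mnt/distrepx3/25MB_0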

Actual results:
===============
Appending data to an existing file throws an EROFS error:
cat /etc/redhat-release > 25MB_997
-bash: 25MB_997: Read-only file system

Expected results:
=================
It should throw the appropriate error, ENOSPC (No space left on device).
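
One way to confirm which errno actually reaches the application is to trace the client-side syscalls (a sketch; assumes strace is available on the client and reuses the placeholder mount point from the sketch above):

# Before the fix, the open/write on the FUSE mount fails with EROFS
# (Read-only file system); with the fix it should fail with ENOSPC.
strace -f -e trace=open,openat,write \
    sh -c 'cat /etc/redhat-release > /mnt/distrepx3/25MB_0'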


--- Additional comment from Worker Ant on 2018-03-27 20:13:23 EDT ---

REVIEW: https://review.gluster.org/19781 (afr: add quorum checks in pre-op) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-04-05 08:22:41 EDT ---

COMMIT: https://review.gluster.org/19781 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: add quorum checks in pre-op

Problem:
We seem to be winding the FOP if pre-op did not succeed on quorum bricks
and then failing the FOP with EROFS since the fop did not meet quorum.
This essentially masks the actual error due to which pre-op failed. (See
BZ).

Fix:
Skip FOP phase if pre-op quorum is not met and go to post-op.

Fixes: 1561129

Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar>

--- Additional comment from Ravishankar N on 2018-04-16 06:28:26 EDT ---

Moving this back to POST since one more patch (for eager-lock fixes) is needed before the steps in the bug description can be verified without the mount crashing.

--- Additional comment from Worker Ant on 2018-04-16 06:29:43 EDT ---

REVIEW: https://review.gluster.org/19879 (afr: fixes to afr-eager locking) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-04-18 03:49:38 EDT ---

COMMIT: https://review.gluster.org/19879 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: fixes to afr-eager locking

1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure so that the GF_ASSERT in afr_unlock() does not
crash.

2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.

Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
fixes: bz#1561129
Signed-off-by: Ravishankar N <ravishankar>

--- Additional comment from Shyamsundar on 2018-06-20 14:03:13 EDT ---

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 1 Ravishankar N 2018-07-02 07:33:10 UTC
Created attachment 1455872 [details]
backported patch

To be sent after the following are merged in 3.12 (one way to check is sketched after the list):

afr: don't treat all cases all bricks being blamed as split-brain
afr: capture the correct errno in post-op quorum check
afr: add quorum checks in post-op
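
One rough way to check whether those prerequisites have landed on the release branch (a sketch against a local clone of the glusterfs source tree; the --grep patterns are the patch subjects listed above):

# From a clone of https://github.com/gluster/glusterfs
git fetch origin release-3.12
# The --grep patterns are ORed; all three subjects should appear once merged.
git log --oneline --fixed-strings origin/release-3.12 \
    --grep="afr: don't treat all cases all bricks being blamed as split-brain" \
    --grep="afr: capture the correct errno in post-op quorum check" \
    --grep="afr: add quorum checks in post-op"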

Comment 2 Worker Ant 2018-07-06 01:44:09 UTC
REVIEW: https://review.gluster.org/20467 (afr: add quorum checks in pre-op) posted (#1) for review on release-3.12 by Ravishankar N

Comment 3 Worker Ant 2018-07-06 09:13:50 UTC
COMMIT: https://review.gluster.org/20467 committed in release-3.12 by "Ravishankar N" <ravishankar> with a commit message- afr: add quorum checks in pre-op

Backport of https://review.gluster.org/#/c/19781/

Problem:
We seem to be winding the FOP if pre-op did not succeed on quorum bricks
and then failing the FOP with EROFS since the fop did not meet quorum.
This essentially masks the actual error due to which pre-op failed. (See
BZ).

Fix:
Skip FOP phase if pre-op quorum is not met and go to post-op.

Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9
BUG: 1597154
Signed-off-by: Ravishankar N <ravishankar>

Comment 4 Jiffin 2018-08-20 07:01:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.12, please open a new bug report.

glusterfs-3.12.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-July/000105.html
[2] https://www.gluster.org/pipermail/gluster-users/

