Bug 1805047

Summary: I/O error on writes to a disperse volume when replace-brick is executed
Product: [Community] GlusterFS
Reporter: Pranith Kumar K <pkarampu>
Component: disperse
Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: 5
CC: bugs, jahernan, pasik
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-5.12
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1699866
Environment:
Last Closed: 2020-02-24 07:46:24 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1699866
Bug Blocks: 1699917, 1732776, 1732793, 1806848

Description Pranith Kumar K 2020-02-20 07:33:11 UTC
+++ This bug was initially created as a clone of Bug #1699866 +++

Description of problem:

An I/O error occurs when files are being created and written to a disperse volume while a replace-brick is executed.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Always


Steps to Reproduce:
1. Create a disperse volume
2. Kill one brick
3. Open an fd on a file inside a subdirectory
4. Do a replace brick of the killed brick
5. Write to the previously opened fd (a command-line sketch of these steps follows)
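
A minimal command-line sketch of these steps; the host name (server1), brick paths (/bricks/*), volume name (disperse-vol) and mount point (/mnt/disperse) are placeholders, not taken from the original report:

  # 1. Create and mount a 2+1 disperse volume (all bricks on one host, hence 'force').
  gluster volume create disperse-vol disperse 3 redundancy 1 \
      server1:/bricks/brick1 server1:/bricks/brick2 server1:/bricks/brick3 force
  gluster volume start disperse-vol
  mount -t glusterfs server1:/disperse-vol /mnt/disperse

  # 2. Kill one brick process (its PID can also be read from 'gluster volume status').
  pkill -9 -f 'glusterfsd.*brick3'

  # 3. Open an fd on a file inside a subdirectory; the file is created while brick3 is down.
  mkdir -p /mnt/disperse/dir
  exec 5>/mnt/disperse/dir/testfile

  # 4. Replace the killed brick with a fresh one.
  gluster volume replace-brick disperse-vol server1:/bricks/brick3 \
      server1:/bricks/brick3-new commit force

  # 5. Write through the still-open fd; before the fix this failed with EIO.
  echo data >&5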

Actual results:

The write fails with I/O error

Expected results:

The write should succeed

Additional info:

--- Additional comment from Xavi Hernandez on 2019-04-15 12:22:58 UTC ---

The problem happens because a reopen is attempted on all available bricks, and any error found during that reopen is propagated to the main fop.

Basically, when a write fop is sent and ec discovers that a brick has come back up but doesn't have the fd open, it tries to open it there. The file may have been created while that brick was down and not yet recovered by self-heal, in which case the open fails with ENOENT. This should be fine, since the other bricks can successfully process the write with enough quorum, but the error is not ignored: it is propagated to the main fop, causing the write to fail before it is even attempted.
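
To make the failing open concrete, the state right after the replace-brick can be checked from the brick backends (using the same placeholder paths as in the reproducer sketch above): the file already exists on the surviving bricks but is still missing on the freshly replaced one until self-heal recreates it, so the reopen attempted there returns ENOENT.

  # Missing on the replaced brick until self-heal runs.
  stat /bricks/brick3-new/dir/testfile   # fails: No such file or directory (ENOENT)
  # Present on a surviving brick.
  stat /bricks/brick1/dir/testfile
  # The file is listed as pending heal.
  gluster volume heal disperse-vol info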

--- Additional comment from Worker Ant on 2019-04-15 12:24:52 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) posted (#2) for review on master by Xavi Hernandez

--- Additional comment from Worker Ant on 2019-04-16 09:31:09 UTC ---

REVIEW: https://review.gluster.org/22574 (tests: Heal should fail when read/write fails) merged (#2) on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2019-04-23 11:29:39 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) merged (#7) on master by Pranith Kumar Karampuri

Comment 1 Worker Ant 2020-02-20 08:31:47 UTC
REVIEW: https://review.gluster.org/24144 (cluster/ec: fix fd reopen) posted (#1) for review on release-5 by Pranith Kumar Karampuri

Comment 2 Worker Ant 2020-02-24 07:46:24 UTC
REVIEW: https://review.gluster.org/24144 (cluster/ec: fix fd reopen) merged (#1) on release-5 by Pranith Kumar Karampuri

Comment 3 hari gowtham 2020-03-02 08:36:33 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.12, please open a new bug report.

glusterfs-5.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2020-March/037797.html
[2] https://www.gluster.org/pipermail/gluster-users/