Bug 1732776

Summary: I/O error on writes to a disperse volume when replace-brick is executed
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Ashish Pandey <aspandey>
Component: disperse
Assignee: Ashish Pandey <aspandey>
Status: CLOSED ERRATA
QA Contact: Upasana <ubansal>
Severity: urgent
Docs Contact:
Priority: urgent
Version: rhgs-3.4
CC: amukherj, bkunal, bugs, jahernan, pasik, rcyriac, rhs-bugs, sankarshan, sheggodu, storage-qa-internal, ubansal
Target Milestone: ---
Keywords: Reopened, ZStream
Target Release: RHGS 3.4.z Async Update
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 1699866
Clones: 1732793 (view as bug list)
Environment:
Last Closed: 2019-08-16 11:04:34 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1699866, 1805047
Bug Blocks: 1663375, 1732793

Description Ashish Pandey 2019-07-24 10:35:14 UTC
+++ This bug was initially created as a clone of Bug #1699866 +++

Description of problem:

An I/O error occurs when files are being created and written to a disperse volume while a replace-brick is executed.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Always


Steps to Reproduce:
1. Create a disperse volume
2. Kill one brick
3. Open fd on a subdirectory
4. Do a replace brick of the killed brick
5. Write on the previous file
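(illustration, not part of the original report) One possible reading of steps 3-5 can be made concrete with a minimal C helper: it opens an fd on a file under a subdirectory of the mount, waits while the replace-brick is run from another shell, and then writes to the same fd. The mount path below is a placeholder.

/* repro.c - hypothetical helper, not part of the original report.
 * Holds an fd open across the replace-brick step and then writes to it
 * (one reading of steps 3-5 above). The path below is a placeholder. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : "/mnt/ec/subdir/testfile";
    char buf[4096];

    memset(buf, 'x', sizeof(buf));

    /* Step 3: open an fd on a file inside a subdirectory of the mount
     * (the brick from step 2 is already down at this point). */
    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Step 4 happens outside this program, e.g.
     *   gluster volume replace-brick <volname> <old-brick> <new-brick> commit force
     * while this helper waits. */
    printf("fd is open on %s; run the replace-brick now, then press Enter\n", path);
    getchar();

    /* Step 5: write on the previously opened fd. Before the fix this
     * failed with EIO; after the fix it should succeed. */
    if (write(fd, buf, sizeof(buf)) < 0)
        fprintf(stderr, "write failed: %s\n", strerror(errno));
    else
        printf("write succeeded\n");

    close(fd);
    return 0;
}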

Actual results:

The write fails with I/O error

Expected results:

The write should succeed

Additional info:

--- Additional comment from Xavi Hernandez on 2019-04-15 12:22:58 UTC ---

The problem happens because a reopen is attempted on all available bricks and any error it finds is propagated to the main fop.

Basically, when a write fop is sent and ec discovers that a brick has come back up but does not have the fd open, it tries to open it. It can happen that the file was created while the brick was down and self-heal has not yet recovered it, in which case the open fails with ENOENT. This should be fine, since the remaining bricks still provide enough quorum to process the write successfully, but the error is not ignored: it is propagated to the main fop, causing it to fail even before the write is attempted.
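The intended behaviour can be sketched with a small self-contained C example. This is not the actual ec code; the names below (brick_state, can_dispatch_write, MIN_QUORUM) are invented for illustration. The point is that a reopen failure with ENOENT on a just-replaced brick should only exclude that brick from the operation, and the fop should fail only when quorum is actually lost:

/* Illustrative sketch only: these types and names are invented and do not
 * exist in the ec xlator; they just model the decision described above. */
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MIN_QUORUM 4 /* e.g. the data-brick count of a 4+2 disperse volume */

struct brick_state {
    bool up;           /* brick is reachable */
    int  fd;           /* fd open on this brick, or -1 */
    int  reopen_errno; /* errno from the last reopen attempt, 0 if none */
};

/* Decide whether the write fop may be dispatched. A reopen that failed with
 * ENOENT (file not yet self-healed on a freshly replaced brick) only removes
 * that brick from the set of participants; it is not treated as an error of
 * the fop itself. The fop fails only if quorum is lost. */
int can_dispatch_write(const struct brick_state *bricks, size_t nbricks)
{
    size_t usable = 0;

    for (size_t i = 0; i < nbricks; i++) {
        if (bricks[i].up && bricks[i].fd >= 0)
            usable++;
        /* Buggy behaviour: propagate bricks[i].reopen_errno to the main fop
         * as soon as any reopen error is seen. Fixed behaviour: skip it. */
    }

    return usable >= MIN_QUORUM ? 0 : -EIO;
}

int main(void)
{
    struct brick_state bricks[6] = {
        { true, 3, 0 }, { true, 4, 0 }, { true, 5, 0 },
        { true, 6, 0 }, { true, 7, 0 },
        { true, -1, ENOENT }, /* freshly replaced brick, file not healed yet */
    };

    /* 5 usable bricks >= quorum of 4, so the write goes ahead: prints 0. */
    printf("%d\n", can_dispatch_write(bricks, 6));
    return 0;
}

The patch referenced in the following comments (https://review.gluster.org/22558, "cluster/ec: fix fd reopen") addresses this behaviour in the ec fd-reopen path.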

--- Additional comment from Worker Ant on 2019-04-15 12:24:52 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) posted (#2) for review on master by Xavi Hernandez

--- Additional comment from Worker Ant on 2019-04-16 09:31:09 UTC ---

REVIEW: https://review.gluster.org/22574 (tests: Heal should fail when read/write fails) merged (#2) on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2019-04-23 11:29:39 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) merged (#7) on master by Pranith Kumar Karampuri

Comment 13 errata-xmlrpc 2019-08-16 11:04:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2514