Bug 1732776 - I/O error on writes to a disperse volume when replace-brick is executed
Summary: I/O error on writes to a disperse volume when replace-brick is executed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Ashish Pandey
QA Contact: Upasana
URL:
Whiteboard:
Depends On: 1699866 1805047
Blocks: 1663375 1732793
 
Reported: 2019-07-24 10:35 UTC by Ashish Pandey
Modified: 2020-02-20 07:33 UTC
CC List: 11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1699866
Clones: 1732793
Environment:
Last Closed: 2019-08-16 11:04:34 UTC
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2019:2514 (last updated 2019-08-16 11:04:45 UTC)

Description Ashish Pandey 2019-07-24 10:35:14 UTC
+++ This bug was initially created as a clone of Bug #1699866 +++

Description of problem:

An I/O error occurs when files are created and written on a disperse volume while a replace-brick is being executed.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Always


Steps to Reproduce:
1. Create a disperse volume
2. Kill one brick
3. Open an fd on a file inside a subdirectory
4. Perform a replace-brick of the killed brick
5. Write to the previously opened file (a shell sketch of these steps is included under Additional info below)

Actual results:

The write fails with an I/O error.

Expected results:

The write should succeed

Additional info:
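
A minimal shell sketch of the reproduction steps above, assuming a 2+1 disperse volume; the volume name, hostnames, brick paths and mount point are illustrative placeholders, not values taken from this report:

    # Create, start and mount a 2+1 disperse volume (placeholder hosts/paths).
    gluster volume create ecvol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 force
    gluster volume start ecvol
    mount -t glusterfs server1:/ecvol /mnt/ecvol

    # Kill one brick process; the PID extraction from the XML status
    # output is only illustrative.
    BRICK_PID=$(gluster --xml volume status ecvol server3:/bricks/b3 \
                | sed -n 's/.*<pid>\([0-9]*\)<\/pid>.*/\1/p' | head -n1)
    kill -9 "$BRICK_PID"

    # While the brick is down, create a subdirectory and keep an fd open
    # on a new file inside it (self-heal has not yet created the file on
    # the dead brick).
    mkdir -p /mnt/ecvol/dir
    exec 5>/mnt/ecvol/dir/testfile

    # Replace the killed brick with a fresh one.
    gluster volume replace-brick ecvol server3:/bricks/b3 \
        server3:/bricks/b3_new commit force

    # Write through the still-open fd; before the fix this failed with
    # an I/O error.
    echo data >&5
    exec 5>&-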

--- Additional comment from Xavi Hernandez on 2019-04-15 12:22:58 UTC ---

The problem happens because a reopen is attempted on all available bricks and any error it finds is propagated to the main fop.

Basically, when a write fop is sent and ec discovers that a brick has come back up but does not have the fd open on it, ec tries to open the fd there. It can happen that the file was created while that brick was down and self-heal has not yet recreated it, in which case the open fails with ENOENT. This should be fine, since the remaining bricks can still process the write with enough quorum, but the error is not ignored: it is propagated to the main fop, causing the fop to fail even before the write is attempted.

--- Additional comment from Worker Ant on 2019-04-15 12:24:52 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) posted (#2) for review on master by Xavi Hernandez

--- Additional comment from Worker Ant on 2019-04-16 09:31:09 UTC ---

REVIEW: https://review.gluster.org/22574 (tests: Heal should fail when read/write fails) merged (#2) on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2019-04-23 11:29:39 UTC ---

REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) merged (#7) on master by Pranith Kumar Karampuri

Comment 13 errata-xmlrpc 2019-08-16 11:04:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2514

