Bug 1699866 - I/O error on writes to a disperse volume when replace-brick is executed
Summary: I/O error on writes to a disperse volume when replace-brick is executed
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1699917 1732776 1732793 1805047
 
Reported: 2019-04-15 12:09 UTC by Xavi Hernandez
Modified: 2020-02-20 07:33 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Clones: 1699917 1732776 1805047
Environment:
Last Closed: 2019-04-23 11:29:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Links
System ID                 Private  Priority  Status  Summary                                        Last Updated
Gluster.org Gerrit 22558  0        None      Merged  cluster/ec: fix fd reopen                      2019-04-23 11:29:38 UTC
Gluster.org Gerrit 22574  0        None      Open    tests: Heal should fail when read/write fails  2019-04-16 09:31:07 UTC

Description Xavi Hernandez 2019-04-15 12:09:03 UTC
Description of problem:

An I/O error occurs when files are being created and written on a disperse volume while a replace-brick operation is executed.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Always


Steps to Reproduce:
1. Create a disperse volume
2. Kill one brick
3. Open an fd on a file inside a subdirectory
4. Run replace-brick on the killed brick
5. Write to the file through that fd
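
A minimal reproduction sketch, assuming a 3-brick disperse volume named "testvol" served from localhost and using the libgfapi C API; the brick paths, hostnames and file names below are hypothetical, and steps 2 and 4 are performed out of band with the gluster CLI:

/* repro.c - build with: gcc repro.c -o repro -lgfapi */
#include <stdio.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");
    glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
    if (glfs_init(fs) != 0)
        return 1;

    /* Step 2 (out of band): kill one brick process, e.g.
     *   kill -9 $(pgrep -f '/bricks/b0')                        */

    /* Step 3: open an fd on a file inside a subdirectory */
    glfs_mkdir(fs, "/dir", 0755);
    glfs_fd_t *fd = glfs_creat(fs, "/dir/file", O_WRONLY, 0644);

    /* Step 4 (out of band): replace the killed brick, e.g.
     *   gluster volume replace-brick testvol host:/bricks/b0 \
     *       host:/bricks/b0_new commit force                    */

    /* Step 5: write through the already-open fd */
    char buf[128] = "some data";
    ssize_t n = glfs_write(fd, buf, sizeof(buf), 0);
    printf("write returned %zd\n", n); /* -1 with errno EIO before the fix */

    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}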

Actual results:

The write fails with I/O error

Expected results:

The write should succeed

Additional info:

Comment 1 Xavi Hernandez 2019-04-15 12:22:58 UTC
The problem happens because a reopen is attempted on all available bricks, and any error encountered during that reopen is propagated to the main fop.

Basically, when a write fop is sent and ec discovers a brick that has come back up but does not have the fd open, it tries to open it there. If the file was created while that brick was down and self-heal has not yet recovered it, the open fails with ENOENT. This should be fine, since the remaining bricks provide enough quorum to process the write successfully, but the error is not ignored: it is propagated to the main fop, causing it to fail before the write is even attempted.
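
A minimal standalone sketch of this decision logic, written as an illustration rather than actual GlusterFS source (the brick count, quorum value and helper names are made up):

/* reopen_sim.c - simulates how ec handles a failed fd reopen */
#include <stdio.h>
#include <errno.h>

#define BRICKS 3 /* disperse 3 */
#define QUORUM 2 /* redundancy 1: two good bricks are enough */

/* brick 0 came back up, but self-heal has not recreated the file yet */
static int reopen_on_brick(int brick)
{
    return (brick == 0) ? -ENOENT : 0;
}

static int write_fop(int propagate_reopen_errors)
{
    int usable = 0;

    for (int b = 0; b < BRICKS; b++) {
        if (reopen_on_brick(b) < 0) {
            if (propagate_reopen_errors)
                return -EIO; /* buggy: the reopen error fails the fop */
            continue;        /* fixed: drop this brick and keep going */
        }
        usable++;
    }

    /* the write itself only needs QUORUM usable bricks */
    return (usable >= QUORUM) ? 0 : -EIO;
}

int main(void)
{
    printf("before fix: %d\n", write_fop(1)); /* -EIO, fails before writing */
    printf("after fix:  %d\n", write_fop(0)); /* 0, quorum met */
    return 0;
}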

Comment 2 Worker Ant 2019-04-15 12:24:52 UTC
REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) posted (#2) for review on master by Xavi Hernandez

Comment 3 Worker Ant 2019-04-16 09:31:09 UTC
REVIEW: https://review.gluster.org/22574 (tests: Heal should fail when read/write fails) merged (#2) on master by Pranith Kumar Karampuri

Comment 4 Worker Ant 2019-04-23 11:29:39 UTC
REVIEW: https://review.gluster.org/22558 (cluster/ec: fix fd reopen) merged (#7) on master by Pranith Kumar Karampuri

