Bug 1699917 - I/O error on writes to a disperse volume when replace-brick is executed
Summary: I/O error on writes to a disperse volume when replace-brick is executed
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On: 1699866 1805047
Blocks: glusterfs-6.1
 
Reported: 2019-04-15 12:31 UTC by Xavi Hernandez
Modified: 2020-02-20 07:33 UTC
CC: 2 users

Fixed In Version:
Clone Of: 1699866
Environment:
Last Closed: 2019-05-08 13:55:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Links
Gluster.org Gerrit 22608: cluster/ec: fix fd reopen (Merged, last updated 2019-05-08 13:55:36 UTC)

Description Xavi Hernandez 2019-04-15 12:31:35 UTC
+++ This bug was initially created as a clone of Bug #1699866 +++

Description of problem:

An I/O error occurs when files are being created and written on a disperse volume while a replace-brick is executed.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Always


Steps to Reproduce:
1. Create a disperse volume
2. Kill one brick
3. Open an fd on a file inside a subdirectory
4. Do a replace brick of the killed brick
5. Write to the previously opened fd (a reproduction sketch follows these steps)
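
As an illustration, the following is a minimal reproduction helper. It is only a sketch: the volume name (dvol), brick paths, and mount point (/mnt/dvol) are assumptions, and the gluster commands in the comments must be adapted to the actual environment.

/* Preparation (outside this program):
 *   gluster volume create dvol disperse 3 redundancy 1 node:/bricks/b{0,1,2} force
 *   gluster volume start dvol
 *   mount -t glusterfs node:/dvol /mnt/dvol
 *   kill -9 <pid of one brick>          (pids are shown by 'gluster volume status dvol')
 * Run this program; while it waits, execute the replace-brick:
 *   gluster volume replace-brick dvol node:/bricks/b2 node:/bricks/b2-new commit force
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Create a subdirectory and open a file in it while one brick is down. */
    if (mkdir("/mnt/dvol/subdir", 0755) < 0 && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }
    int fd = open("/mnt/dvol/subdir/file", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    printf("fd is open; run the replace-brick command now, then press Enter\n");
    getchar();

    /* Before the fix, this write returned EIO because the fd reopen on the
     * replaced (still unhealed) brick failed with ENOENT and that error was
     * propagated to the write fop. */
    if (write(fd, "hello\n", 6) < 0)
        perror("write");
    else
        printf("write succeeded\n");

    close(fd);
    return 0;
}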

Actual results:

The write fails with an I/O error (EIO)

Expected results:

The write should succeed

Additional info:

--- Additional comment from Xavi Hernandez on 2019-04-15 14:22:58 CEST ---

The problem happens because a reopen of the fd is attempted on all available bricks, and any error encountered there is propagated to the main fop.

Basically, when a write fop is sent and ec discovers a brick that has come back up but does not have the fd open, it tries to reopen it. If the file was created while that brick was down and self-heal has not yet recovered it, the open fails with ENOENT. This should be fine, since the remaining bricks can still process the write with enough quorum, but the error is not ignored: it is propagated to the main fop, causing the write to fail before it is even attempted.
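
To illustrate the idea (this is only a hypothetical sketch, not the actual ec translator code; all names below are invented for illustration): an ENOENT from the reopen on a revived but not-yet-healed brick should only mask that brick for the current fop, and the fop should fail only if the remaining bricks no longer provide quorum.

/* Hypothetical sketch of the intended error handling; not GlusterFS code. */
#include <errno.h>
#include <stdint.h>

typedef struct {
    uint32_t good_mask;  /* bricks with a usable open fd */
    uint32_t data;       /* fragments required for quorum (k out of k+r) */
    int32_t  op_errno;   /* error reported for the main fop, 0 if none */
} fop_state_t;

static unsigned int count_bits(uint32_t v)
{
    unsigned int c = 0;
    for (; v != 0; v &= v - 1)
        c++;
    return c;
}

/* Called once per brick with the result of the fd reopen attempt. */
static void reopen_done(fop_state_t *fop, uint32_t brick_bit, int32_t err)
{
    if (err == 0) {
        fop->good_mask |= brick_bit;   /* reopen worked, brick stays usable */
        return;
    }

    /* The file may simply not exist yet on this brick (self-heal pending),
     * so drop the brick from this fop instead of propagating the error. */
    fop->good_mask &= ~brick_bit;

    /* Fail the fop only if quorum is no longer met. */
    if (count_bits(fop->good_mask) < fop->data)
        fop->op_errno = EIO;
}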

Comment 1 Worker Ant 2019-04-23 12:08:20 UTC
REVIEW: https://review.gluster.org/22608 (cluster/ec: fix fd reopen) posted (#1) for review on release-6 by Xavi Hernandez

Comment 2 Worker Ant 2019-05-08 13:55:37 UTC
REVIEW: https://review.gluster.org/22608 (cluster/ec: fix fd reopen) merged (#2) on release-6 by Shyamsundar Ranganathan

