Bug 1167793 - fsync on write-behind doesn't wait for pending writes when an error is encountered
Summary: fsync on write-behind doesn't wait for pending writes when an error is encountered
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: write-behind
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-25 12:37 UTC by Xavi Hernandez
Modified: 2015-05-15 17:09 UTC
CC: 3 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-15 17:09:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Xavi Hernandez 2014-11-25 12:37:51 UTC
Description of problem:

When multiple writes are sent in parallel and one of them fails, a subsequent fsync should wait until all of the other pending writes have finished.
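
To illustrate the application-visible contract, here is a minimal C sketch (the mount path and buffer sizes are hypothetical, and this is not GlusterFS code): with write-behind enabled, the write() calls return immediately, and a later fsync() is expected to block until every queued write has completed before reporting either success or the background error.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Assumed FUSE mount point of a quota-limited directory. */
    int fd = open("/mnt/gluster/limited/file", O_CREAT | O_WRONLY, 0644);
    if (fd < 0)
        return 1;

    char buf[65536];
    memset(buf, 'x', sizeof(buf));

    /* With write-behind, these writes return immediately and are
     * flushed in the background; some of them may later fail
     * (e.g. with EDQUOT once the quota is exceeded). */
    for (int i = 0; i < 64; i++)
        if (write(fd, buf, sizeof(buf)) < 0)
            break;

    /* fsync must block until every queued write has completed and only
     * then report success or the background error.  The bug is that,
     * once one write has failed, fsync returns while other writes are
     * still in flight. */
    if (fsync(fd) < 0)
        fprintf(stderr, "fsync failed: %s\n", strerror(errno));

    close(fd);
    return 0;
}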

Version-Release number of selected component (if applicable): mainline


How reproducible:

Always reproducible on NetBSD, though the problem is not easily visible from an application unless one of its side effects is checked.

Steps to Reproduce:
1. Create a dispersed volume and set a quota limit on a directory.
2. Write a file larger than the assigned quota into that directory (the write will fail).
3. Immediately after the failure (this must be done very quickly, e.g. from a script), delete the file (see the reproducer sketch after these steps).
4. glusterfs crashes: the file is closed after the fsync, some of the still-pending writes return ENOENT, and DHT does not expect that error (it is not clear why ENOENT is returned).
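
A rough client-side reproducer for steps 2-4 might look like the following sketch; the mount path /mnt/ec/limited and the write sizes are assumptions, not taken from an actual test setup.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/ec/limited/bigfile";   /* assumed mount path */
    static char buf[1 << 20];
    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(buf, 'x', sizeof(buf));

    /* Write well past the quota limit; write-behind queues the writes
     * and the quota failure surfaces in the background. */
    for (int i = 0; i < 256; i++)
        if (write(fd, buf, sizeof(buf)) < 0)
            break;

    close(fd);          /* triggers the flush/fsync on the way out */

    /* Deleting the file immediately, while writes are still pending,
     * makes the remaining writes fail with ENOENT, which DHT does not
     * expect, and the client crashes. */
    unlink(path);
    return 0;
}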

Actual results:

Control is returned to the user before all pending writes have finished.

Expected results:

The call should block the application until all pending writes are fully processed.
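
As a toy model of that expectation (not the actual write-behind xlator code; the names wb_inode_t, pending and op_errno here are purely illustrative), fsync should drain the whole list of pending writes before returning, rather than short-circuiting as soon as an error has been recorded:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  drained;   /* signalled when pending drops to 0 */
    int             pending;   /* writes queued or in flight        */
    int             op_errno;  /* first background write error      */
} wb_inode_t;

/* Called when a background write completes, successfully or not. */
void wb_write_done(wb_inode_t *wb, int err)
{
    pthread_mutex_lock(&wb->lock);
    if (err && !wb->op_errno)
        wb->op_errno = err;            /* remember the first error */
    if (--wb->pending == 0)
        pthread_cond_broadcast(&wb->drained);
    pthread_mutex_unlock(&wb->lock);
}

/* fsync: wait for *all* pending writes before returning, even if an
 * error has already been recorded.  Returning as soon as op_errno is
 * set (the buggy behaviour) lets the application close or unlink the
 * file while writes are still in flight. */
int wb_fsync_wait(wb_inode_t *wb)
{
    int err;

    pthread_mutex_lock(&wb->lock);
    while (wb->pending > 0)            /* do not short-circuit on error */
        pthread_cond_wait(&wb->drained, &wb->lock);
    err = wb->op_errno;
    pthread_mutex_unlock(&wb->lock);

    return err;                        /* 0 on success, errno otherwise */
}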

Additional info:

Comment 1 Niels de Vos 2015-05-15 17:09:00 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

