Bug 1044008

Summary: fclose doesn't cause fsync on fuse mount
Product: [Community] GlusterFS
Reporter: Lukas Bezdicka <social>
Component: fuse
Assignee: bugs <bugs>
Status: CLOSED EOL
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: pre-release
CC: bugs, gluster-bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-22 15:40:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Lukas Bezdicka 2013-12-17 15:31:14 UTC
Description of problem:
In standard POSIX, fsync isn't required on close, but from an application's point of view it is fairly safe to expect that everything was written to the file and that another application can read it right away, since the changes are already in the VFS. With a GlusterFS FUSE mount we call fclose and then read the file from another node, and the read fails because the write hasn't actually happened yet. On the application side this could be solved by calling fsync before fclose, but we use quite a lot of the standard distro stack, and that would mean patching all of it. The other option would be the way NFS solves this issue: flushing data on close by default.
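A minimal sketch of the application-side workaround mentioned above: force the data to stable storage with fsync before closing, so a reader on another node sees it. The mount point and file name here are just examples, not from the report.

    /* Writer on node A: fsync before close so node B can read the data right away. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello from node A\n";
        int fd = open("/mnt/gluster/shared.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, strlen(msg)) < 0) {
            perror("write");
            close(fd);
            return 1;
        }
        if (fsync(fd) < 0)   /* the extra step the stock distro stack doesn't do */
            perror("fsync");
        close(fd);
        return 0;
    }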

Version-Release number of selected component (if applicable):
3.4.1

How reproducible:
Always.

Steps to Reproduce:
1. Mount the same GlusterFS volume on two nodes.
2. On the first node, write something to a file and close it.
3. At the same moment, read the file from the other node; the data is not there yet (see the sketch below).
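A minimal reproduction sketch for step 2, assuming the volume is mounted at /mnt/gluster on both nodes (path and file name are examples). Run it on node A and immediately run `cat /mnt/gluster/repro.txt` on node B.

    /* Writer on node A: close without fsync; node B may not see the data yet. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/gluster/repro.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        const char *msg = "written on node A\n";
        if (write(fd, msg, strlen(msg)) < 0)
            perror("write");
        close(fd);   /* no fsync: the read on node B can race ahead of the write */
        return 0;
    }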

Actual results:
The data hasn't landed on the other node yet (it shows up after roughly 1 second).


Expected results:
close should take a bit longer because the underlying filesystem should flush (fsync) the data, but the read from the other node should then succeed, since the write has completed.

Additional info:

Comment 1 Kaleb KEITHLEY 2015-10-22 15:40:20 UTC
pre-release version is ambiguous and about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.