Description of problem:
posix_writev performs (a) a stat (prestat) on the given fd, (b) the requested write, and (c) a stat (poststat) on the given fd, all without holding a lock. When two or more io-threads write to the same file in parallel, the stats gathered per write may not truly reflect the change (in the number of bytes and blocks) caused by that individual write. Sometimes it is useful for translators above (like sharding) to know the exact change in the number of bytes per write. A translator interested in this behavior could pass a flag in xdata to posix, instructing it to return the precise delta in bytes and block count. Posix could guarantee this by first acquiring inode->lock, performing (a), (b) and (c), and then releasing the mutex.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
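For illustration, here is a minimal standalone sketch of the prestat/write/poststat pattern described above, using plain POSIX primitives rather than GlusterFS internals. The function name atomic_writev(), the file-scope inode_lock mutex, and the delta out-parameters are hypothetical stand-ins; the actual posix xlator would key the lock off inode->lock and enable the behavior via a flag in xdata, whose name is not specified here.

/* Sketch only: serializing prestat + write + poststat under a mutex
 * so the computed deltas reflect exactly this write, even when other
 * threads write to the same file concurrently. */
#define _GNU_SOURCE   /* for pwritev() on glibc */
#include <pthread.h>
#include <sys/stat.h>
#include <sys/uio.h>

/* Hypothetical stand-in for the per-inode lock (inode->lock) the
 * bug proposes to hold across the three steps. */
static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

static ssize_t
atomic_writev(int fd, const struct iovec *iov, int iovcnt, off_t offset,
              off_t *delta_bytes, blkcnt_t *delta_blocks)
{
        struct stat prestat, poststat;
        ssize_t     ret;

        pthread_mutex_lock(&inode_lock);   /* (a), (b), (c) are now atomic
                                              w.r.t. other writers */
        if (fstat(fd, &prestat) < 0) {     /* (a) prestat */
                pthread_mutex_unlock(&inode_lock);
                return -1;
        }

        ret = pwritev(fd, iov, iovcnt, offset);   /* (b) the write */

        if (ret >= 0 && fstat(fd, &poststat) == 0) {   /* (c) poststat */
                /* Precise change caused by this write alone. */
                *delta_bytes  = poststat.st_size   - prestat.st_size;
                *delta_blocks = poststat.st_blocks - prestat.st_blocks;
        }

        pthread_mutex_unlock(&inode_lock);
        return ret;
}

Without the mutex, a concurrent writer could change st_size or st_blocks between steps (a) and (c), so the deltas would mix the effects of several writes; holding the lock across all three steps is what makes the reported delta attributable to a single write.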
REVIEW: http://review.gluster.org/11345 (storage/posix: Introduce flag instructing posix to perform prestat, writev and poststat atomically) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/11345 (storage/posix: Introduce flag instructing posix to perform prestat, writev and poststat atomically) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/11345 (storage/posix: Introduce flag instructing posix to perform prestat, writev and poststat atomically) posted (#3) for review on master by Krutika Dhananjay (kdhananj)
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed. Hence, this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user