Description of problem:
While reading the code, I noticed that afr_final_errno() does not treat op_ret > 0 as success. In inode write fops (where this function is called by __afr_inode_write_finalize()), op_ret can be > 0 on success. So if the inode write fop fails on one or more subvols, there is a remote possibility that AFR, instead of choosing the most severe errno from the set of errnos returned by the subvolumes that saw a failure, ends up picking a junk errno from a subvol where the fop actually succeeded.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
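To make the failure mode concrete, here is a minimal standalone sketch, not the actual AFR source: the reply layout, severity() and final_errno() below are illustrative stand-ins for AFR's per-subvol replies, afr_higher_errno() and afr_final_errno(). It shows how treating only op_ret == 0 as success would let a succeeded subvol's stale errno field win, and how checking op_ret >= 0 avoids that.

/*
 * Minimal sketch (assumed names, not GlusterFS code): pick the "final"
 * errno from per-subvolume replies, skipping every reply whose op_ret
 * indicates success. If only op_ret == 0 were treated as success, a reply
 * with op_ret > 0 (e.g. a writev that returned the byte count) would be
 * mistaken for a failure and its junk errno field could be chosen.
 */
#include <errno.h>
#include <stdio.h>

struct reply {
    int op_ret;   /* >= 0 on success (writev returns bytes written) */
    int op_errno; /* meaningful only when op_ret < 0 */
};

/* Hypothetical severity ordering; higher wins. Placeholder for AFR's own
 * errno-ranking logic. */
static int severity(int err)
{
    switch (err) {
    case ENOSPC: return 3;
    case EIO:    return 2;
    default:     return err ? 1 : 0;
    }
}

static int final_errno(const struct reply *replies, int count)
{
    int op_errno = 0;

    for (int i = 0; i < count; i++) {
        if (replies[i].op_ret >= 0) /* the fix: >= 0, not == 0, is success */
            continue;
        if (severity(replies[i].op_errno) > severity(op_errno))
            op_errno = replies[i].op_errno;
    }
    return op_errno;
}

int main(void)
{
    /* Subvol 0 succeeded (wrote 4096 bytes) but its errno field holds junk;
     * subvol 1 failed with ENOSPC. The final errno must be ENOSPC. */
    struct reply replies[2] = {
        { .op_ret = 4096, .op_errno = 95 /* stale/junk value */ },
        { .op_ret = -1,   .op_errno = ENOSPC },
    };

    printf("final errno: %d\n", final_errno(replies, 2));
    return 0;
}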
REVIEW: http://review.gluster.org/10946 (cluster/afr: Treat op_ret >= 0 as success in afr_final_errno()) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/10946 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 9e1bb640983f72858aeabd793bbb7fc8b5c71b09
Author: Krutika Dhananjay <kdhananj>
Date:   Wed May 27 19:03:12 2015 +0530

    cluster/afr: Treat op_ret >= 0 as success in afr_final_errno()

    Change-Id: I7ec29428b7f7ef249014f948a5d616bfb8aaf80d
    BUG: 1225491
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/10946
    Tested-by: NetBSD Build System
    Reviewed-by: Ravishankar N <ravishankar>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has been closed. Hence closing this mainline BZ as well.
This bug is being closed because a release is available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user