Backport of the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1361678 to release-3.8.
REVIEW: http://review.gluster.org/15233 (cluster/afr: copy loc before passing to syncop) posted (#1) for review on release-3.8 by Oleksandr Natalenko (oleksandr)
COMMIT: http://review.gluster.org/15233 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)
------
commit 036c4fcab82d4e69bf3e53d93f7da9b0b1dd900c
Author: Pranith Kumar K <pkarampu>
Date:   Tue Aug 2 15:19:00 2016 +0530

    cluster/afr: copy loc before passing to syncop

    Problem:
    When io-threads is enabled on the client side, io-threads destroys
    the call-stub in which the loc is stored as soon as the call stack
    unwinds. Because afr creates the syncop with the address of the loc
    passed in setxattr, io-threads will already have freed the call-stub
    by the time the syncop tries to access it. This leads to a crash.

    Fix:
    Copy the loc into frame->local and use its address instead.

    > Reviewed-on: http://review.gluster.org/15070
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Reviewed-by: Ravishankar N <ravishankar>

    BUG: 1369042
    Change-Id: I16987e491e24b0b4e3d868a6968e802e47c77f7a
    Signed-off-by: Pranith Kumar K <pkarampu>
    Signed-off-by: Oleksandr Natalenko <oleksandr>
    Reviewed-on: http://review.gluster.org/15233
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
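To make the lifetime problem concrete, here is a minimal C sketch of the pattern the fix describes. It is an illustration under simplified assumptions, not the actual patch: loc_copy(), loc_wipe(), and frame->local are real GlusterFS primitives, but the stripped-down types and the helper names afr_prepare_syncop()/afr_frame_destroy() are invented for this example.

#include <stdlib.h>
#include <string.h>

typedef struct { char *path; } loc_t;          /* simplified loc_t       */
typedef struct { void *local; } call_frame_t;  /* simplified call frame  */
typedef struct { loc_t loc; } afr_local_t;     /* simplified afr local   */

/* Simplified stand-ins for GlusterFS's real loc_copy()/loc_wipe(). */
static int loc_copy(loc_t *dst, const loc_t *src)
{
    dst->path = strdup(src->path);   /* deep copy: no pointer sharing */
    return dst->path ? 0 : -1;
}

static void loc_wipe(loc_t *loc)
{
    free(loc->path);
    loc->path = NULL;
}

/*
 * The bug: the syncop was given the address of the loc living in the
 * call-stub. Once the call stack unwinds, io-threads destroys the stub
 * and that pointer dangles.
 *
 * The fix: deep-copy the loc into frame->local, whose lifetime matches
 * the frame, and let the syncop use the address of the copy.
 */
int afr_prepare_syncop(call_frame_t *frame, loc_t *loc)
{
    afr_local_t *local = calloc(1, sizeof(*local));
    if (!local)
        return -1;
    if (loc_copy(&local->loc, loc) != 0) {   /* copy survives stub teardown */
        free(local);
        return -1;
    }
    frame->local = local;   /* syncop now dereferences &local->loc */
    return 0;
}

/* On frame teardown, release the copied loc together with the local. */
void afr_frame_destroy(call_frame_t *frame)
{
    afr_local_t *local = frame->local;
    if (local) {
        loc_wipe(&local->loc);
        free(local);
        frame->local = NULL;
    }
}

The point of the fix is that the copy in frame->local shares the frame's lifetime, so the synctask can dereference it safely no matter when io-threads destroys the call-stub that owned the original loc.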
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.8.4, please open a new bug report. glusterfs-3.8.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/announce/2016-September/000060.html
[2] https://www.gluster.org/pipermail/gluster-users/