+++ This bug was initially created as a clone of Bug #1420434 +++

In the course of multiplexing development, it was found that the trash
translator would crash if more than one instance was present in a single
brick process. Accordingly, trash was disabled so other work could
continue. The crashes look like this:

#0  0x00007f1740887ee2 in trash_dir_lookup_cbk (frame=0x7f16f017a5d0,
    cookie=0x7f16f017b190, this=0x7f171c002320, op_ret=-1, op_errno=2,
    inode=0x0, buf=0x7f1706ffb300, xdata=0x0, postparent=0x7f1706ffb290)
    at trash.c:680
#1  0x00007f17410b68ee in posix_lookup (frame=0x7f16f017b190,
    this=0x7f171c001020, loc=0x7f1706ffb480, xdata=0x0) at posix.c:257
#2  0x00007f1740888723 in create_or_rename_trash_directory (
    this=0x7f171c002320) at trash.c:750
#3  0x00007f1740897561 in reconfigure (this=0x7f171c002320,
    options=0x7f16f015e080) at trash.c:2286

The problem is that the order of reconfigure vs. notify(CHILD_UP) is
different with multiplexing, and only the notify path was allocating
priv->trash_itable. Moving that allocation to the translator's init seems
to fix the problem, so trash can be re-enabled. A patch will be posted as
soon as I have the bug number.

--- Additional comment from Worker Ant on 2017-02-08 10:57:38 EST ---

REVIEW: https://review.gluster.org/16567 (trash: fix problem with trash
feature under multiplexing) posted (#1) for review on master by Jeff
Darcy (jdarcy)

--- Additional comment from Worker Ant on 2017-02-09 08:46:49 EST ---

COMMIT: https://review.gluster.org/16567 committed in master by
Shyamsundar Ranganathan (srangana)
------
commit 1e4f9c58a1b013f3f27d515d72d1e76e1a53436e
Author: Jeff Darcy <jdarcy>
Date:   Wed Feb 8 10:48:55 2017 -0500

    trash: fix problem with trash feature under multiplexing

    With multiplexing, the trash translator gets a reconfigure call
    before a notify(CHILD_UP). In this case, priv->trash_itable was not
    yet initialized, so the reconfigure would get a SEGV. Moving the
    itable allocation to init seems to fix it, so trash can be
    reenabled.

    Change-Id: I21ac2d7fc66bac1bc4ec70fbc8bae306d73ac565
    BUG: 1420434
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/16567
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Anoop C S <anoopcs>
    Reviewed-by: jiffin tony Thottan <jthottan>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
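To make the ordering issue concrete, here is a minimal, self-contained
sketch of the failure mode and the shape of the fix. This is plain C, not
GlusterFS code: trash_private_t here is a stand-in struct, and
reconfigure/notify_child_up/init_fixed are illustrative stand-ins for the
real translator entry points in trash.c; the actual patch is at
https://review.gluster.org/16567.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        void *trash_itable;   /* stands in for priv->trash_itable */
    } trash_private_t;

    /* reconfigure() uses the itable unconditionally; if it runs
     * before the itable exists, that is the SEGV from the trace. */
    static void reconfigure(trash_private_t *priv) {
        if (!priv->trash_itable) {
            fprintf(stderr, "reconfigure before CHILD_UP: would SEGV\n");
            return;
        }
        printf("reconfigure: itable ready\n");
    }

    /* Old code path: allocation happened only here, on CHILD_UP. */
    static void notify_child_up(trash_private_t *priv) {
        if (!priv->trash_itable)
            priv->trash_itable = malloc(64);
    }

    /* The fix: allocate in init(), which the framework always calls
     * before either reconfigure() or notify(). */
    static void init_fixed(trash_private_t *priv) {
        priv->trash_itable = malloc(64);
    }

    int main(void) {
        /* Multiplexed ordering, old layout: reconfigure arrives
         * before CHILD_UP, so the itable is still NULL. */
        trash_private_t old = { 0 };
        reconfigure(&old);        /* prints the would-SEGV message */
        notify_child_up(&old);

        /* Same event ordering with the fix: init() has already run. */
        trash_private_t fixed = { 0 };
        init_fixed(&fixed);
        reconfigure(&fixed);      /* safe regardless of event order */
        notify_child_up(&fixed);

        free(old.trash_itable);
        free(fixed.trash_itable);
        return 0;
    }

The general point is that init() is the only entry point with a
guaranteed position in the translator lifecycle, so state that any later
callback may touch has to exist by the time init() returns rather than
being allocated lazily from one particular event.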
REVIEW: https://review.gluster.org/16581 (trash: fix problem with trash feature under multiplexing) posted (#1) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16582 (trash: fix problem with trash feature under multiplexing) posted (#1) for review on release-3.10 by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16582 committed in release-3.10 by
Shyamsundar Ranganathan (srangana)
------
commit 5c3f113cd8acc341a62245cea60a2249034091c5
Author: Jeff Darcy <jdarcy>
Date:   Wed Feb 8 10:48:55 2017 -0500

    trash: fix problem with trash feature under multiplexing

    With multiplexing, the trash translator gets a reconfigure call
    before a notify(CHILD_UP). In this case, priv->trash_itable was not
    yet initialized, so the reconfigure would get a SEGV. Moving the
    itable allocation to init seems to fix it, so trash can be
    reenabled.

    Backport of:
    > Change-Id: I21ac2d7fc66bac1bc4ec70fbc8bae306d73ac565
    > BUG: 1420434
    > Reviewed-on: https://review.gluster.org/16567

    Change-Id: I43a6de6ac5070848619c5f905f075e4a4099c1bd
    BUG: 1420808
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/16582
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Anoop C S <anoopcs>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
This bug is being closed because a release that should address the
reported issue has been made available. If the problem is still not fixed
with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1];
packages for several distributions should become available in the near
future. Keep an eye on the Gluster Users mailing list [2] and on the
update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/