+++ This bug was initially created as a clone of Bug #1300152 +++

Description of problem:
Rebalance process crashed during cleanup_and_exit.

Version-Release number of selected component (if applicable):

How reproducible:
Observed during upstream regression tests.

Steps to Reproduce:
1.
2.
3.

Actual results:
Rebalance crashes during cleanup_and_exit. From the core that was generated, it looks like the frame is corrupted.

Expected results:
Rebalance should not crash.

Additional info:
REVIEW: http://review.gluster.org/13317 (glusterfsd: destroy frame after rebalance callback has completed) posted (#1) for review on release-3.7 by Sakshi Bansal
REVIEW: http://review.gluster.org/13317 (glusterfsd: destroy frame after rebalance callback has completed) posted (#2) for review on release-3.7 by Sakshi Bansal
REVIEW: http://review.gluster.org/13317 (glusterfsd: destroy frame after rebalance callback has completed) posted (#3) for review on release-3.7 by Sakshi Bansal
COMMIT: http://review.gluster.org/13317 committed in release-3.7 by Raghavendra G (rgowdapp)
------
commit d516bed538bc09a77f94292de4bb4861da6ace54
Author: Sakshi Bansal <sabansal>
Date:   Wed Jan 20 09:31:00 2016 +0530

    glusterfsd: destroy frame after rebalance callback has completed

    Rebalance destroys the frame immediately after sending a status
    notification, so the frame is already corrupted by the time the
    callback runs. Rebalance crashes when this corrupted frame is
    accessed. To avoid this, the frame must be destroyed only after
    the callback has completed.

    > Backport of http://review.gluster.org/#/c/13262/
    > Change-Id: If383017a61f09275256e51c44a1efa28feace87b
    > BUG: 1300152
    > Signed-off-by: Sakshi <sabansal>

    Change-Id: If383017a61f09275256e51c44a1efa28feace87b
    BUG: 1302962
    Signed-off-by: Sakshi <sabansal>
    Reviewed-on: http://review.gluster.org/13317
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
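For illustration, here is a minimal, self-contained C sketch of the ordering problem the patch addresses. All names here (frame_t, status_cbk, notify_buggy, notify_fixed) are hypothetical stand-ins, not the actual glusterfsd symbols; the point is only the relative ordering of the callback and the frame destruction.

/*
 * Sketch of the use-after-free pattern fixed by this commit.
 * Hypothetical names; not the real glusterfsd code.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct frame {
        int   status;     /* rebalance status the callback reads */
        void *local;      /* per-call state */
} frame_t;

typedef void (*status_cbk_t)(frame_t *frame);

static void
status_cbk(frame_t *frame)
{
        /* The callback dereferences the frame it was handed. */
        printf("rebalance status: %d\n", frame->status);
}

/* BUGGY ordering: the frame is destroyed right after the status
 * notification is sent, so the callback touches freed memory. */
static void
notify_buggy(frame_t *frame, status_cbk_t cbk)
{
        /* ... send status notification ... */
        free(frame);      /* frame destroyed too early        */
        cbk(frame);       /* use-after-free: frame is gone    */
}

/* FIXED ordering (what the patch does conceptually): invoke the
 * callback first, destroy the frame only after it has returned. */
static void
notify_fixed(frame_t *frame, status_cbk_t cbk)
{
        /* ... send status notification ... */
        cbk(frame);       /* callback sees a live frame       */
        free(frame);      /* safe to destroy now              */
}

int
main(void)
{
        frame_t *frame = calloc(1, sizeof(*frame));
        frame->status = 0;
        notify_fixed(frame, status_cbk);
        return 0;
}

With the buggy ordering, the callback dereferences freed memory, which is the kind of frame corruption visible in the core dump; the fixed ordering defers destruction until the callback has completed.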
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user