Description of problem:
When local_subvol_cnt is zero in gf_defrag_process_dir, control jumps to the out section and attempts to free the members of dfmeta, which crashes because dfmeta is NULL.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Rebalance crashes

Expected results:

Additional info:
REVIEW: http://review.gluster.org/10459 (dht: Add a null check before freeing dir_dfmeta and tmp_container) posted (#4) for review on master by Susant Palai (spalai)
REVIEW: http://review.gluster.org/10281 (cluster/dht: change log level of developer logs to DEBUG) posted (#2) for review on master by Vijay Bellur (vbellur)
http://review.gluster.org/#/c/10459/ has been merged in master.
COMMIT: http://review.gluster.org/10281 committed in master by Vijay Bellur (vbellur)

------

commit 8812e4f57f2138c159d99432748cf68240241675
Author: Vijay Bellur <vbellur>
Date:   Fri Apr 17 12:00:48 2015 +0530

    cluster/dht: change log level of developer logs to DEBUG

    A few log messages in dht directory self-heal at log level INFO are
    useful only for developers, and these logs tend to cause excessive
    entries in our log files. Hence moving the log level of such logs
    to DEBUG.

    Change-Id: I8a543f4ddeb5c20b2978a0f7b18d8baccc935a54
    BUG: 1217949
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/10281
    Reviewed-by: N Balachandran <nbalacha>
    Tested-by: Gluster Build System <jenkins.com>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user