Description of problem:
In 'glfs_h_poll_cache_invalidation', the variable 'up_inode_arg' is allocated with plain calloc, which does not set up any mem_acct info for it. But if an error occurs during processing, it is freed with 'GF_FREE', which crashes the process with SIGSEGV while trying to access the mem_acct variables.

Version-Release number of selected component (if applicable):

How reproducible:
always

Steps to Reproduce:
1. Create a 4-node ganesha cluster.
2. Create a volume and enable ganesha on it.
3. Mount the volume using vers=3 or 4 and create nested directories on the mount point.
4. Add bricks to the volume:
   gluster volume add-brick newvolume replica 2
5. Start the rebalance process:
   gluster v rebalance newvolume start force
6. Observe that while rebalance is in progress, the ganesha process on the mounted node gets killed with a segmentation fault.

Actual results:
nfs-ganesha process crashes with SIGSEGV.

Expected results:
nfs-ganesha process shouldn't crash.

Additional info:
This issue was originally reported in bug 1339208.
REVIEW: http://review.gluster.org/14521 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#1) for review on master by soumya k (skoduri)
COMMIT: http://review.gluster.org/14521 committed in master by Niels de Vos (ndevos)
------
commit ac2fa110ea489ca3d1b81e3872731fa1621a6e39
Author: Soumya Koduri <skoduri>
Date:   Tue May 24 17:42:06 2016 +0530

    gfapi/upcall: Use GF_CALLOC while allocating variables

    In 'glfs_h_poll_cache_invalidation', use GF_CALLOC to allocate
    'up_inode_arg' to set memory accounting, which is used/referred to
    when freeing the same variable in case of any errors.

    Change-Id: I365e114fa6d7abb292dacb6fc702128d046df8f8
    BUG: 1339214
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/14521
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-by: jiffin tony Thottan <jthottan>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    CentOS-regression: Gluster Build System <jenkins.com>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report. glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html [2] https://www.gluster.org/pipermail/gluster-users/