REVIEW: http://review.gluster.org/16006 (cluster/ec: Check xdata to avoid memory leak) posted (#2) for review on release-3.9 by Ashish Pandey (aspandey)
REVIEW: http://review.gluster.org/16006 (cluster/ec: Check xdata to avoid memory leak) posted (#3) for review on release-3.9 by Ashish Pandey (aspandey)
COMMIT: http://review.gluster.org/16006 committed in release-3.9 by Pranith Kumar Karampuri (pkarampu)
------
commit 3f63362b6058d13dc51730d7b343fda0384e0091
Author: Ashish Pandey <aspandey>
Date:   Fri Dec 2 13:15:20 2016 +0530

    cluster/ec: Check xdata to avoid memory leak

    Problem:
    ec_writev_start calls ec_make_internal_fop_xdata to set "yes" in xdata
    before ec_readv (an internal fop) is called for the head and the tail.
    The second call to this function overwrites the previously allocated
    dict_t in "xdata", which results in a memory leak.

    Solution:
    In ec_make_internal_fop_xdata, check whether *xdata is NULL before
    allocating, to avoid overwriting *xdata.

    >Change-Id: I49b83923e11aff9b92d002e86424c0c2e1f5f74f
    >BUG: 1400818
    >Signed-off-by: Ashish Pandey <aspandey>
    >Reviewed-on: http://review.gluster.org/16007
    >Reviewed-by: Xavier Hernandez <xhernandez>
    >Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    >Tested-by: Pranith Kumar Karampuri <pkarampu>
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>

    Change-Id: I49b83923e11aff9b92d002e86424c0c2e1f5f74f
    BUG: 1400833
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/16006
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
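The fix described above amounts to a guard at the top of the helper: if a previous invocation (e.g. for the head read) already allocated *xdata, a second invocation (for the tail) must not allocate a new dict_t over it and leak the first one. A minimal C sketch of that pattern, assuming the GlusterFS dict API (dict_new, dict_set_int32, dict_unref); the function name here is a stand-in for ec_make_internal_fop_xdata and the key name is illustrative, not the actual key used by the xlator:

    /* Sketch of the NULL-check pattern from the commit above. */
    #include "dict.h"   /* dict_t, dict_new, dict_set_int32, dict_unref */

    static int32_t
    make_internal_fop_xdata(dict_t **xdata)
    {
        dict_t *dict = NULL;

        /* Guard: a previous call may already have allocated *xdata;
         * overwriting it would leak that dict_t. */
        if (*xdata != NULL)
            return 0;

        dict = dict_new();
        if (dict == NULL)
            goto out;

        /* "is-internal-fop" is an illustrative marker key only. */
        if (dict_set_int32(dict, "is-internal-fop", 1) != 0)
            goto out;

        *xdata = dict;
        return 0;

    out:
        if (dict != NULL)
            dict_unref(dict);
        return -1;
    }

With this guard in place, calling the helper once per head and once per tail fragment leaves a single allocated dict in *xdata, owned and released by the caller.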
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.1, please open a new bug report. glusterfs-3.9.1 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html [2] https://www.gluster.org/pipermail/gluster-users/