+++ This bug was initially created as a clone of Bug #1412489 +++

Description of problem:
In __upcall_inode_ctx_set(), if __inode_ctx_set() fails, we should free the allocated inode_ctx. That is not done currently. Thanks Nithya for the pointer.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2017-01-12 03:53:34 EST ---

REVIEW: http://review.gluster.org/16381 (Upcall: Fix possible memleak when inode_ctx_set fails) posted (#1) for review on master by soumya k (skoduri)

--- Additional comment from Worker Ant on 2017-01-12 10:47:27 EST ---

COMMIT: http://review.gluster.org/16381 committed in master by Jeff Darcy (jdarcy)
------
commit 84271e12efb783bfc83133329b0fd18aba729c84
Author: Soumya Koduri <skoduri>
Date:   Thu Jan 12 14:19:31 2017 +0530

    Upcall: Fix possible memleak when inode_ctx_set fails

    In __upcall_inode_ctx_set(), if inode_ctx_set fails we should
    free the allocated memory for ctx. This patch takes care of
    the same.

    Change-Id: Iafb42787151a579caf6f396c9b414ea48d16e6b4
    BUG: 1412489
    Reported-by: Nithya Balachandran <nbalacha>
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/16381
    Reviewed-by: N Balachandran <nbalacha>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>
REVIEW: http://review.gluster.org/16431 (Upcall: Fix possible memleak when inode_ctx_set fails) posted (#1) for review on release-3.8 by soumya k (skoduri)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.9, please open a new bug report. glusterfs-3.8.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-February/000066.html
[2] https://www.gluster.org/pipermail/gluster-users/