Description of problem: A leak of memory (or of pointers to system resources) exists in svc_readdirp: the leaked resource is never reclaimed or reused, reducing its future availability. In SVC_STACK_UNWIND, svc_local_free() is called, but svc_local_free() only wipes the contents of local; it does not free local itself. Because frame->local is set to NULL before unwinding, local is also not put back into the mempool as part of FRAME_DESTROY.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: http://review.gluster.org/8128 (features/snapview-client: put local back to mempool after unwind) posted (#1) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8128 (features/snapview-client: put local back to mempool after unwind) posted (#2) for review on master by Raghavendra Bhat (raghavendra)
COMMIT: http://review.gluster.org/8128 committed in master by Vijay Bellur (vbellur)
------
commit a6620e3840bad41b84c590116183670cb1819667
Author: Raghavendra Bhat <raghavendra>
Date:   Fri Jun 20 15:54:57 2014 +0530

    features/snapview-client: put local back to mempool after unwind

    Change-Id: I3a709a835b21edf757ee5a1cd04cd9d1c59201dc
    BUG: 1111552
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8128
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
A beta release for GlusterFS 3.6.0 has been announced [1]. Please verify whether the release solves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED. Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report. glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users