I still see a memory leak in the crypt xlator in 3.7.14.

Server:
OS: CentOS 6.3

volume info:

Volume Name: hcrypt5
Type: Distribute
Volume ID: 54ccd58e-1e4a-4d23-9bbe-e9d5ffd6ab89
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: n4:/srv/export/brick5
Options Reconfigured:
features.encryption: on
performance.write-behind: off
performance.open-behind: off
performance.quick-read: off

Client (virtual machine):
OS: CentOS 6.3
Mem:  889164k total
Swap: 8208376k total

Total memory usage immediately after mount, from /proc/PID/smaps: 534MB.

I ran the filebench "Createfiles" workload. After creating 200000 files, total memory usage from /proc/PID/smaps was 2.8GB. While filebench is running, top and System Monitor continuously show increasing memory usage (but not when the crypt xlator is off). Once filebench has created more than 1000000 files, memory is exhausted and the glusterfs process is killed.

I also see that in stack.h, in FRAME_DESTROY, when the xlator (frame->this) is crypt and "mem_put(local)" runs, this error appears:

[2016-08-03 09:34:31.813473] E [mem-pool.c:554:mem_put] (-->/usr/local/lib/glusterfs//xlator/debug/io-stats.so(io_stats_lookup_cbk+0x167) [0x7fffe7bdedf7] -->/usr/local/lib/glusterfs//xlator/mount/fuse.so(fuse_resolve_entry_cbk+0xfa) [0x7ffff034de8a] -->/usr/local/lib/libglusterfs.so.0(mem_put+0x17e) [0x7ffff7d8c35e] ) 0-mem-pool: mem-pool ptr is NULL

This error goes away if FRAME_DESTROY in stack.h uses "GF_FREE(frame->local)" instead of "mem_put(local)" when the xlator (frame->this) is crypt.

Valgrind output is at this link: pastebin.com/zaD3Z3Tg
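For reference, a minimal sketch of the workaround described above. The FRAME_DESTROY body is abbreviated (only the lines relevant to freeing the local are shown), and the strcmp() check on the xlator type string is my assumption about how one would detect the crypt xlator; this only papers over the mismatch between how crypt allocates frame->local and how the generic frame teardown frees it, it is not a proper fix:

    /* Sketch of the workaround in libglusterfs/src/stack.h: free a crypt
     * local with GF_FREE() instead of returning it to a mem-pool it may
     * never have come from. Teardown steps unrelated to the local are
     * elided; the "encryption/crypt" type check is illustrative. */
    #define FRAME_DESTROY(frame)                                         \
            do {                                                         \
                    void     *__local = frame->local;                    \
                    xlator_t *__xl    = frame->this;                     \
                    frame->local = NULL;                                 \
                    /* ... existing list/lock teardown elided ... */     \
                    mem_put (frame);                                     \
                    if (__local) {                                       \
                            if (__xl && __xl->type &&                    \
                                !strcmp (__xl->type, "encryption/crypt"))\
                                    GF_FREE (__local);                   \
                            else                                         \
                                    mem_put (__local);                   \
                    }                                                    \
            } while (0)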
Is there a solution or not?
(In reply to maryam from comment #1)
> Is there a solution or not?

Most of the leaks stem from not cleaning up local after a STACK_UNWIND_STRICT. Would you be able to test a patch if one is provided?
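To illustrate the pattern being described: in the typical GlusterFS callback, anything the local pins (dict, inode, iobref references) should be released around the STACK_UNWIND_STRICT so that frame teardown has nothing left to leak. The following is a hypothetical crypt callback sketch; the crypt_local_t field (local->xattr) is an assumption for illustration, not the actual layout:

    /* Illustrative only: release whatever the local holds before the
     * frame is unwound, so FRAME_DESTROY merely returns the local itself
     * to its pool. Field names on crypt_local_t are hypothetical. */
    int32_t
    crypt_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno, inode_t *inode,
                      struct iatt *buf, dict_t *xdata,
                      struct iatt *postparent)
    {
            crypt_local_t *local = frame->local;

            if (local && local->xattr) {       /* hypothetical field */
                    dict_unref (local->xattr); /* drop pinned reference */
                    local->xattr = NULL;
            }

            STACK_UNWIND_STRICT (lookup, frame, op_ret, op_errno,
                                 inode, buf, xdata, postparent);
            return 0;
    }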
This is still a serious problem in version 3.7.18. Is there a fix?
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: this bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.
(In reply to Vijay Bellur from comment #2)
> (In reply to maryam from comment #1)
> > Is there a solution or not?
>
> Most of the leaks stem from not cleaning up local after a
> STACK_UNWIND_STRICT. Would you be able to test a patch if one is
> provided?

Sorry for commenting on a closed issue, but is there a fix even in the latest release? Vijay, can you provide the patch? Thanks!