There is a memory leak in ioc_readv, observed in glusterfs-3.0.5rc6. Valgrind report:

125,824 (64 direct, 125,760 indirect) bytes in 1 blocks are definitely lost in loss record 140 of 145
==4084==    at 0x4C2414B: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==4084==    by 0x4E674BB: mem_pool_new_fn (mem-pool.c:46)
==4084==    by 0x6CD5FC1: ioc_readv (io-cache.c:947)
==4084==    by 0x6EE5631: qr_readv (quick-read.c:1143)
==4084==    by 0x70F7708: wb_readv_helper (write-behind.c:1995)
==4084==    by 0x4E5C49F: call_resume_wind (call-stub.c:2467)
==4084==    by 0x4E62525: call_resume (call-stub.c:4304)
==4084==    by 0x70F642D: wb_resume_other_requests (write-behind.c:1621)
==4084==    by 0x70F64D7: wb_do_ops (write-behind.c:1649)
==4084==    by 0x70F6B4E: wb_process_queue (write-behind.c:1817)
==4084==    by 0x70F7CCD: wb_readv (write-behind.c:2056)
==4084==    by 0x730BAFE: sp_readv (stat-prefetch.c:2503)
This is not a memory leak as such, since the mem_pool is allocated only once. Nevertheless, I am submitting a patch to destroy the mem_pool in fini.
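To see why valgrind still reports this as "definitely lost" even though the allocation happens only once, here is a standalone, simplified illustration of the pattern: a plain calloc stands in for the calloc inside mem_pool_new_fn, and readv_like() stands in for ioc_readv. This is not the real io-cache code, just the shape of it.

/* Simplified stand-in for the lazy pool allocation in ioc_readv. */
#include <stdlib.h>

static void *table_mem_pool;  /* stands in for table->mem_pool */

static void
readv_like (void)
{
        if (table_mem_pool == NULL) {
                /* Allocated once, on the first read (the calloc in
                 * mem_pool_new_fn shown in the trace). Later calls do
                 * not allocate again, so memory use is bounded, but
                 * valgrind still flags the block at exit because
                 * nothing ever frees it. */
                table_mem_pool = calloc (1, 64);
        }
}

int
main (void)
{
        readv_like ();
        readv_like ();  /* no second allocation */
        return 0;       /* pool never freed: "definitely lost" at exit */
}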
PATCH: http://patches.gluster.com/patch/3429 in master (performance/io-cache: destroy table->mem_pool in fini.)
PATCH: http://patches.gluster.com/patch/3428 in release-3.0 (performance/io-cache: free table->mem_pool in fini.)
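The patches above amount to roughly the following cleanup in the io-cache translator's fini. This is a sketch only: the ioc_table_t field name comes from the patch titles and the trace, but see the patch links for the exact change.

void
fini (xlator_t *this)
{
        ioc_table_t *table = this->private;

        if (table == NULL)
                return;

        /* Free the pool that ioc_readv allocated lazily on the first
         * read, so it is no longer reported as lost at process exit. */
        if (table->mem_pool != NULL) {
                mem_pool_destroy (table->mem_pool);
                table->mem_pool = NULL;
        }
}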
Raghu - please mark a target release.