Bug 763849 (GLUSTER-2117)

Summary: [glusterfs-3.1.1qa7]: memleak in glusterfsd
Product: [Community] GlusterFS
Reporter: Raghavendra Bhat <rabhat>
Component: glusterd
Assignee: shishir gowda <sgowda>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: low
Version: 3.1.0
CC: gluster-bugs, nsathyan, rahulcs, vijay
Hardware: All
OS: Linux
Doc Type: Bug Fix
Attachments:
valgrind log file
glusterfs client valgrind log file

Description Raghavendra Bhat 2010-11-16 15:57:35 UTC
There is a memory leak in glusterd. This is the valgrind summary indicating the leak:

==18038== LEAK SUMMARY:
==18038==    definitely lost: 100,623 bytes in 525 blocks
==18038==    indirectly lost: 0 bytes in 0 blocks
==18038==      possibly lost: 17,456,525 bytes in 1,691 blocks
==18038==    still reachable: 3,372,663 bytes in 127 blocks
==18038==         suppressed: 0 bytes in 0 blocks
==18038== Reachable blocks (those to which a pointer was found) are not shown.
==18038== To see them, rerun with: --leak-check=full --show-reachable=yes
==18038== 
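Valgrind's note above suggests rerunning with `--leak-check=full --show-reachable=yes` to also list the reachable blocks. A sketch of such an invocation (the glusterfsd path, volfile path, and log file name are assumptions, not taken from this report):

```shell
# Hypothetical paths; adjust to the installed glusterfsd binary and volfile.
valgrind --leak-check=full --show-reachable=yes \
         --log-file=glusterfsd-valgrind.log \
         /usr/sbin/glusterfsd -N -f /etc/glusterfs/glusterfsd.vol
```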


The valgrind log file is attached.

Comment 1 Raghavendra Bhat 2010-11-17 01:32:09 UTC
Created attachment 386

Comment 2 Raghavendra Bhat 2010-11-17 04:29:07 UTC
I repeated the tests and these are the new valgrind logs indicating the leak. I cleaned up the mount point after the tests (posix compliance, dbench 100, arequal of /etc) and then unmounted it.

Comment 3 Raghavendra Bhat 2010-11-17 04:29:37 UTC
Created attachment 387

Comment 4 Anand Avati 2010-11-18 10:56:08 UTC
PATCH: http://patches.gluster.com/patch/5739 in master (Remove spurious inode_ref call on parent dir in fuse_create_cbk)

Comment 5 Amar Tumballi 2010-11-19 03:20:55 UTC
*** Bug 2114 has been marked as a duplicate of this bug. ***

Comment 6 shishir gowda 2010-11-19 03:26:56 UTC
Some of the leaks are due to fini not being invoked.
Once that is fixed, we can revisit the remaining leaks.

Comment 7 Raghavendra Bhat 2011-02-21 04:44:32 UTC
Repeated the same tests (posix_compliance, arequal of /etc, dbench 100). This is the valgrind output:

CLIENT:
==6991== LEAK SUMMARY:
==6991==    definitely lost: 180 bytes in 2 blocks
==6991==    indirectly lost: 152 bytes in 1 blocks
==6991==      possibly lost: 20,111,663 bytes in 116 blocks
==6991==    still reachable: 3,366,649 bytes in 118 blocks
==6991==         suppressed: 0 bytes in 0 blocks



SERVER:

==6962== LEAK SUMMARY:
==6962==    definitely lost: 221 bytes in 3 blocks
==6962==    indirectly lost: 0 bytes in 0 blocks
==6962==      possibly lost: 2,857,856 bytes in 53 blocks
==6962==    still reachable: 8,995,178 bytes in 185 blocks
==6962==         suppressed: 0 bytes in 0 blocks