Bug 1065312

Summary: nfs+nlm: memory leak with locks
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Saurabh <saujain>
Component: gluster-nfs
Assignee: Niels de Vos <ndevos>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.1
CC: mzywusko, nlevinki, vagarwal, vbellur
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 17:14:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: valgrind (flags: none)

Description Saurabh 2014-02-14 10:42:21 UTC
Created attachment 863214 [details]
valgrind

Description of problem:
Collected memory-leak information using the valgrind tool: ran the
Gluster NFS server under valgrind and executed the cthon (connectathon)
lock tests on the mount point.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.59rhs-1.el6rhs.x86_64

How reproducible:
Seen on this build.

Steps to Reproduce:
1. Start the NFS process under valgrind, to check for memory leaks
2. Mount the volume
3. Run the cthon lock tests
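
A dry-run sketch of the steps above: the commands are only printed, not executed, since the volume name (`vol0`), server (`server1`), glusterfs binary path, and cthon location (`/opt/cthon04`) are all assumptions for illustration, not taken from this report.

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it,
# because the exact volume, server, and test-suite paths vary per setup.
CMDS=""
run() { CMDS="$CMDS + $*"; echo "+ $*"; }

# 1. Start the Gluster NFS server under valgrind so allocations are
#    tracked (binary path and volfile-id are assumptions).
run valgrind --leak-check=full --log-file=/tmp/nfs-valgrind.log \
    /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs \
    -l /var/log/glusterfs/nfs.log

# 2. Mount the volume over NFSv3 (NLM is the NFSv3 locking protocol).
run mount -t nfs -o vers=3 server1:/vol0 /mnt/nfs

# 3. Run the connectathon lock tests against the mount
#    (cthon path and flags are assumptions; -l selects the lock tests).
run /opt/cthon04/server -l -m /mnt/nfs -p /mnt/nfs/test server1
```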

Actual results:
==3788== LEAK SUMMARY:
==3788==    definitely lost: 517,304 bytes in 15,244 blocks
==3788==    indirectly lost: 3,088 bytes in 45 blocks
==3788==      possibly lost: 12,476 bytes in 72 blocks
==3788==    still reachable: 247,439,092 bytes in 2,261 blocks
==3788==         suppressed: 0 bytes in 0 blocks
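
When comparing runs before and after a fix, the "definitely lost" byte count can be pulled out of a valgrind log with a small awk filter. A generic sketch, not part of the original report; the sample line below is copied from the summary above.

```shell
# Extract the "definitely lost" byte count from a valgrind leak summary.
log='==3788==    definitely lost: 517,304 bytes in 15,244 blocks'

bytes=$(printf '%s\n' "$log" |
    awk -F': ' '/definitely lost/ {
        split($2, a, " ");      # a[1] holds the byte count, e.g. 517,304
        gsub(",", "", a[1]);    # strip thousands separators
        print a[1]
    }')

echo "$bytes"   # 517304
```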


Expected results:
No memory leaks reported by valgrind.

Additional info:

Comment 2 Vivek Agarwal 2015-12-03 17:14:40 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.