Bug 1507361

Summary: [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Prashant Dhange <pdhange>
Component: locks
Assignee: Xavi Hernandez <jahernan>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: high
Docs Contact:
Priority: high
Version: rhgs-3.2
CC: abhishku, amukherj, atumball, avishwan, bkunal, hgowtham, jahernan, nchilaka, pdhange, rabhat, rgowdapp, rhinduja, rhs-bugs, sankarshan, srmukher, ssaha, vbellur
Target Milestone: ---   
Target Release: RHGS 3.4.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.12.2-2
Doc Type: Bug Fix
Doc Text:
Previously, the 'gluster volume clear-locks' command failed to release the associated memory completely. This caused increasingly high memory utilization on the brick processes over time. With this fix, the associated memory is released when the clear-locks command is executed.
Story Points: ---
Clone Of:
Clones: 1526377 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:38:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1495161    
Bug Blocks: 1503135, 1526377    
Attachments:
Description: locks program
Flags: none

Comment 37 Nag Pavan Chilakam 2018-08-09 13:49:28 UTC
Xavi,
Can you help with steps/pointers to validate the fix?

regards,
nag

Comment 38 Raghavendra Bhat 2018-08-09 20:48:52 UTC
Nag Pavan,

Based on previous comments in the bug, I understand that the memory leak is caused by the POSIX locks being acquired by the application (via fcntl calls).

So, one way to test whether the patches have fixed the memory leak would be to take POSIX locks (i.e., fcntl calls) repeatedly and keep checking the memory usage of the bricks.

With the patch, the memory usage should be noticeably lower than without it. I have attached a program called locks.c to this bug; please check whether it helps you verify the fix. The program is an old one, and I do not recall exactly when or why it was written, but it tries to take locks.
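
For reference, here is a minimal sketch of the kind of fcntl() loop such a program might use (a sketch written from memory, not necessarily what the attached locks.c does). Point it at a file on a gluster mount and watch the memory of the glusterfsd brick processes (e.g. the RSS column in top) while it runs:

/*
 * lock_loop.c - hypothetical fcntl() lock exerciser (a sketch; the
 * attached locks.c may work differently). It repeatedly takes and
 * releases a POSIX write lock on successive 1-byte ranges of a file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-gluster-mount>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (long i = 0; i < 100000; i++) {
        struct flock fl = {
            .l_type   = F_WRLCK,   /* exclusive write lock */
            .l_whence = SEEK_SET,
            .l_start  = i,         /* a different byte each iteration */
            .l_len    = 1,
        };
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            perror("fcntl(F_SETLK)");
            break;
        }

        fl.l_type = F_UNLCK;       /* release the lock again */
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            perror("fcntl(F_UNLCK)");
            break;
        }
    }

    close(fd);
    return 0;
}

Something like 'gcc -o lock_loop lock_loop.c' should build it.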

See if it can help you. 

Regards,
Raghavendra

Comment 39 Raghavendra Bhat 2018-08-09 20:49:40 UTC
Created attachment 1474815 [details]
locks program

Comment 40 Nag Pavan Chilakam 2018-08-10 10:04:26 UTC
(In reply to Raghavendra Bhat from comment #38)
> Nag Pavan,
> 
> Based on previous comments in the bug, I understand that the memory leak
> is caused by the POSIX locks being acquired by the application (via
> fcntl calls).
> 
> So, one way to test whether the patches have fixed the memory leak would
> be to take POSIX locks (i.e., fcntl calls) repeatedly and keep checking
> the memory usage of the bricks.
> 
> With the patch, the memory usage should be noticeably lower than without
> it. I have attached a program called locks.c to this bug; please check
> whether it helps you verify the fix. The program is an old one, and I do
> not recall exactly when or why it was written, but it tries to take
> locks.
> 
> See if it can help you.
> 
> Regards,
> Raghavendra

Thanks Raghavendra. I had a locks script that I was running, but the one you shared is better; I will make use of it.

Comment 45 errata-xmlrpc 2018-09-04 06:38:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607