Bug 1507361 - [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
Summary: [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: locks
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Xavi Hernandez
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1495161
Blocks: 1503135 1526377
 
Reported: 2017-10-30 02:49 UTC by Prashant Dhange
Modified: 2021-03-11 16:08 UTC
CC: 17 users

Fixed In Version: glusterfs-3.12.2-2
Doc Type: Bug Fix
Doc Text:
Previously, the 'gluster volume clear-locks' command failed to release the associated memory completely, which caused increasingly high memory utilization on the brick processes over time. With this fix, the associated memory is released when the clear-locks command is executed.
Clone Of:
Cloned to: 1526377
Environment:
Last Closed: 2018-09-04 06:38:02 UTC
Embargoed:


Attachments
locks program (2.54 KB, text/x-csrc), attached 2018-08-09 20:49 UTC by Raghavendra Bhat


Links
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:39:45 UTC)

Comment 37 Nag Pavan Chilakam 2018-08-09 13:49:28 UTC
Xavi,
Can you help with steps/pointers to validate the fix?

regards,
nag

Comment 38 Raghavendra Bhat 2018-08-09 20:48:52 UTC
Nag Pavan,

Based on previous comments in the bug, I understand that the memory leak is caused by POSIX locks being acquired by the application (via fcntl calls).

So, one way to test whether the patches have fixed the memory leak would be to take POSIX locks (i.e. fcntl calls) and keep checking the memory usage of the bricks.

With the patches, memory usage should be noticeably lower than without them. I have attached a program called locks.c to this bug; please check whether it can help you verify the fix. The program is an old one, and I do not recall exactly when or why it was written, but it tries to take locks.

See if it can help you. 
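
[Editorial note, not part of the original comment: one hedged way to sample a brick's memory while the lock test runs. This is an illustrative sketch only; the PID argument and one-minute interval are assumptions, and the brick PID would come from the output of 'gluster volume status'.]

/* rss_watch.c - print the VmRSS line from /proc/<pid>/status once a
 * minute. Illustrative helper, not from this bug's attachments.
 * Stop it with Ctrl-C. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <glusterfsd-pid>\n", argv[0]);
        return 1;
    }
    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
    for (;;) {
        FILE *f = fopen(path, "r");
        if (!f) { perror("fopen"); return 1; }
        char line[256];
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "VmRSS:", 6) == 0) {
                fputs(line, stdout);  /* resident set size of the brick */
                fflush(stdout);
                break;
            }
        }
        fclose(f);
        sleep(60); /* sample once a minute */
    }
}

Compile with 'cc rss_watch.c -o rss_watch' and run one instance per brick PID; a steadily climbing VmRSS during the lock workload would point at the leak.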

Regards,
Raghavendra

Comment 39 Raghavendra Bhat 2018-08-09 20:49:40 UTC
Created attachment 1474815 [details]
locks program
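
[The attachment body is not reproduced in this report. As a rough sketch of what such an fcntl-based lock exerciser could look like (a guess at the shape of the test, not the attached locks.c; the mount path, iteration count, and byte ranges are made up):]

/* Repeatedly take and release POSIX byte-range locks via fcntl() on a
 * file inside a Gluster mount, so brick memory can be watched over time.
 * Illustrative only; not the attached program. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/mnt/glustervol/lockfile";
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (long i = 0; i < 100000; i++) {
        struct flock fl = {0};
        fl.l_type   = F_WRLCK;   /* exclusive write lock */
        fl.l_whence = SEEK_SET;
        fl.l_start  = i % 1000;  /* vary the locked byte */
        fl.l_len    = 1;
        if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("F_SETLKW"); return 1; }

        fl.l_type = F_UNLCK;     /* release the lock */
        if (fcntl(fd, F_SETLK, &fl) < 0) { perror("F_UNLCK"); return 1; }
    }
    close(fd);
    return 0;
}

Running a few instances of something like this against files on the mounted volume, before and after the fix, should make any difference in brick memory usage visible.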

Comment 40 Nag Pavan Chilakam 2018-08-10 10:04:26 UTC
(In reply to Raghavendra Bhat from comment #38)

Thanks Raghavendra. I had a locks script that I was running, but the one you shared is better; I will make use of it.

Comment 45 errata-xmlrpc 2018-09-04 06:38:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

