Bug 1507361 - [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: locks
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Xavi Hernandez
QA Contact: nchilaka
Depends On: 1495161
Blocks: 1503135 1526377
Reported: 2017-10-29 22:49 EDT by Prashant Dhange
Modified: 2018-09-17 05:03 EDT
CC: 17 users

See Also:
Fixed In Version: glusterfs-3.12.2-2
Doc Type: Bug Fix
Doc Text:
Previously, the 'gluster volume clear-locks' command failed to release the associated memory completely. This caused increasingly high memory utilization on the brick processes over time. With this fix, the associated memory is released when the clear-locks command is executed. (An example invocation is shown after the bug fields below.)
Story Points: ---
Clone Of:
Cloned to: 1526377
Environment:
Last Closed: 2018-09-04 02:38:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
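
The Doc Text above refers to the 'gluster volume clear-locks' command. An example invocation is given here with a placeholder volume name, path, and lock range; the exact kind and range arguments depend on the locks actually held (see the gluster CLI documentation):

gluster volume clear-locks myvol /dir/file1 kind granted posix 0,0-0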


Attachments
locks program (2.54 KB, text/x-csrc)
2018-08-09 16:49 EDT, Raghavendra Bhat


External Trackers:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 02:39 EDT)

Comment 37 nchilaka 2018-08-09 09:49:28 EDT
Xavi,
Can you help with steps/pointers to validate the fix?

regards,
nag
Comment 38 Raghavendra Bhat 2018-08-09 16:48:52 EDT
Nag Pavan,

Based on previous comments in this bug, I understand that the memory leak is caused by POSIX locks acquired by the application (via fcntl calls).

So, one way to test whether the patches have fixed the memory leak would be to take POSIX locks (i.e. make fcntl calls) and keep checking the memory usage of the bricks.

With the patch, the memory usage should not grow as much as without the patches. I have attached a program called locks.c to this bug. Please check whether it helps you verify this bug. The program is an old one; I do not recall exactly when and why it was used, but it tries to take locks.
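
Roughly, that kind of test looks like the sketch below. This is only an illustration, not the attached program; the mount path, file names, and loop count are placeholders:

/* Illustration only: take and release POSIX (fcntl) locks on files under a
 * glusterfs mount so that brick memory usage can be observed meanwhile.
 * The default mount path and the loop count are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *dir = (argc > 1) ? argv[1] : "/mnt/glusterfs";
    char path[4096];
    int i;

    for (i = 0; i < 1000; i++) {
        struct flock fl;
        int fd;

        snprintf(path, sizeof(path), "%s/lockfile-%d", dir, i);
        fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;    /* exclusive write lock */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;           /* lock the whole file */

        if (fcntl(fd, F_SETLK, &fl) < 0)
            perror("fcntl(F_SETLK)");

        fl.l_type = F_UNLCK;    /* release the lock again */
        if (fcntl(fd, F_SETLK, &fl) < 0)
            perror("fcntl(F_UNLCK)");

        close(fd);
    }
    return 0;
}

While this runs, the resident memory of the glusterfsd brick processes can be watched, for example with top or with 'gluster volume statedump <VOLNAME>'.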

See if it can help you. 

Regards,
Raghavendra
Comment 39 Raghavendra Bhat 2018-08-09 16:49 EDT
Created attachment 1474815 [details]
locks program
Comment 40 nchilaka 2018-08-10 06:04:26 EDT
(In reply to Raghavendra Bhat from comment #38)
> Nag Pavan,
> 
> Based on previous comments in this bug, I understand that the memory leak
> is caused by POSIX locks acquired by the application (via fcntl calls).
> 
> So, one way to test whether the patches have fixed the memory leak would be
> to take POSIX locks (i.e. make fcntl calls) and keep checking the memory
> usage of the bricks.
> 
> With the patch, the memory usage should not grow as much as without the
> patches. I have attached a program called locks.c to this bug. Please check
> whether it helps you verify this bug. The program is an old one; I do not
> recall exactly when and why it was used, but it tries to take locks.
> 
> See if it can help you. 
> 
> Regards,
> Raghavendra

Thanks Raghavendra. I had a locks script that I was running, but the one you shared is better; I will make use of it.
Comment 45 errata-xmlrpc 2018-09-04 02:38:02 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
