Bug 1749168 - [OCS 3.11.3] possible memory leak as fuse mount process consuming too much memory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Csaba Henk
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Duplicates: 1786327
Depends On: 1737674
Blocks:
 
Reported: 2019-09-05 05:04 UTC by Atin Mukherjee
Modified: 2023-03-24 15:23 UTC
CC: 18 users

Fixed In Version: glusterfs-3.12.2-47.5
Doc Type: Bug Fix
Doc Text:
Previously, dynamically allocated memory was not freed correctly, which led to increased memory consumption and out-of-memory conditions on Gluster clients. Memory is now freed correctly, so memory overruns no longer occur.
Clone Of: 1737674
Environment:
Last Closed: 2019-09-19 09:12:34 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2812 0 None None None 2019-09-19 09:12:40 UTC

Comment 2 Atin Mukherjee 2019-09-05 05:08:26 UTC
Upstream patch : https://review.gluster.org/23285

Comment 6 Rinku 2019-09-12 05:26:55 UTC
This bug is fixed in: glusterfs-3.12.2-47.5

Comment 10 Bala Konda Reddy M 2019-09-16 15:13:15 UTC
Build : glusterfs-3.12.2-47.5

The fix frees memory when an interrupt occurs, so verification used the script mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1728047#c0:
1. Created a 1x3 replica volume, started it, and mounted it.
2. Compiled open_and_sleep.c from the "rhs-glusterfs/tests/features" downstream source.
3. Ran: i=1; while :; do echo -en "\r$i  "; (./open_and_sleep a1 | { sleep 0.1; xargs -n1 kill -INT; }); i=$(($i+1)); done
4. Took statedumps every minute along with top output of the glusterfs process.
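Leak growth was tracked via the "gf_fuse_mt_iov_base" counters in the client statedumps. A minimal sketch of extracting those counters from a dump file (the helper name is illustrative; the section/field layout assumed here is the usual glusterfs statedump memusage format, with dumps written under /var/run/gluster when the client receives SIGUSR1):

```shell
# check_iov_base: print size/num_allocs for the gf_fuse_mt_iov_base
# allocation type from a glusterfs statedump file ($1).
check_iov_base() {
    awk '/gf_fuse_mt_iov_base memusage/ { in_section = 1; next }
         in_section && /^size=/       { print; next }
         in_section && /^num_allocs=/ { print; in_section = 0 }' "$1"
}
```

Comparing this output across consecutive dumps should show size and num_allocs staying flat when the fix is in place.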

############### 2019-09-12 20:58:01.491998  #######################
top - 20:58:01 up 1 day,  3:59,  3 users,  load average: 0.00, 0.01, 0.05
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16266308 total, 14989736 free,   288680 used,   987892 buff/cache
KiB Swap:  5242876 total,  5242876 free,        0 used. 15672048 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
20506 root      20   0  690308  14784   3912 S   0.0  0.1   0:30.05 glusterfs

############### 2019-09-13 11:15:07.661709  #######################
top - 11:15:07 up 1 day, 18:17,  1 user,  load average: 0.33, 0.14, 0.08
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.9 us,  9.8 sy,  0.0 ni, 85.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16266308 total, 14946580 free,   285168 used,  1034560 buff/cache
KiB Swap:  5242876 total,  5242876 free,        0 used. 15630936 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
20506 root      20   0  755844  14816   3912 S   0.0  0.1   5:28.54 glusterfs
####################


Over 469,110 iterations (a 15-hour run), only a slight increase of 32 kB was seen.
No increase in size or num_allocs was seen for "gf_fuse_mt_iov_base"; the fix is working as expected.
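The 32 kB figure corresponds to the RES difference between the first and last top snapshots above (14784 KiB vs 14816 KiB). A small sketch of that comparison, assuming one glusterfs process line per saved `top -b -n 1 -p <pid>` snapshot (the helper name is illustrative):

```shell
# res_delta: RES (KiB) difference between two saved top snapshots;
# RES is field 6 of the process line, as in the output above.
res_delta() {
    first=$(awk '$NF == "glusterfs" { print $6 }' "$1")
    last=$(awk '$NF == "glusterfs" { print $6 }' "$2")
    echo $((last - first))
}
```

For the two snapshots above this prints 32, matching the observed growth.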

Scenario 2:
1. Created a 1x3 volume, started it, and mounted it.
2. Used the compiled binary, run from 4 screens.
3. Ran the script from 4 screens for 70,000 iterations.
4. Took statedumps every 10 minutes along with top output of the glusterfs process.

No memory increase was seen in the glusterfs process.
No increase in size or num_allocs was seen for "gf_fuse_mt_iov_base"; the fix is working as expected.

Moving this bug to verified.

Comment 16 errata-xmlrpc 2019-09-19 09:12:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2812

Comment 17 nravinas 2020-01-31 13:21:04 UTC
*** Bug 1786327 has been marked as a duplicate of this bug. ***

