Bug 1011313 - Memory leak in gluster samba vfs / libgfapi
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Priority: high
Severity: high
Target Release: RHGS 2.1.2
Assigned To: Poornima G
Keywords: ZStream
Blocks: 1018176
Reported: 2013-09-24 00:48 EDT by Poornima G
Modified: 2015-05-13 12:28 EDT
CC: 8 users

Doc Type: Bug Fix
Doc Text:
Previously, any application that used gfapi and performed excessive I/O operations could encounter an out-of-memory condition because of a memory leak. With this fix, the memory leak no longer occurs.
Clones: 1018176
Last Closed: 2014-02-25 02:39:30 EST
Type: Bug

Attachments
Reproducer program (5.01 KB, text/x-c++src)
2013-09-24 00:48 EDT, Poornima G

Description Poornima G 2013-09-24 00:48:16 EDT
Created attachment 802042 [details]
Reproducer program

Description of problem:

When the attached program (oom_test.c) is run for a long time, the SMBD process gets OOM-killed.

Output of the top command for the smbd processes:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
22658 root      20   0 2414m 1.7g 1980 D  0.0 84.6   8:39.59 smbd             
22563 root      20   0  151m 1272  488 D  0.0  0.1   0:00.07 smbd             
22562 root      20   0  151m 1608  828 S  0.0  0.1   0:00.02 smbd  

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Compile the test program: gcc -pthread -o oom_test oom_test.c
2. Do a CIFS mount of the gluster volume on /mnt/vfs (this path is hard-coded in the test program), then run:
   ./oom_test <file size in GB>
   Initially, try giving a file size larger than the volume size; this increases the chances of hitting the memory leak.
3. Run multiple instances of the test program, multiple times. With this, the memory used by SMBD increases, approaches its limit, and the process gets OOM-killed.

Actual results:
The SMBD process's memory consumption keeps growing until the process gets OOM-killed.

Expected results:
The SMBD process should not get OOM-killed.

Additional info:
Comment 2 Christopher R. Hertel 2013-10-02 15:03:53 EDT
Is it the master SMBD process that is leaking memory, or is it a child process? Like most daemons, SMBD runs an initial instance that listens for connections and then spawns child processes to handle those connections. It appears that it is the main process that is leaking memory, but the main process doesn't actually do file I/O, so it doesn't pass through the VFS layer.
Comment 3 Poornima G 2013-10-02 23:33:13 EDT
It's the child process that is leaking the memory. It looks to be in the iobuf_pool: a new iobuf_arena is allocated but not freed, and this looks to be the cause of the leak.
Comment 6 surabhi 2013-11-28 04:50:13 EST
Executed oom_test with multiple instances on an SMB mount; memory leaks were not observed, and even when it was run multiple times with multiple instances, memory usage did not reach a level that would cause the process to be OOM-killed.

Verified in version: 
[root@dhcp159-76 ~]# rpm -qa | grep glusterfs
Output of the top command is as follows:

 8769 root      20   0  503m  76m 2996 S  0.0  1.0   0:02.42 smbd                  
 9411 root      20   0  508m  76m 3052 S  0.0  1.0   1:35.89 smbd                  
 9449 root      20   0  508m  73m 2720 S  0.0  0.9   0:00.69 smbd                  
 9454 root      20   0  508m  73m 2608 S  0.0  0.9   0:00.73 smbd                  
 9551 root      20   0  227m 3032 1692 S  0.0  0.0   0:00.16 smbd                  
 9587 root      20   0  227m 2564 1048 S  0.0  0.0   0:11.65 smbd
Comment 7 Pavithra 2014-01-03 05:34:45 EST
Can you please verify the doc text for technical accuracy?
Comment 8 Poornima G 2014-01-17 05:16:50 EST
A comma after "performed I/O operations" would be better; other than that, the doc text looks fine.
Comment 10 errata-xmlrpc 2014-02-25 02:39:30 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

