Bug 1535281 - possible memleak in glusterfsd process with brick multiplexing on
Summary: possible memleak in glusterfsd process with brick multiplexing on
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Mohit Agrawal
QA Contact: Bala Konda Reddy M
URL:
Whiteboard: brick-multiplexing
Duplicates: 1619369
Depends On: 1544090 1549501
Blocks: 1503137 1535784 1549473
 
Reported: 2018-01-17 03:20 UTC by krishnaram Karthick
Modified: 2021-12-10 15:33 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.12.2-8
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1535784 1544090 1549473
Environment:
Last Closed: 2018-09-04 06:40:51 UTC
Embargoed:




Links
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:42:22 UTC)

Description krishnaram Karthick 2018-01-17 03:20:21 UTC
Description of problem:
With brick multiplexing on, when volume creation and deletion were run continuously for ~12 hours, the glusterfsd process on each of the three nodes consumed close to 14 GB of memory even though only a single volume remained in the system. This is quite high.

Please note that the heketidb volume is never deleted, so the same brick process remains running throughout the test.

Version-Release number of selected component (if applicable):
sh-4.2# rpm -qa  | grep 'gluster'
glusterfs-libs-3.8.4-54.el7rhgs.x86_64
glusterfs-3.8.4-54.el7rhgs.x86_64
glusterfs-api-3.8.4-54.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.el7rhgs.x86_64
glusterfs-server-3.8.4-54.el7rhgs.x86_64
gluster-block-0.2.1-14.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. On a CNS setup, run the following script for 12 hours:

while true; do
    # create five 1 GB volumes
    for i in {1..5}; do
        heketi-cli volume create --size=1
    done
    # collect the volume IDs (strip the "Id:" prefix from the list output)
    heketi-cli volume list | awk '{print $1}' | cut -c 4- >> vollist
    # delete each volume, pausing briefly between deletions
    while read i; do
        heketi-cli volume delete "$i"
        sleep 2
    done < vollist
    rm vollist
done
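To track the leak while the loop runs, the resident memory of the glusterfsd processes can be sampled periodically. A minimal sketch, assuming standard procps tools; the sampling interval and log file name below are arbitrary choices, not part of the original report:

# append a timestamped RSS/VSZ snapshot of every glusterfsd process once a minute
while true; do
    date >> glusterfsd-rss.log
    ps -C glusterfsd -o pid,rss,vsz,cmd >> glusterfsd-rss.log
    sleep 60
done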

Actual results:
The glusterfsd process consumes ~14 GB with a single volume.

Expected results:
Typically, glusterfsd would consume less than 1 GB for a volume.

Additional info:

Comment 12 Bala Konda Reddy M 2018-05-08 10:04:35 UTC
Build: 3.12.2-8

On a three-node, brick-mux-enabled setup, created a base 2x3 volume. In a loop, created two volumes, started, stopped, and deleted them, and repeated this for 3500 iterations.
At the end, glusterfsd memory had increased to 2.2 GB, whereas it previously grew to 14 GB, so the memory leak is much reduced.
As discussed with Mohit, this is acceptable. Here is the output of the memory consumption; a rough sketch of an equivalent loop is included at the end of this comment.

############### Iteration 1 ###############
              total        used        free      shared  buff/cache   available
Mem:           7.6G        246M        7.2G        8.8M        207M        7.1G
Swap:          2.0G          0B        2.0G
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1614 root      20   0  606860   9040   4112 S   0.0  0.1   0:00.44 glusterd
 2325 root      20   0 1869420  21684   3684 S   0.0  0.3   0:00.09 glusterfsd

############### Iteration 3613 ###############
              total        used        free      shared  buff/cache   available
Mem:           7.6G        2.5G        4.1G         88M        1.0G        4.7G
Swap:          2.0G          0B        2.0G
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1614 root      20   0  615440  43916   4404 S   0.0  0.5  11:38.95 glusterd
 2325 root      20   0   86.9g   2.2g   4256 S   0.0 28.8  11:38.02 glusterfsd
  

Hence marking it as verified.
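For reference, a rough sketch of a comparable create/start/stop/delete loop using the plain gluster CLI. The volume names, replica layout, server names, and brick paths below are placeholders, not the exact commands used for this verification:

# repeatedly create, start, stop, and delete two replica-3 volumes
# server1..server3 and /bricks/... are hypothetical; adjust for the actual setup
for n in $(seq 1 3500); do
    for v in testvol1 testvol2; do
        gluster volume create $v replica 3 server{1..3}:/bricks/$v/brick force
        gluster volume start $v
        gluster --mode=script volume stop $v
        gluster --mode=script volume delete $v
    done
done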

Comment 13 Atin Mukherjee 2018-08-22 13:58:54 UTC
*** Bug 1619369 has been marked as a duplicate of this bug. ***

Comment 15 errata-xmlrpc 2018-09-04 06:40:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

