Bug 1535281 - possible memleak in glusterfsd process with brick multiplexing on
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Release: RHGS 3.4.0
Assigned To: Mohit Agrawal
QA Contact: Bala Konda Reddy M
Whiteboard: brick-multiplexing
Duplicates: 1619369
Depends On: 1544090 1549501
Blocks: 1503137 1535784 1549473
Reported: 2018-01-16 22:20 EST by krishnaram Karthick
Modified: 2018-09-04 02:42 EDT
CC: 11 users

Fixed In Version: glusterfs-3.12.2-8
Doc Type: Bug Fix
Clones: 1535784 1544090 1549473
Last Closed: 2018-09-04 02:40:51 EDT
Type: Bug

External Trackers:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 02:42 EDT)

Description krishnaram Karthick 2018-01-16 22:20:21 EST
Description of problem:
With brick multiplexing on, when volume creation and deletion was run continuously for ~12 hours, the glusterfsd process on each of the three nodes consumed close to 14 GB of memory even though only a single volume remained in the system. This is quite high.

Please note that throughout the test the heketidb volume is not deleted, and hence the same brick process remains alive for the duration of the test.
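
For context, brick multiplexing is a cluster-wide option; on recent GlusterFS releases it can be checked or enabled with the gluster CLI. The commands below are only a reference sketch and were not part of the original report:

# check whether brick multiplexing is currently enabled (cluster-wide option)
gluster volume get all cluster.brick-multiplex
# enable brick multiplexing (the configuration under which the leak was observed)
gluster volume set all cluster.brick-multiplex on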

Version-Release number of selected component (if applicable):
sh-4.2# rpm -qa  | grep 'gluster'
glusterfs-libs-3.8.4-54.el7rhgs.x86_64
glusterfs-3.8.4-54.el7rhgs.x86_64
glusterfs-api-3.8.4-54.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.el7rhgs.x86_64
glusterfs-server-3.8.4-54.el7rhgs.x86_64
gluster-block-0.2.1-14.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. On a CNS setup, run the following script for 12 hours:

while true; do
  for i in {1..5}; do
    heketi-cli volume create --size=1
  done
  heketi-cli volume list | awk '{print $1}' | cut -c 4- >> vollist
  while read i; do
    heketi-cli volume delete $i
    sleep 2
  done < vollist
  rm vollist
done

Actual results:
the glusterfsd process consumes ~14 GB with 1 volume

Expected results:
typically, glusterfsd would consume < 1 GB for a volume
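
For reference, one way to observe the resident memory of the brick process on each node while the test runs (a generic monitoring sketch, not taken from the original report):

# print PID, resident set size (KiB), virtual size and command line of every
# glusterfsd process, largest RSS first; with brick multiplexing on there is
# typically a single brick process per node
ps -C glusterfsd -o pid,rss,vsz,args --sort=-rss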

Additional info:
Comment 12 Bala Konda Reddy M 2018-05-08 06:04:35 EDT
Build : 3.12.2-8

On a three-node setup with brick multiplexing enabled, created a base 2x3 volume. In a loop, created two volumes, started them, stopped them, and deleted them; this was repeated for 3500 iterations (a sketch of such a loop is shown at the end of this comment).
At the end, glusterfsd memory had grown to 2.2 GB, whereas earlier it grew to 14 GB, so the memory leak is greatly reduced.
As discussed with Mohit, this is acceptable. Here is the output of the memory consumption:

###############1 iteration  #############
              total        used        free      shared  buff/cache   available
Mem:           7.6G        246M        7.2G        8.8M        207M        7.1G
Swap:          2.0G          0B        2.0G
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1614 root      20   0  606860   9040   4112 S   0.0  0.1   0:00.44 glusterd
 2325 root      20   0 1869420  21684   3684 S   0.0  0.3   0:00.09 glusterfsd

###############3613 iteration  #############
              total        used        free      shared  buff/cache   available
Mem:           7.6G        2.5G        4.1G         88M        1.0G        4.7G
Swap:          2.0G          0B        2.0G
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1614 root      20   0  615440  43916   4404 S   0.0  0.5  11:38.95 glusterd
 2325 root      20   0   86.9g   2.2g   4256 S   0.0 28.8  11:38.02 glusterfsd
  

Hence marking it as verified.
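
A minimal sketch of the kind of create/start/stop/delete loop described above, assuming a plain three-node trusted storage pool; the volume names, brick paths and replica count are illustrative and not taken from the test environment:

#!/bin/bash
# repeatedly create, start, stop and delete two short-lived volumes while a
# long-lived base volume keeps the multiplexed glusterfsd process alive
for iter in $(seq 1 3500); do
  echo "### iteration $iter"
  for v in testvol1 testvol2; do
    gluster volume create $v replica 3 \
      node1:/bricks/$v/brick node2:/bricks/$v/brick node3:/bricks/$v/brick force
    gluster volume start $v
  done
  for v in testvol1 testvol2; do
    # --mode=script suppresses the interactive confirmation prompts
    gluster --mode=script volume stop $v
    gluster --mode=script volume delete $v
  done
done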
Comment 13 Atin Mukherjee 2018-08-22 09:58:54 EDT
*** Bug 1619369 has been marked as a duplicate of this bug. ***
Comment 15 errata-xmlrpc 2018-09-04 02:40:51 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
