Bug 1529563 - Possible leak in glusterfsd when volume info is run repeatedly.
Summary: Possible leak in glusterfsd when volume info is run repeatedly.
Keywords:
Status: CLOSED DUPLICATE of bug 1529249
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Milind Changire
QA Contact: Ambarish
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-28 17:37 UTC by Ambarish
Modified: 2019-01-09 14:58 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-29 05:36:54 UTC
Embargoed:


Attachments

Description Ambarish 2017-12-28 17:37:32 UTC
Description of problem:
------------------------

Created 33 volumes.

Bricks are multiplexed.

Triggered "gluster v info" in loop.

glusterfsd leaked ~300 MB after a couple of iterations of gluster v info.
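The steps above can be sketched as a small monitoring loop. This is a hypothetical reproduction sketch, not taken from the report: the brick PID (31569) comes from the ps output below, and the iteration count is illustrative.

```shell
#!/bin/sh
# Hypothetical reproduction sketch: run "gluster volume info" in a loop
# and sample the brick process RSS before and after.
BRICK_PID=31569            # PID of the glusterfsd brick process (example)

rss_kb() {
    # Resident set size in kB, as reported by ps
    ps -o rss= -p "$1" | tr -d ' '
}

before=$(rss_kb "$BRICK_PID")

i=0
while [ "$i" -lt 50 ]; do       # iteration count is illustrative
    gluster volume info > /dev/null
    i=$((i + 1))
done

after=$(rss_kb "$BRICK_PID")
# 1024 kB per MB; the report saw growth of roughly 300 MB
echo "RSS grew by $(( (after - before) / 1024 )) MB"
```

Comparing the RSS columns of the two ps samples below gives the same figure: (2024016 - 1731392) kB is roughly 285 MB.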


**BEFORE RUNNING VOL INFO/AFTER CREATES ** :

root     31569  1.9  3.5 40246600 1731392 ?    Ssl  10:55   0:28 /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket --brick-name /bricks1/brickA1 -l /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port 49153 --xlator-option butcher1-server.listen-port=49153
[root@gqas013 ~]# 


**AFTER RUNNING VOL INFO ** :

root     31569  2.3  4.1 40312168 2024016 ?    Ssl  10:55   2:15 /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket --brick-name /bricks1/brickA1 -l /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port 49153 --xlator-option butcher1-server.listen-port=49153
[root@gqas013 ~]# 


Memory consumption by the brick process increased from 1.7G to 2G.

Statedumps will be shared in comments.
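For reference, a brick statedump can be generated either through the CLI or by signalling the brick process directly; both are standard glusterfs mechanisms. The volume name and PID here are taken from this report, and the dump directory is the usual default:

```shell
# Trigger a statedump of all brick processes for a volume via the CLI:
gluster volume statedump butcher1

# Or signal one brick process directly; glusterfsd writes the dump
# to its statedump directory (commonly /var/run/gluster):
kill -USR1 31569
```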


Version-Release number of selected component (if applicable):
--------------------------------------------------------------

glusterfs-server-3.8.4-52.3.el7rhgs.x86_64

How reproducible:
-----------------

1/1


Actual results:
---------------

Leaks during volume info.

Expected results:
-----------------

No leaks during vol info.

Comment 2 Ambarish 2017-12-28 17:44:39 UTC
(In reply to Ambarish from comment #0)
> Description of problem:
> ------------------------
> 
> Created 33 volumes.
> 
> Bricks are multiplexed.
> 
> Triggered "gluster v info" in loop.
> 
> glusterfsd leaked ~300 MB after a couple of iterations of gluster v info.
> 
> 
> **BEFORE RUNNING VOL INFO/AFTER CREATES ** :
> 
> root     31569  1.9  3.5 40246600 1731392 ?    Ssl  10:55   0:28
> /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id
> butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p
> /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-
> brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket
> --brick-name /bricks1/brickA1 -l
> /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option
> *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port
> 49153 --xlator-option butcher1-server.listen-port=49153
> [root@gqas013 ~]# 
> 
> 
> **AFTER RUNNING VOL INFO ** :
> 
> root     31569  2.3  4.1 40312168 2024016 ?    Ssl  10:55   2:15
> /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id
> butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p
> /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-
> brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket
> --brick-name /bricks1/brickA1 -l
> /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option
> *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port
> 49153 --xlator-option butcher1-server.listen-port=49153
> [root@gqas013 ~]# 
> 
> 
> Memory consumption by the brick process increased from 1.7G to 2G.
> 
> Statedumps will be shared in comments.
> 
> 
> Version-Release number of selected component (if applicable):
> --------------------------------------------------------------
> 
> glusterfs-server-3.8.4-52.3.el7rhgs.x86_64
> 
> How reproducible:
> -----------------
> 
> 1/1
> 
> 
> Actual results:
> ---------------
> 
> Leaks during volume info.
> 
> Expected results:
> -----------------
> 
> No leaks during vol info.



Typo.

I meant I created 300 volumes.

All of this is coming as a part of validating https://bugzilla.redhat.com/show_bug.cgi?id=1526363.

Comment 4 Atin Mukherjee 2017-12-28 18:50:58 UTC
FYI, gluster v info is a local glusterd command; it doesn't involve any brick ops and has nothing to do with the brick process. The leak should be, and has to be, unrelated. What I suspect is that this is the general leak that Nag reported on an idle setup. I will point to the bug id tomorrow.

Comment 5 Atin Mukherjee 2017-12-29 04:47:22 UTC
I just ran the same steps in my local setup and didn't observe any memory spike in the brick process. The setup you shared is inaccessible. You'd have to provide me a setup where this is observed; otherwise I will have to close this bug.

Comment 6 Atin Mukherjee 2017-12-29 04:53:16 UTC
(In reply to Atin Mukherjee from comment #4)
> FYI, gluster v info is a local glusterd command; it doesn't involve any
> brick ops and has nothing to do with the brick process. The leak should
> be, and has to be, unrelated. What I suspect is that this is the general
> leak that Nag reported on an idle setup. I will point to the bug id
> tomorrow.

https://bugzilla.redhat.com/show_bug.cgi?id=1529249 is the bug I was referring to.

Comment 7 Ambarish 2017-12-29 05:33:00 UTC
This looks the same as https://bugzilla.redhat.com/show_bug.cgi?id=1529249.

After the vol info loop finished, glusterfsd's memory footprint kept increasing even when I was doing nothing at all.

Can be closed as a DUPE of https://bugzilla.redhat.com/show_bug.cgi?id=1529249.

Comment 8 Atin Mukherjee 2017-12-29 05:36:54 UTC

*** This bug has been marked as a duplicate of bug 1529249 ***

