Description of problem:
------------------------

Created 33 volumes.

Bricks are multiplexed.

Triggered "gluster v info" in a loop.

glusterfsd leaked ~300 MB after a couple of iterations of gluster v info.

**BEFORE RUNNING VOL INFO / AFTER CREATES** :

root 31569 1.9 3.5 40246600 1731392 ? Ssl 10:55 0:28 /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket --brick-name /bricks1/brickA1 -l /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port 49153 --xlator-option butcher1-server.listen-port=49153
[root@gqas013 ~]#

**AFTER RUNNING VOL INFO** :

root 31569 2.3 4.1 40312168 2024016 ? Ssl 10:55 2:15 /usr/sbin/glusterfsd -s gqas013.sbu.lab.eng.bos.redhat.com --volfile-id butcher1.gqas013.sbu.lab.eng.bos.redhat.com.bricks1-brickA1 -p /var/run/gluster/vols/butcher1/gqas013.sbu.lab.eng.bos.redhat.com-bricks1-brickA1.pid -S /var/run/gluster/8a138d4b6e1f56d561d6cef751a49305.socket --brick-name /bricks1/brickA1 -l /var/log/glusterfs/bricks/bricks1-brickA1.log --xlator-option *-posix.glusterd-uuid=723deb80-b98a-489e-9f2f-a887d3878d88 --brick-port 49153 --xlator-option butcher1-server.listen-port=49153
[root@gqas013 ~]#

Memory consumption by the brick process increased from ~1.7 GB to ~2 GB.

Statedumps will be shared in comments.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------

glusterfs-server-3.8.4-52.3.el7rhgs.x86_64

How reproducible:
-----------------

1/1

Actual results:
---------------

Leaks during volume info.

Expected results:
-----------------

No leaks during vol info.
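The reproduction above (loop "gluster v info", watch the multiplexed brick process's RSS grow) can be sketched as a small shell script. This is a minimal sketch, not the exact commands used by the reporter; the iteration count is arbitrary, and it assumes a running glusterfsd with brick multiplexing already enabled. The RSS helper just reads what ps reports.

```shell
#!/bin/sh
# rss_kb PID -> resident set size of the process in kB, as reported by ps.
rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# With brick multiplexing, all bricks share one glusterfsd; grab the
# oldest matching process (hypothetical way to pick it, adjust as needed).
BRICK_PID=$(pgrep -o -x glusterfsd 2>/dev/null)

if [ -n "$BRICK_PID" ]; then
    before=$(rss_kb "$BRICK_PID")
    i=0
    while [ "$i" -lt 100 ]; do          # arbitrary loop count
        gluster v info > /dev/null
        i=$((i + 1))
    done
    after=$(rss_kb "$BRICK_PID")
    echo "brick RSS grew by $(( (after - before) / 1024 )) MB"
else
    echo "no glusterfsd process found; nothing to measure"
fi
```

On a leak-free build the reported growth should stay near 0 MB across repeated runs.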
(In reply to Ambarish from comment #0)
> Created 33 volumes.

Typo. I meant I created 300 volumes.

All of this is coming as part of validating https://bugzilla.redhat.com/show_bug.cgi?id=1526363.
FYI, "gluster v info" is a local glusterd command; it doesn't involve any brick ops and has nothing to do with the brick process, so the leak has to be unrelated. What I suspect is that this is the general leak Nag reported on an idle setup. I will point to the bug id tomorrow.
I just ran the same steps on my local setup and didn't observe any memory spike in the brick process. The setup you shared is inaccessible. You'd have to provide me a setup where this is observed; otherwise I tend to close this bug.
(In reply to Atin Mukherjee from comment #4)
> What I suspect is this is the general leak what Nag reported in an idle
> setup. I will point to the bug id tomorrow.

https://bugzilla.redhat.com/show_bug.cgi?id=1529249 is the bug I was referring to.
This looks the same as https://bugzilla.redhat.com/show_bug.cgi?id=1529249. After the vol info loop finished, glusterfsd's memory footprint kept increasing even when I was doing nothing at all.

Can be closed as a DUPE of https://bugzilla.redhat.com/show_bug.cgi?id=1529249.
*** This bug has been marked as a duplicate of bug 1529249 ***