Bug 761756 (GLUSTER-24) - connection memory leak in io-threads with 2.0.1 tag (migrated from rt #1022)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-24
Product: GlusterFS
Classification: Community
Component: core
Version: 2.0.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Anand Avati
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-06-17 05:18 UTC by Raghavendra G
Modified: 2015-09-01 23:04 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Raghavendra G 2009-06-17 05:18:13 UTC

Comment 1 Shehjar Tikoo 2009-06-18 02:16:32 UTC
Hi Raghu,

Any pointers to what might be causing this leak, apart from the high autoscaling limit?

Thanks

Comment 2 Basavanagowda Kanur 2009-07-06 07:09:48 UTC
Please also copy the bug's history from RT or Savannah when migrating to Bugzilla.

--
Gowda

Comment 3 Shehjar Tikoo 2009-07-06 07:13:36 UTC
Here it is:

Mon May 11 11:08:56 2009 - pixar - Ticket created
I think we're running into some memory leaks with gluster. It appears
that there's a leak with initial connections when using io-threads with
autoscaling turned on. With this server.vol:

volume posix
  type storage/posix
  option directory /tmp/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume io-threads
  type performance/io-threads
  option autoscaling yes
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.io-threads.allow *
  subvolumes io-threads
end-volume


And this client.vol:

volume client
  type protocol/client
  option transport-type tcp
  option remote-host server
  option remote-subvolume io-threads
end-volume


I can consistently grow the RSS of the glusterfsd process by 24KB every
time I run:

umount /mnt/glusterfs
glusterfs -f client.vol /mnt/glusterfs
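
A minimal shell sketch of how that per-cycle growth can be watched; the
/proc sampling loop is an illustration, not part of the original report,
and it assumes glusterfsd is already running with the server.vol above:

# Repeat the mount/unmount cycle and sample the server's RSS each time.
# Assumes a running glusterfsd started from the server.vol above.
for i in $(seq 1 20); do
    umount /mnt/glusterfs
    glusterfs -f client.vol /mnt/glusterfs
    sleep 1   # give the client time to connect
    grep VmRSS /proc/$(pidof glusterfsd)/status
done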

If I remove the io-threads translator, gluster appears to keep constant
memory usage over multiple connects and disconnects. When autoscaling is
turned off, the memory usage occasionally grows by 4KB, but not
consistently; it may be that in that case the leak is tiny, so malloc
only occasionally needs to allocate a new page.


This doesn't account for the 100MB-1GB rss/vsize that we're seeing in
our servers and clients, but I haven't figured out how to reproduce that
yet. If I do, I'll file another bug.


Tue Jun 16 22:16:00 2009 - raghavendra
Hi,

The high memory usage may not actually be a leak. There was a bug report
about io-threads consuming a lot of memory with autoscaling turned on,
and there is a fix in commit 62a920642a54eac6e4b24a3590f17d628202a210
that reduces the thread count to avoid the high memory usage.

The tests I did also seemed to point to high memory usage rather than a
leak. Can you confirm whether you are still facing the problem after the
fix mentioned above? (The latest code can be pulled from git.) A sketch
of bounding the thread count by hand is shown below.

regards,
Raghavendra.
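
For reference, a minimal io-threads volume sketch with the thread count
bounded by hand; the min-threads/max-threads option names are an
assumption for the 2.0-era io-threads translator and should be verified
against the release actually in use:

volume io-threads
  type performance/io-threads
  option autoscaling yes
  # Assumed 2.0-era option names; verify against your release.
  option min-threads 2
  option max-threads 16
  subvolumes locks
end-volume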

Comment 4 Shehjar Tikoo 2009-07-06 07:15:19 UTC
Any particular reason why we should not close this?
A cursory look suggests the problem could have been caused by the high autoscaling limit in the initial 2.0.x days.

-Shehjar

Comment 5 Basavanagowda Kanur 2009-07-06 07:17:10 UTC
The leak is not related to the autoscaling configuration. The leak is somewhere in libglusterfs or protocol/server (I am able to reproduce it on my laptop).

We cannot close this until we fix it.

--
Gowda
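
Since the leak reproduces readily, one generic way to narrow it down
(not a step from the original thread) is to run the server in the
foreground under valgrind, repeat the mount/unmount cycle from the
report, and then stop the server to get a leak summary; the -N
(no-daemon) flag is assumed to be available in this release and should
be checked:

# Run the server in the foreground under valgrind; exercise the
# mount/unmount cycle, then stop the server for the leak report.
valgrind --leak-check=full --show-reachable=yes \
    glusterfsd -f server.vol -N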

