Hi Raghu, any pointers to what might be causing this leak, apart from the high autoscaling limit? Thanks
Please also copy the bug's history from RT or Savannah while migrating to Bugzilla. -- Gowda
Here it is:

Mon May 11 11:08:56 2009 pixar - Ticket created

I think we're running into some memory leaks with gluster. It appears that there's a leak on initial connections when using io-threads with autoscaling turned on. With this server.vol:

    volume posix
      type storage/posix
      option directory /tmp/gluster
    end-volume

    volume locks
      type features/locks
      subvolumes posix
    end-volume

    volume io-threads
      type performance/io-threads
      option autoscaling yes
      subvolumes locks
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.io-threads.allow *
      subvolumes io-threads
    end-volume

And this client.vol:

    volume client
      type protocol/client
      option transport-type tcp
      option remote-host server
      option remote-subvolume io-threads
    end-volume

I can consistently grow the rss of the glusterfsd process by 24KB every time I run:

    umount /mnt/glusterfs
    glusterfs -f client.vol /mnt/glusterfs

If I remove the io-threads translator, gluster appears to keep constant memory usage over multiple connects and disconnects. When autoscaling is turned off, the memory usage occasionally grows by 4KB, but not consistently. It's possible that in that situation the leak is tiny, so malloc only occasionally needs to allocate a page. This doesn't account for the 100MB-1GB rss/vsize that we're seeing on our servers and clients, but I haven't figured out how to reproduce that yet. If I do, I'll file another bug.

Tue Jun 16 22:16:00 2009 raghavendra

Hi,

The high memory usage may not actually be a leak. There was a bug report about io-threads consuming high memory with autoscaling turned on, and there was also a fix in 62a920642a54eac6e4b24a3590f17d628202a210 which reduces the thread count to avoid high memory usage. The tests I did also pointed to high memory usage, but not a leak. Can you confirm whether you are still facing the problem after the above fix? (The latest code can be pulled from git.)

regards,
Raghavendra.
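For anyone trying to reproduce this, a minimal measurement loop along the lines of the report might look like the sketch below (plain shell, assuming server and client run on one test box; the volfile names and mount point are taken from the report, while sampling the server's RSS via ps is my assumption about how the ~24KB-per-cycle growth was observed):

    #!/bin/sh
    # Start the server from the server.vol above; glusterfsd daemonizes by default.
    glusterfsd -f server.vol
    sleep 1

    # Remount repeatedly and record the server's RSS after each cycle; with
    # io-threads + autoscaling the RSS was reported to grow ~24KB per cycle.
    for i in $(seq 1 20); do
        glusterfs -f client.vol /mnt/glusterfs
        sleep 1
        umount /mnt/glusterfs
        echo "cycle $i: glusterfsd rss (KB): $(ps -o rss= -C glusterfsd)"
    done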
Any particular reason why we should not close this? A cursory look suggests the problem could have been due to the high autoscaling limit in the initial 2.0.x days. -Shehjar
The leak is not related to the autoscaling configuration. The leak is somewhere in libglusterfs or protocol/server (I am able to reproduce it on my laptop). We cannot close this until we fix it. -- Gowda
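One possible way to narrow down whether the allocations come from libglusterfs or protocol/server is to run the server under valgrind in the foreground while cycling the client mount as above, then inspect the leak report for frames in those components. This is only a sketch and assumes the glusterfsd build has debug symbols and supports a no-daemon flag (-N here):

    # Run the server in the foreground under valgrind so it can track
    # allocations until the process exits.
    valgrind --leak-check=full --show-reachable=yes \
        glusterfsd -N -f server.vol

    # In another shell, mount and unmount a few times, then stop the server
    # and look for leak stacks pointing into libglusterfs or protocol/server.
    glusterfs -f client.vol /mnt/glusterfs
    umount /mnt/glusterfs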