From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040510 Firefox/0.8

Description of problem:
In long-running programs the heap often grows and gets "stuck" in an extremely large state, even though whole pages and ranges of pages no longer need to be allocated. It is possible to have a "virtual" heap that is scattered across VM space while still using it with exactly the same efficiency (in terms of memory used versus memory wasted) as one gargantuan contiguous heap. This would be accomplished by constructing the heap from mmap()ed segments, and by mmap()ing adjacent physical pages to multiple virtual pages when it is not possible to map a virtual page in between two existing virtual pages. Any empty area touching or crossing a page boundary could then be used for any allocation, and unused but still-allocated areas could be freed back to the system.

Details can be seen at:
http://sources.redhat.com/bugzilla/show_bug.cgi?id=167
and also on my blog at:
http://bluefoxicy.blogspot.com/2004_05_01_bluefoxicy_archive.html#108507154064758877
or just http://bluefoxicy.blogspot.com/

The kernel developers (particularly William Lee Irwin) inform me that this is feasible, and that performance will degrade with excessive numbers of mmap()ed segments, but not by a significant degree. The conventional heap can still be used as a fallback if the resource limit on mmap() segments is exceeded.

I suggest you read the blog, read sources bug 167, consider it, and decide what to do. My hope is that this will prove to be a more effective, more efficient memory management scheme, and that it will lead the way to universal huge pages for applications, which would reduce TLB misses and MMU page-table walks, increasing overall system performance on top of the extra free memory.

Version-Release number of selected component (if applicable):

How reproducible:
Sometimes

Steps to Reproduce:
1. Load up gnome-terminal, firefox, mozilla, gaim, a peer-to-peer client of your choice, nautilus, kde, konqueror, gnome, or some other large, long-running application of your choice.
2. Watch it start at 20-40MiB of memory usage.
3. After several hours of usage without closing the program, notice that it has grown to around 80-100MiB of memory usage.
4. Start closing tabs, loading blank pages, closing windows, and so on.
5. Watch it not free memory back to the system.

Actual Results:
Applications hold on to memory in many situations, depending on what happens during their run.

Expected Results:
Freeing memory should have freed memory; that is, it should be returned to the system, not trapped in the middle of a big, contiguous heap.

Additional info:
I've noticed that doing certain things can spike RAM usage. This is an inherent flaw in the heap's design, not in any application or in glibc. In particular, deleting a list of 6000 files in nautilus can grow its heap from 30MiB to >300MiB, and closing that window afterwards leaves you with a 300MiB nautilus that is only using 30MiB to draw your desktop icons.

In light of the potential memory gains, this may be more than an enhancement; it may be an alternative that works around a design flaw in the heap that is out of the control of the glibc developers. Increase the severity and/or the priority as you see fit.
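To make the idea concrete, here is a minimal sketch, not the proposed glibc implementation: a chunk obtained with mmap() can have whole unused pages handed back to the kernel with madvise(MADV_DONTNEED), or be unmapped entirely, while memory freed in the middle of a brk()-grown heap stays resident until everything above it is also released. The chunk_alloc/chunk_trim/chunk_free names below are hypothetical and exist only for this illustration.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate an independently mapped chunk instead of extending one big heap. */
static void *chunk_alloc(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

/* Release whole unused pages inside a chunk back to the system.  In a
 * single contiguous heap this memory would stay resident until everything
 * above it was also freed. */
static int chunk_trim(void *base, size_t offset, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t start = (offset + page - 1) & ~(size_t)(page - 1); /* round up   */
    size_t end   = (offset + len) & ~(size_t)(page - 1);      /* round down */
    if (end <= start)
        return 0;                      /* nothing page-aligned to give back */
    return madvise((char *)base + start, end - start, MADV_DONTNEED);
}

static void chunk_free(void *base, size_t len)
{
    munmap(base, len);                 /* the whole mapping disappears */
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;     /* 64 MiB working set */
    char *buf = chunk_alloc(len);
    if (!buf)
        return 1;

    memset(buf, 0xAA, len);            /* touch every page: RSS grows */

    /* Pretend the application no longer needs the middle 62 MiB. */
    if (chunk_trim(buf, 1 << 20, len - (2 << 20)) == 0)
        puts("middle pages returned to the kernel (watch RSS drop)");

    chunk_free(buf, len);
    return 0;
}

Running this while watching RSS in top or /proc/<pid>/statm shows the resident size drop after the madvise() call; that drop is exactly what the nautilus example above cannot get once the freed memory sits in the middle of one contiguous heap.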
See bug 167 in the glibc bugzilla. *You* have to do some work if you want the code to change, or you need to convince somebody else to do it. Convincing me to do the work is almost impossible, so you had better find somebody interested in your ideas before opening/reopening bugs.