Bug 115154 - Possible memory leak with kernel 2.4.21-4 when using Resin-2.1.6
Summary: Possible memory leak with kernel 2.4.21-4 when using Resin-2.1.6
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Version: 3.0
Hardware: i686
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Dave Anderson
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2004-02-07 06:50 UTC by gian
Modified: 2008-08-02 23:40 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-10-19 19:30:25 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description gian 2004-02-07 06:50:32 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031007

Description of problem:
When running Resin-2.1.6, the system's swap space utilization grows
steadily. This is on kernel-2.4.21-4 and could be a kernel memory
leak. The problem does not occur when running the same software on
Solaris systems.

Version-Release number of selected component (if applicable):
kernel-2.4.21-4

How reproducible:
Always

Steps to Reproduce:
1. Run the software.
2. Check swap space usage with the "top" command.
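The monitoring loop above can be sketched without interactive "top" by reading /proc/meminfo directly. This is a hedged sketch, not the reporter's actual procedure; the helper name and 60-second interval are illustrative.

```shell
#!/bin/sh
# Hypothetical helper: report swap in use (kB) straight from /proc/meminfo,
# which is easier to log from a script than parsing "top" output.
swap_used_kb() {
    awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' /proc/meminfo
}

# While the application runs, log a timestamped sample every minute and
# watch whether the number only ever climbs:
#   while true; do echo "$(date +%s) $(swap_used_kb)"; sleep 60; done >> swap.log
swap_used_kb
```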
    

Actual Results:  Swap space utilization keeps increasing

Expected Results:  Swap space utilization should not keep rising

Additional info:

Comment 2 Dave Anderson 2004-02-09 20:41:07 UTC
Right -- there's absolutely nothing to work with here...

Please re-run the application, and while it is running and presumably
taking up swap space, give us the output of several Alt-Sysrq-m
entries.

Comment 3 Rick Fisk 2006-09-30 04:54:32 UTC
I can reproduce this on 2.4.21-47.ELsmp as follows:

I run Tomcat 5.5.16 on the Java 1.5.0_6-b5 VM with a couple of deployed webapps.

The box is a 2-CPU, 32-bit machine with 2 GB of memory.

What made me start searching for clues was that garbage collection in the VM 
consistently took longer each time a full GC was executed in the Java VM. My 
first suspicion was the Java VM itself and its GC routines. When I set up the 
VM to log GC information, I saw very odd behavior.

When in parallel GC mode, full GCs would take longer on each subsequent run: 
the first under 10 seconds, and the last, when Tomcat finally had to be shut 
down, upwards of 20 minutes! While a full GC was running, the Java VM process 
would be in "uninterruptible sleep" mode, making it impossible to get a 
thread dump.

I modified the settings to switch to ConcurrentMarkSweep, and the first full 
GC completely froze the VM, though other tools did not seem to suffer.
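The GC logging and collector switch described above can be expressed as standard HotSpot 1.5-era command-line options. A hedged sketch only: the reporter does not list the exact flags used, and the heap size, log path, and application jar here are illustrative.

```shell
# Hedged sketch: log GC activity and select the concurrent mark-sweep
# collector on a Java 1.5 HotSpot VM. Paths and the jar name are
# placeholders, not the reporter's configuration.
java -Xmx1024m \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -Xloggc:/var/log/tomcat/gc.log \
     -XX:+UseConcMarkSweepGC \
     -jar myapp.jar
```

With -Xloggc in place, each full-GC pause time appears in the log, which is how a trend from seconds to minutes becomes visible without attaching to the process.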

I noticed that swap and memory utilization just continue to climb until all 
physical memory and swap are completely allocated, despite the fact that I 
have allocated only 1 GB to the Java VM. Killing the Java VM frees up some 
memory and some swap, but not nearly what I would expect. Only a reboot lets 
used memory return to expected levels. Rinse and repeat; it is always 
reproducible.

As I said, I originally thought this was a Tomcat or Java issue until I 
discovered this thread on the Java developer forum:

http://forum.java.sun.com/thread.jspa?threadID=678905&start=0&tstart=0

I believe my problem and the above are related. His problem was solved by 
upgrading to a 4.x release which included a 2.6.x kernel. 

Do you have an alternative to "Alt-Sysrq-m"? It produces nothing other 
than a beep in an ssh terminal window. All of my servers are headless and 
accessed via ssh or a terminal server.



Comment 4 Dave Anderson 2006-10-02 14:43:22 UTC
echo m > /proc/sysrq-trigger
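Dave's one-liner writes to the sysrq trigger file, which works over ssh on a headless box. A hedged expansion of the same idea, assuming standard procfs paths on 2.4-and-later kernels; it must be run as root:

```shell
# Non-console equivalent of Alt-Sysrq-m (run as root). The Mem-info
# report is written to the kernel log, not to the terminal.
if [ "$(id -u)" -eq 0 ]; then
    echo 1 > /proc/sys/kernel/sysrq   # ensure sysrq is enabled
    echo m > /proc/sysrq-trigger      # dump memory info to the kernel log
    dmesg | tail -n 40                # recent kernel log, including the dump
else
    echo "run as root" >&2
fi
```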


Comment 5 RHEL Program Management 2007-10-19 19:30:25 UTC
This bug is filed against RHEL 3, which is in its maintenance phase.
During the maintenance phase, only security errata and select mission
critical bug fixes will be released for enterprise products. Since
this bug does not meet those criteria, it is now being closed.
 
For more information on the RHEL errata support policy, please visit:
http://www.redhat.com/security/updates/errata/
 
If you feel this bug is indeed mission critical, please contact your
support representative. You may be asked to provide detailed
information on how this bug is affecting you.

