From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031007

Description of problem:
When running Resin 2.1.6, the system's swap space utilization grows steadily. This is on kernel-2.4.21-4 and could be a kernel leak. The problem does not occur when running the same software on Solaris systems.

Version-Release number of selected component (if applicable):
kernel-2.4.21-4

How reproducible:
Always

Steps to Reproduce:
1. Run the software.
2. Check swap space with the "top" command.

Actual Results:
Swap space utilization keeps increasing.

Expected Results:
Swap space utilization should not keep rising.

Additional info:
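The "check swap space with top" step above can be automated for logging over time. A minimal sketch (not from the report, just a hypothetical monitoring aid) that samples swap usage directly from /proc/meminfo, which is easier to record than interactive top output:

```shell
#!/bin/sh
# Print current swap totals from /proc/meminfo (values are in kB).
# To watch the trend while Resin runs, invoke this from cron or under
# "watch -n 60" and compare SwapFree across samples.
awk '/^SwapTotal:|^SwapFree:/ { print $1, $2, $3 }' /proc/meminfo
```

A steadily shrinking SwapFree across samples, with the Java heap capped well below physical memory, would match the behavior described in this report.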
Right -- there's absolutely nothing to work with here. Please re-run the application, and while it is running and presumably taking up swap space, give us the output of several Alt-SysRq-m dumps.
I can reproduce this on 2.4.21.47EL.smp as follows: I run Tomcat 5.5.16 on a Java 1.5.0_6-b5 VM with a couple of deployed webapps. The box is a 2-CPU, 32-bit machine with 2 GB of memory.

What made me start searching for clues was that garbage collection in the VM would consistently take longer each time a full GC was executed. My first suspicion was the Java VM itself and its GC routines. When I set up the VM to log GC information, I saw very odd behavior. In parallel GC mode, full GCs kept taking longer on each subsequent run: the first took under 10 seconds, and the last, by the time Tomcat finally had to be shut down, was upwards of 20 minutes! While a full GC was running, the Java VM process would be in uninterruptible sleep, making it impossible to get a thread dump. I modified the settings to switch to concurrent mark-sweep, and the first full GC completely froze the VM, though other tools did not seem to suffer.

I noticed that swap and memory utilization just keep climbing until all physical memory and swap are completely allocated, despite the fact that I have allocated only 1 GB to the Java VM. Killing the Java VM frees up some memory and some swap, but not nearly as much as I would expect. Only a reboot returns used memory to expected levels. Rinse and repeat. It is always reproducible.

As I said, I originally thought this was a Tomcat or Java issue until I discovered this thread on the Java developer list: http://forum.java.sun.com/thread.jspa?threadID=678905&start=0&tstart=0 I believe my problem and the one above are related. His problem was solved by upgrading to a 4.x release, which includes a 2.6.x kernel.

Do you have an alternative to Alt-SysRq-m? It produces nothing but a beep in an ssh terminal window. All of my servers are headless and accessed via ssh or a terminal server.
echo m > /proc/sysrq-trigger
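A sketch of how that might be scripted over ssh. Writing "m" to /proc/sysrq-trigger makes the kernel dump its memory state into the kernel ring buffer rather than the terminal, so the output is read back with dmesg; the root check is an assumption added here so the script fails gracefully on an unprivileged account:

```shell
#!/bin/sh
# Trigger a SysRq-m memory dump and capture the tail of the kernel log.
# Requires root; needs no console, so it works fine over ssh.
if [ -w /proc/sysrq-trigger ]; then
    echo m > /proc/sysrq-trigger
    dmesg | tail -n 40
else
    echo "need root: /proc/sysrq-trigger is not writable" >&2
fi
```

Running this several times while the application is consuming swap, and saving each dmesg excerpt, would provide the data requested in comment 2.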
This bug is filed against RHEL 3, which is in its maintenance phase. During the maintenance phase, only security errata and select mission-critical bug fixes will be released for enterprise products. Since this bug does not meet those criteria, it is now being closed. For more information on the RHEL errata support policy, please visit: http://www.redhat.com/security/updates/errata/ If you feel this bug is indeed mission critical, please contact your support representative. You may be asked to provide detailed information on how this bug is affecting you.