Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 900427 (JBPAPP6-967)

Summary: Memory leaks when deploying message driven beans
Product: [JBoss] JBoss Enterprise Application Platform 6
Reporter: Jan Martiska <jmartisk>
Component: JMS
Assignee: Jeff Mesnil <jmesnil>
Status: CLOSED NEXTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: high
Version: 6.0.0
CC: bgeorges, brian.stansberry, clebert.suconic, dimitris, jmartisk, jmesnil, jpai, mharvey, rajesh.rajasekaran
Target Milestone: ---   
Target Release: EAP 6.0.1   
Hardware: Unspecified   
OS: Unspecified   
URL: http://jira.jboss.org/jira/browse/JBPAPP6-967
Whiteboard: eap601candidate
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Last Closed: 2012-11-06 05:50:44 UTC
Type: Bug
Attachments:
threads.png
memoryleak.png

Description Jan Martiska 2012-05-14 10:58:25 UTC
project_key: JBPAPP6

After 1000 deployments of the helloworld-mdb quickstart, the memory footprint of EAP rises by 100 or more megabytes, even after forcing any number of GCs.

Testing is performed this way:
- run EAP
- deploy/undeploy the application once so that lazy initialization completes
- force garbage collector 10 times
- measure START memory footprint
- deploy/undeploy application 1000 times
- force garbage collector 10 times
- measure END memory footprint
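The measurement steps above (force GC several times, then read the footprint) can be sketched with the standard `java.lang.management` API. This is only a minimal illustration of the procedure, not the actual test harness; the class and method names here are invented:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FootprintCheck {

    // Force several GC passes, then read the used heap, mirroring the
    // "force garbage collector 10 times / measure footprint" steps above.
    // Note: MemoryMXBean.gc() (like System.gc()) is only a hint to the JVM.
    public static long usedHeapAfterGc(int gcPasses) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < gcPasses; i++) {
            mem.gc();
        }
        return mem.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        long start = usedHeapAfterGc(10);
        // ... 1000 deploy/undeploy cycles would run here ...
        long end = usedHeapAfterGc(10);
        System.out.println("footprint delta = " + (end - start) + " bytes");
    }
}
```

In a healthy server the delta between the two measurements should stay near zero regardless of the number of cycles.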

Hudson run: https://hudson.qa.jboss.com/hudson/job/eap6-deployment-soak-test-quickstart-jms/15/jdk=java16_default,label=RHEL6_x86_64/console
the results say:
{noformat}
Used memory before soak test: 83916888
Used memory after soak test: 253433752
{noformat}

This varies a bit depending on the JVM used; the difference is largest on the 64-bit Sun JVM.

Also, after some number of deployments, even if you undeploy the MDB and EAP runs idle, HornetQ threads seem to emerge and get stuck in the "WAITING" state:
{noformat}
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
java.lang.Thread.run(Thread.java:662)
{noformat}
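The parked frames above are typical of an idle `ThreadPoolExecutor` worker waiting for work inside `getTask()`. A generic illustration of why such threads accumulate (this is not HornetQ's code; it only shows that a thread pool which stays reachable and is never shut down keeps idle workers parked, as in the dump):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolLeakSketch {

    // Submit one task to a cached pool that is never shut down and return
    // the worker thread. After the task finishes, the worker stays alive,
    // parked in SynchronousQueue.poll inside ThreadPoolExecutor.getTask --
    // the same frames as in the thread dump above.
    public static Thread spawnIdleWorker() throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        final Thread[] worker = new Thread[1];
        pool.submit(() -> { worker[0] = Thread.currentThread(); });
        TimeUnit.MILLISECONDS.sleep(200); // let the task run; worker goes idle
        // Calling pool.shutdown() here would let the worker terminate promptly;
        // a leaked, still-reachable pool never does that.
        return worker[0];
    }
}
```

If each deploy/undeploy cycle leaks a reachable pool (or a factory holding one), these parked workers pile up over time.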

See the attached JConsole screenshots. The memory footprint graph during the test clearly shows that, had the test run longer, it would sooner or later hit an OutOfMemoryError, because the footprint after GC (the values of the local minima) increases over time. It should look roughly the same as in the "idle EAP" part of the graph.

For other types of applications, this issue DOES NOT occur (tested with OSGi, servlet, EJB3, Hibernate, and CDI applications).

*UPDATE* I put heap+thread dumps here: http://download.eng.brq.redhat.com/scratch/jmartisk/jbpapp9005/

Comment 1 Jan Martiska 2012-05-14 10:59:00 UTC
Screenshot - threads emerging after test

Comment 2 Jan Martiska 2012-05-14 10:59:00 UTC
Attachment: Added: threads.png


Comment 3 Jan Martiska 2012-05-14 10:59:23 UTC
Screenshot - memory footprint graph

Comment 4 Jan Martiska 2012-05-14 10:59:24 UTC
Attachment: Added: memoryleak.png


Comment 5 Brian Stansberry 2012-05-14 13:10:41 UTC
Can you take a heap dump when you "measure START memory footprint" and "measure END memory footprint"?

Comment 6 Jan Martiska 2012-05-15 06:24:36 UTC
thread&heap dumps added http://download.eng.brq.redhat.com/scratch/jmartisk/jbpapp9005/

Comment 7 Jeff Mesnil 2012-05-15 06:31:12 UTC
Jan, I get a 403 error when I try to download the heapdump_{before, after} files

Comment 8 Jan Martiska 2012-05-15 06:33:58 UTC
Oh yeah, sorry. It should be fixed now.

Comment 9 Jeff Mesnil 2012-05-15 09:44:20 UTC
I have analyzed your dumps and reproduced the issue with a few deployments/undeployments of the helloworld-mdb quickstart.

There seems to be a leak of the client session factories created by the HornetQ resource adapter during MDB deployment; they are not properly cleaned up on undeployment.

Using jmap dumps and jhat, I collected these figures:

{noformat}
0) AS7 boot:
   1 instance of ClientSessionImpl (CSI) (used by HornetQXAResourceWrapper)
   2 instances of ClientSessionFactoryImpl (CSFI) (1 for HornetQXAResourceWrapper & 1 for the in-vm connector)
1) 1st MDB deployment
   16 CSI ( 1 + 15 new created by HornetQ RA (corresponds to MAX_SESSION))
   17 CSFI (15 new for each session)
2) 1st MDB undeployment
   16 CSI
   17 CSFI
   => nothing has been cleaned up... not sure it is normal here...
3) 2nd MDB deployment
   16 CSI
   32 CSFI (15 new instances have been created by the RA but the other instances are still there)
4) 2nd MDB undeployment
   16 CSI
   32 CSFI 
   => nothing cleaned up
5) 3rd MDB deployment
   16 CSI
   47 CSFI (15 new instances again)
6) 3rd MDB undeployment
   16 CSI
   47 CSFI
7) 7 deployment/undeployment cycles
   106 CSI => have not yet understood why it bumped up from 16... race conditions?
   152 CSFI (2 + 15 x 10)
      => at each deployment, we create 15 CSFI, none are cleaned up at undeployment
{noformat}

I am continuing to investigate the issue and to pin it down in the HornetQ RA code.
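The pattern in the figures above (counts grow by a fixed amount per deploy/undeploy cycle) matches a per-activation resource that the deactivation path never releases. A generic sketch of that bug class and its fix follows; this is not the actual HornetQ RA code, and the class, method, and key names are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RaLeakSketch {

    /** Stand-in for ClientSessionFactoryImpl; only close() matters here. */
    public static class SessionFactory {
        private boolean closed;
        public void close() { closed = true; } // would release sessions, threads, ...
        public boolean isClosed() { return closed; }
    }

    // One factory per MDB activation, keyed by deployment name (illustrative).
    private final Map<String, SessionFactory> factories = new ConcurrentHashMap<>();

    public void endpointActivation(String deployment) {
        factories.put(deployment, new SessionFactory());
    }

    // The bug class described above: if an undeployment path forgets this
    // remove-and-close step, every factory stays reachable, so instance
    // counts grow by a fixed amount per cycle, as in the jmap figures.
    public void endpointDeactivation(String deployment) {
        SessionFactory f = factories.remove(deployment);
        if (f != null) {
            f.close();
        }
    }

    public int liveFactories() {
        return factories.size();
    }
}
```

With the release step in place, `liveFactories()` returns to its baseline after each cycle instead of accumulating 15 new instances per deployment.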

Comment 10 Jeff Mesnil 2012-05-15 14:35:46 UTC
Link: Added: This issue depends on HORNETQ-927


Comment 11 Jeff Mesnil 2012-05-15 14:35:47 UTC
the leak occurs in HornetQ RA code

Comment 12 Mike Harvey 2012-05-15 16:19:03 UTC
We want to fix this, but not right now.  We don't want to risk regression and given the scenario "deploy/undeploy application 1000 times", it would be unlikely to hit this problem in a "production" environment any time soon.  Target 6.0.1.

Comment 13 Clebert Suconic 2012-05-16 01:58:35 UTC
If there isn't any other leaks, I have just made a PR: https://github.com/jbossas/jboss-as/pull/2319

Comment 14 Clebert Suconic 2012-05-16 01:59:00 UTC
I meant: if there aren't any other leaks, you may resolve this issue

Comment 15 Jeff Mesnil 2012-05-16 09:10:28 UTC
I verified locally that the memory leak does not occur with HornetQ 2.2.17.Final.

Comment 16 Rajesh Rajasekaran 2012-05-18 20:28:09 UTC
Labels: Added: eap6_need_triage


Comment 17 Anne-Louise Tangring 2012-05-21 17:52:31 UTC
This is not a blocker for EAP 6. This is not a realistic use case.

Comment 18 Misty Stanley-Jones 2012-06-19 03:16:31 UTC
Release Notes Docs Status: Added: Documented as Known Issue
Release Notes Text: Added: Memory leaks have occurred during testing of message-driven beans (MDBs). After 1000 deployments of a quickstart application, the memory footprint of the application server was increased by 100 or more megabytes. The performance differed depending on which JVM was used. The highest memory footprint was seen with the 64-bit Oracle JVM. The leakage occurs in the client session factories created by the HornetQ resource adapter during MDB deployment, but not cleaned up properly during undeployment.
                    
This memory leak will be fixed by a future upgrade to the HornetQ component. However, the testing scenario is not considered a risk for a production environment.


Comment 19 Rajesh Rajasekaran 2012-07-11 19:50:11 UTC
Labels: Removed: eap6_need_triage Added: eap601candidate


Comment 22 Jeff Mesnil 2012-08-17 08:02:05 UTC
A later version of HornetQ (2.2.19.Final) has gone into AS7 upstream with the fix for this issue.

Comment 23 Jan Martiska 2012-09-26 13:12:30 UTC
Tested with EAP 6.0.1.ER2. No memory leaks found. Good job, thanks!

Comment 24 Dana Mison 2012-11-06 05:50:11 UTC
Release Notes Text: Removed: Memory leaks have occurred during testing of message-driven beans (MDBs). After 1000 deployments of a quickstart application, the memory footprint of the application server was increased by 100 or more megabytes. The performance differed depending on which JVM was used. The highest memory footprint was seen with the 64-bit Oracle JVM. The leakage occurs in the client session factories created by the HornetQ resource adapter during MDB deployment, but not cleaned up properly during undeployment.
                    
This memory leak will be fixed by a future upgrade to the HornetQ component. However, the testing scenario is not considered a risk for a production environment. Added: A small memory leak was identified in the HornetQ component when deploying and undeploying message-driven beans.  This was caused by the client session factories created by the HornetQ resource adapter during MDB deployment not being cleaned up properly during undeployment. The HornetQ component has been updated to resolve this issue.  This memory leak no longer occurs.


Comment 25 Dana Mison 2012-11-06 05:50:18 UTC
Release Notes Docs Status: Removed: Documented as Known Issue Added: Documented as Resolved Issue


Comment 26 Dana Mison 2012-11-06 05:50:37 UTC
Writer: Added: Darrin


Comment 27 Anne-Louise Tangring 2012-11-13 20:07:55 UTC
Release Notes Docs Status: Removed: Documented as Resolved Issue 
Writer: Removed: Darrin 
Release Notes Text: Removed: A small memory leak was identified in the HornetQ component when deploying and undeploying message-driven beans.  This was caused by the client session factories created by the HornetQ resource adapter during MDB deployment not being cleaned up properly during undeployment. The HornetQ component has been updated to resolve this issue.  This memory leak no longer occurs. 
Docs QE Status: Removed: NEW