Bug 676035

Summary: Possible OutOfMemory situation
Product: [Other] RHQ Project
Reporter: Heiko W. Rupp <hrupp>
Component: Core Server
Assignee: RHQ Project Maintainer <rhq-maint>
Status: CLOSED DUPLICATE
QA Contact: Mike Foley <mfoley>
Severity: unspecified
Priority: medium
Version: 4.0.0
CC: ian.springer
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2011-08-03 16:42:45 UTC

Description Heiko W. Rupp 2011-02-08 16:59:52 UTC
Reported by Bala Nair on rhq-devel

https://fedorahosted.org/pipermail/rhq-devel/2011-February/000570.html

Following up on a post from about a month ago. We were seeing a persistent slow memory leak in the RHQ server's tenured generation that eventually led to an OutOfMemoryError after the server had been running for about a week. I captured a heap dump and found hundreds of thousands of stateless session beans in memory. Here's a snapshot from my profiler of the classes with the greatest number of instances:

Name	Objects	Shallow Size (bytes)	Retained Size (bytes)
java.util.HashMap$Entry	1939755	93108240	189082696
java.util.HashMap$Entry[]	1090957	167796768	340273520
java.util.HashMap	1084265	69392960	408521632
java.util.LinkedList$Entry	860965	34438600	727956072
org.jboss.ejb3.BaseSessionContext	856281	34251240	34251240
org.rhq.enterprise.server.authz.RequiredPermissionsInterceptor	856281	13700496	13700496
org.rhq.enterprise.server.common.TransactionInterruptInterceptor	856281	13700496	13700496
org.jboss.ejb3.stateless.StatelessBeanContext	856265	68501200	490959040
java.lang.String	429025	17161000	48902064
char[]	379454	37897872	37897872
java.lang.Integer	171633	4119192	4119192
java.util.Hashtable$Entry	157623	7565904	34980432
java.util.TreeMap$Entry	105496	6751744	14950816
java.lang.String[]	98401	4340480	6555536
org.rhq.enterprise.server.auth.SubjectManagerBean	91116	6560352	49567104
org.rhq.enterprise.server.auth.TemporarySessionPasswordGenerator	91116	3644640	43006752
org.rhq.enterprise.server.authz.AuthorizationManagerBean	91115	2186760	2186760
org.rhq.enterprise.server.alert.AlertConditionManagerBean	91084	2914688	2914688
org.rhq.enterprise.server.alert.AlertManagerBean	90914	9455056	9455056
org.rhq.enterprise.server.alert.AlertDefinitionManagerBean	90911	4363728	4363728
org.rhq.enterprise.server.alert.AlertConditionLogManagerBean	90903	5090568	5090568
org.rhq.enterprise.server.alert.CachedConditionManagerBean	90903	4363344	4363344
org.rhq.enterprise.server.alert.AlertDampeningManagerBean	90903	3636120	3636120
org.jboss.security.SecurityAssociation$SubjectContext	49229	2362992	2362992
org.rhq.enterprise.server.cloud.instance.ServerManagerBean	39354	3463152	3463152
org.rhq.enterprise.server.cloud.CloudManagerBean	39354	2833488	2833488
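
(A dump like this can be captured with the stock JDK tools; the PID and file name below are placeholders:

    jmap -dump:live,format=b,file=rhq-heap.hprof <rhq-server-pid>

Starting the JVM with -XX:+HeapDumpOnOutOfMemoryError also writes a dump automatically the next time the OOM occurs.)
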
Here are the merged paths from the SubjectManagerBean instances to the GC root:

<All the objects>
org.jboss.ejb3.stateless.StatelessBeanContext
java.util.LinkedList$Entry
java.util.LinkedList$Entry
java.util.LinkedList
org.jboss.ejb3.InfinitePool
org.jboss.ejb3.ThreadlocalPool
org.jboss.ejb3.stateless.StatelessContainer
All the other manager beans have similar merged paths. So I started to wonder why there were so many SLSBs in the ThreadlocalPools, and after some digging I found this thread (http://community.jboss.org/message/363520), which roughly describes what I'm seeing. I still don't know why it's happening, but it gave me something to try: I changed the Stateless Bean pool class in ejb3-interceptors-aop.xml from ThreadlocalPool to StrictMaxPool.

Now when I run the server and watch it with my profiler, I see at most 3 SubjectManagerBeans in memory, and the same appears to be true for the other SLSBs. This isn't a solution to the underlying problem, but I'm hoping someone can shed light on what's really going on. I would be happy to upload the heap dump somewhere public, but it's almost a GB in size.
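
For reference, here is a sketch of the stanza in ejb3-interceptors-aop.xml with the pool class swapped. This is paraphrased from the JBoss AS 4.2.x defaults that RHQ runs on, so the exact annotation name and attribute values may differ in your copy:

    <domain name="Stateless Bean" extends="Intercepted Bean" inheritBindings="true">
       <!-- default was value=org.jboss.ejb3.ThreadlocalPool.class -->
       <annotation expr="!class(@org.jboss.annotation.ejb.PoolClass)">
          @org.jboss.annotation.ejb.PoolClass (value=org.jboss.ejb3.StrictMaxPool.class, maxSize=30, timeout=10000)
       </annotation>
       ...
    </domain>

StrictMaxPool caps the pool at maxSize instances per bean, which matches the at-most-3 SubjectManagerBeans observed above, whereas ThreadlocalPool keeps a separate unbounded InfinitePool per thread (the InfinitePool/LinkedList chain visible in the merged paths).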

Comment 1 Ian Springer 2011-08-03 16:42:45 UTC

*** This bug has been marked as a duplicate of bug 693232 ***