Description of problem:
Web UI: Subscription Management -> Virtualization Entitlements -> Guests Consuming Regular Entitlements

Navigating to the above path gives a page error and a TRACEBACK with one of my orgs. This page returns a result in org #1, which has approximately 70 virtual guest systems; the failure is with org #2, which has over 500 guest systems. The returned page contains a list of 2,347 systems under one custom base channel.

free -mt returns:

                    total       used       free     shared    buffers     cached
Mem:                 7985       7823        161          0        371       4870
-/+ buffers/cache:               2581       5403
Swap:                2000         33       1966
Total:               9985       7857       2128

Version-Release number of selected component (if applicable):
Satellite 5.4

How reproducible:
100%

Steps to Reproduce:

Additional info:
We suggested that the customer increase the heap size from -Xmx1024m to -Xmx2048m, and the customer confirmed the page loads successfully with a 2048M heap.
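For reference, the heap increase suggested above is normally made by editing the JAVA_OPTS line in the tomcat5 service configuration. The file path and surrounding option values below are assumptions based on a stock RHEL 5 Satellite host, not details taken from this report:

```shell
# Hypothetical sketch: raise the tomcat5 JVM heap ceiling from 1 GB to 2 GB.
# The config path and the exact JAVA_OPTS contents are assumptions; check
# your own /etc/tomcat5/tomcat5.conf before editing.
#
# Before:  JAVA_OPTS="... -Xms256m -Xmx1024m ..."
# After:   JAVA_OPTS="... -Xms256m -Xmx2048m ..."

sed -i 's/-Xmx1024m/-Xmx2048m/' /etc/tomcat5/tomcat5.conf

# The new limit only takes effect once the service restarts.
service tomcat5 restart
```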
Catalina out from customer:

Nov 19, 2010 9:09:41 AM redstone.xmlrpc.XmlRpcDispatcher writeError
WARNING: redstone.xmlrpc.XmlRpcFault: Either the password or username is incorrect.
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Snap dump using '/usr/share/tomcat5/Snap.20101119.125211.30177.0001.trc' in response to an event
JVMDUMP030W Cannot write dump to file /usr/share/tomcat5/Snap.20101119.125211.30177.0001.trc: Permission denied
JVMDUMP010I Snap dump written to /tmp/Snap.20101119.125211.30177.0001.trc
JVMDUMP030W Cannot write dump to file /usr/share/tomcat5/heapdump.20101119.125211.30177.0002.phd: Permission denied
JVMDUMP032I JVM requested Heap dump using '/tmp/heapdump.20101119.125211.30177.0002.phd' in response to an event
JVMDUMP010I Heap dump written to /tmp/heapdump.20101119.125211.30177.0002.phd
JVMDUMP030W Cannot write dump to file /usr/share/tomcat5/javacore.20101119.125211.30177.0003.txt: Permission denied
JVMDUMP032I JVM requested Java dump using '/tmp/javacore.20101119.125211.30177.0003.txt' in response to an event
JVMDUMP010I Java dump written to /tmp/javacore.20101119.125211.30177.0003.txt
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
2010-11-19 12:52:13,135 [TP-Processor6] ERROR org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/rhn].[org.apache.jsp.WEB_002dINF.pages.systems.entitlements.flexguests_jsp] - Servlet.service() for servlet org.apache.jsp.WEB_002dINF.pages.systems.entitlements.flexguests_jsp threw exception
java.lang.OutOfMemoryError
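A side note on the JVMDUMP030W "Permission denied" warnings above: the IBM J9 JVM could not write its dump artifacts to the Tomcat working directory and fell back to /tmp. IBM JVMs support standard environment variables for redirecting dump output; applying them to this particular setup (and the directory path used) is an assumption, not something done in this report:

```shell
# Sketch (assumed setup): give the IBM J9 JVM a writable destination for
# heap dumps, javacores, and system dumps, so they land somewhere
# predictable instead of falling back to /tmp.
mkdir -p /var/log/tomcat5/dumps
chown tomcat:tomcat /var/log/tomcat5/dumps

# Standard IBM JVM dump-directory environment variables; they would need to
# be exported from the tomcat service configuration (path assumed):
export IBM_HEAPDUMPDIR=/var/log/tomcat5/dumps
export IBM_JAVACOREDIR=/var/log/tomcat5/dumps
export IBM_COREDIR=/var/log/tomcat5/dumps
```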
Commit 0f515e171b973861e9bea3c5e9de1bb5c95047c9:

Changed the page to use the normal list tag instead of the new tree tag. The page is a little less 'fancy' now, but it should perform much faster. You can still select a specific channel family and convert systems based on just that. Will be easily backported to 5.4. -Justin
Found an issue that would cause duplicates to show up; note that this was an issue with the original page as well. Fixed in 854234f99a5d391e9601e1d518a7ab8c524eaa12.
VERIFIED

Packages fixing the issue are:
---
spacewalk-java-1.2.39-34.el5sat
spacewalk-java-oracle-1.2.39-34.el5sat
spacewalk-java-config-1.2.39-34.el5sat
spacewalk-java-lib-1.2.39-34.el5sat
spacewalk-taskomatic-1.2.39-34.el5sat
---

The OutOfMemoryError case is not appearing any more. Tested with 600 guests consuming regular entitlements plus 100+ more consuming flex guest entitlements: both pages are browsing fine.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0991.html
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. New Contents: Previously, the "Guests Consuming Regular Entitlements" page could encounter a page error when more than 500 systems were listed. This update corrects this error.