Created attachment 496877 [details]
YSlow showing 91 HTTP requests to load a page

Description of problem:
Loading a typical page in RHQ 4.0 requires more than 90 separate HTTP requests (see attached image, which documents this). The RHQ 4.0 server has fewer than 15 HTTP client threads in the default configuration to handle them. This is a performance constraint on page load. Consider tweaking the default configuration of RHQ to increase the number of HTTP client threads to remove this constraint.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Review pages with Firefox's YSlow plugin
2. Review the number of HTTP client threads available with a profiling tool

Actual results:

Expected results:

Additional info:
2 attachments:
1) an image documenting > 90 HTTP requests per page
2) an image documenting < 15 HTTP client threads available in the default RHQ server
Created attachment 496878 [details]
YourKit profiling showing only 15 HTTP client threads in the RHQ Server
The maxThreads option for the RHQ Server's HTTP connector is configured in RHQ_SERVER_HOME/jbossas/server/default/deploy/jboss-web.deployer/server.xml and is templatized to use the value of the rhq.server.startup.web.max-connections property in RHQ_SERVER_HOME/bin/rhq-server.properties. rhq.server.startup.web.max-connections is set to 200 out of the box, which is also the Tomcat default when the maxThreads option is not specified.

Mike, the fact that there are 15 HTTP connector threads doesn't mean that's the maximum; it just means that's all Tomcat has needed to create so far to handle incoming requests. Note, another way to check the number of threads currently in the pool is to check the value of the Threads Allocated metric on Connector Resources in RHQ.

When Tomcat runs out of HTTP connector threads, it logs a message similar to the following:

org.apache.tomcat.util.threads.ThreadPool logFull
SEVERE: All threads (200) are currently busy, waiting. Increase maxThreads (200) or check the servlet status

I have never seen this error, and I don't recall any users ever reporting it, so I think we can keep the value at 200 for now. If we start getting reports of users hitting that error, we'll know what we need to do.
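For reference, the wiring described above looks roughly like this. The property name, file paths, and the 200 default come from this comment; the port number and any connector attributes other than maxThreads are illustrative, since the actual Connector element in server.xml carries additional attributes that vary by RHQ version:

```
# RHQ_SERVER_HOME/bin/rhq-server.properties (out-of-the-box default)
rhq.server.startup.web.max-connections=200
```

```xml
<!-- RHQ_SERVER_HOME/jbossas/server/default/deploy/jboss-web.deployer/server.xml -->
<!-- sketch only: maxThreads is templatized from the property above;
     other attributes of the real Connector element are omitted here -->
<Connector port="7080"
           maxThreads="${rhq.server.startup.web.max-connections}" />
```

To raise the pool ceiling, a user would edit the property in rhq-server.properties rather than server.xml directly, so the change survives the template substitution.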