This is a tracking issue for improving the scalability of RHQ. The effort should cover a number of distinct aspects:

1) Backend service scalability testing (agents, metrics, events, calltime). Similar to what we've done for previous JON releases, but with a focus on automation. The goal here is twofold: make sure JON 3.0 performs better than previous releases, and build out a testing platform which can be reused if in the future we need to test alternative implementations for metric, event and calltime storage and retrieval.

2) UI scalability testing (performance of a standard setup, concurrent users, large inventory). Again, the goal is to improve upon the load footprint and number of concurrent users offered by previous JON versions.

3) Scalability of our JBAS7/EAP6 integration in terms of:
   a) the number of concurrent instances we can monitor/manage on a given host
   b) the total number of instances which can be monitored/managed by a given cluster of JON servers
There is obviously some intersection here with the JBAS7/EAP6 work (https://bugzilla.redhat.com/show_bug.cgi?id=707223).

4) With an eye to the future (post JON 3), investigate what techniques/technologies would be suitable to enable JON to scale 2x, 5x, 10x beyond where it is currently.

My expectation is that during the above evaluation for JON 3, architectural bottlenecks will be found in 1) rather than in 2) or 3), in which case we should investigate what options are available for improving our backend service scalability, in particular the metric storage system. With respect to the metric subsystem specifically, any investigation should include a review of how this data will be displayed to the user through any new metric charting implementation (e.g. as we change/optimize the metric storage, it's ok to change how/what precisely we display if that helps provide a more scalable architecture).
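For the automated backend load testing in 1), the general shape of a test driver might look like the sketch below. This is purely illustrative, not RHQ code: it simulates a number of agents concurrently reporting metric batches to a stand-in `ingest()` function and reports aggregate throughput. In a real harness, `ingest()` would be replaced by the actual call path into the server under test, and the agent/batch parameters would be swept to find the saturation point.

```python
# Illustrative load-generation harness -- all names here are hypothetical,
# none of this is RHQ/JON API. It simulates concurrent agents pushing
# metric report batches and measures aggregate ingest throughput.
import queue
import threading
import time


def ingest(batch, sink):
    """Stand-in for the server-side metric ingest path.

    A real test would issue the equivalent server call here and record
    per-request latency as well as throughput.
    """
    sink.put(len(batch))


def run_load(num_agents=50, reports_per_agent=20, batch_size=100):
    """Drive num_agents concurrent reporters; return (datapoints, seconds)."""
    sink = queue.Queue()

    def agent():
        # Each simulated agent sends a fixed number of metric batches.
        batch = [("resource-1", "metric-1", 42.0)] * batch_size
        for _ in range(reports_per_agent):
            ingest(batch, sink)

    threads = [threading.Thread(target=agent) for _ in range(num_agents)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    total = 0
    while not sink.empty():
        total += sink.get()
    return total, elapsed


if __name__ == "__main__":
    total, elapsed = run_load()
    print(f"ingested {total} datapoints in {elapsed:.3f}s")
```

The value of automating this (rather than one-off manual runs) is that the same driver can later be pointed at alternative metric/event/calltime storage implementations and produce directly comparable numbers.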