Bug 535237 - (RHQ-1955) group rendering is slow
Status: CLOSED CURRENTRELEASE
Product: RHQ Project
Classification: Other
Component: Performance
Version: unspecified
Hardware: All  OS: All
Priority: high  Severity: medium
Assigned To: Charles Crouch
QA Contact: Heiko W. Rupp
http://jira.rhq-project.org/browse/RH...
: Improvement
Depends On:
Blocks: rhq-perf
Reported: 2009-04-08 17:51 EDT by John Mazzitelli
Modified: 2015-07-07 19:14 EDT (History)
3 users

See Also:
Fixed In Version: 1.4
Doc Type: Enhancement
Last Closed: 2013-09-02 03:22:22 EDT


Attachments: None
Description John Mazzitelli 2009-04-08 17:51:00 EDT
Create a compatible group of 200 Linux platforms, non-recursive.

Traverse to that group and notice it is very slow to render; I think it is the left nav that is slow.

Also, go to a platform that has a lot of CPUs (say, 8) and traverse to the CPU auto-group; notice that it is slow too.
Comment 1 John Mazzitelli 2009-04-08 17:52:29 EDT
The CPU auto-group is slow only when going to the monitor>graphs subtab; the other tabs are snappy.
Comment 2 Greg Hinkle 2009-04-09 14:05:01 EDT
Slow spots

Small autogroup pages (with only 8 CPUs) are slow in the monitoring view, at least in the perftest environment

Single resource views - the tree is slow
  - test large single platform views because the tree loads every resource
  - N+1 problem on resources (testing resource query through hibernate.jsp) (3 round trips to do: select res from Resource res where res.id = 500050)
  - Tree has to be completely reloaded for any ajax stuff
  - Security impact on perf (the separate query to load what resource ids you can see)
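The N+1 problem called out above can be sketched with a toy sqlite3 schema (the table and column names here are illustrative, not RHQ's actual schema; the real code went through Hibernate, but the round-trip pattern is the same):

```python
import sqlite3

QUERIES = 0  # count round trips to the database

def run(conn, sql, *args):
    """Execute one statement and count it as one round trip."""
    global QUERIES
    QUERIES += 1
    return conn.execute(sql, args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)")
conn.execute("INSERT INTO resource VALUES (1, 'platform', NULL)")
conn.executemany("INSERT INTO resource VALUES (?, ?, 1)",
                 [(100 + i, f"cpu{i}") for i in range(8)])

def load_children_n_plus_1(conn, parent_id):
    # one query for the id list, then one more query PER child: N+1 round trips
    ids = [r[0] for r in run(conn,
           "SELECT id FROM resource WHERE parent_id = ? ORDER BY id", parent_id)]
    return [run(conn, "SELECT id, name FROM resource WHERE id = ?", i)[0] for i in ids]

def load_children_one_query(conn, parent_id):
    # the fix: a single query returns everything the view needs
    return run(conn,
           "SELECT id, name FROM resource WHERE parent_id = ? ORDER BY id", parent_id)

QUERIES = 0
slow = load_children_n_plus_1(conn, 1)   # 9 round trips for 8 children
n_plus_1_trips = QUERIES

QUERIES = 0
fast = load_children_one_query(conn, 1)  # 1 round trip
one_query_trips = QUERIES
```

With thousands of resources in a tree, the per-child queries dominate, which is why the tree loads below were reworked to use single reporting queries.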

Autogroups
   - Autogroup monitoring tab is very slow


Group views
  - Trees in group views are slow because they load every resource and descendant in those groups
  - N+1 problem multiplies with more resources in view


Autogroup and compatible group metric views are really slow
  - I remember that I accidentally made one of the UNIONs a regular UNION when it should have been a UNION ALL
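Why the UNION/UNION ALL mixup matters can be shown with a small sqlite3 sketch (table names are made up for illustration). UNION must deduplicate its result, which forces a distinct pass over every row, and for metric data it can also silently collapse legitimately repeated data points:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics_1h (value REAL)")   # illustrative table names
conn.execute("CREATE TABLE metrics_6h (value REAL)")
conn.executemany("INSERT INTO metrics_1h VALUES (?)", [(1.0,), (2.0,), (2.0,)])
conn.executemany("INSERT INTO metrics_6h VALUES (?)", [(2.0,), (3.0,)])

# UNION deduplicates: a distinct pass over all rows, and the repeated 2.0
# data points are silently collapsed into one
union_rows = conn.execute(
    "SELECT value FROM metrics_1h UNION SELECT value FROM metrics_6h").fetchall()

# UNION ALL just concatenates: no dedup pass, and no data points dropped
union_all_rows = conn.execute(
    "SELECT value FROM metrics_1h UNION ALL SELECT value FROM metrics_6h").fetchall()
```

Here the UNION returns 3 rows while UNION ALL returns all 5; on large metric tables the accidental dedup pass is pure wasted work.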



Comment 3 Joseph Marques 2009-04-22 16:39:44 EDT
agent to server sync performance:

* Instead of syncing measurement schedules O(N) times, where N is the number of nodes in the tree under a platform, we now sync only O(D) times, where D is the depth of the tree; each sync does a bulk INSERT...SELECT to create all schedules for all resources at that level.
* Alert template creation was moved to the server side.
* Got rid of a whole set of round trips that were unnecessary due to incorrect synchronization logic when merging a resource.
* Improved safety of InventoryManager by always ensuring that the resource container map is accessed via its synchronized wrapper.
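The bulk INSERT...SELECT idea can be sketched with sqlite3 (schema and names are simplified stand-ins, not RHQ's real tables): one statement creates schedules for every resource at a given tree depth, instead of one statement per resource.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resource (id INTEGER PRIMARY KEY, type_id INTEGER, depth INTEGER);
    CREATE TABLE measurement_def (id INTEGER PRIMARY KEY, type_id INTEGER, interval INTEGER);
    CREATE TABLE schedule (resource_id INTEGER, definition_id INTEGER, interval INTEGER);
""")
conn.executemany("INSERT INTO resource VALUES (?, 10, 1)",
                 [(i,) for i in range(100)])        # 100 resources at depth 1
conn.executemany("INSERT INTO measurement_def VALUES (?, 10, ?)",
                 [(1, 30), (2, 60)])                # 2 metric definitions for their type

# one round trip creates every schedule for every resource at this depth
conn.execute("""
    INSERT INTO schedule (resource_id, definition_id, interval)
    SELECT r.id, d.id, d.interval
    FROM resource r JOIN measurement_def d ON d.type_id = r.type_id
    WHERE r.depth = ?""", (1,))

created = conn.execute("SELECT COUNT(*) FROM schedule").fetchone()[0]
```

One statement inserts all 200 schedules (100 resources x 2 definitions), which is what reduces the sync cost from O(N) round trips to O(D).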

performance of tree loads:

Got rid of the N(N(N+1)) problem where a single tree load might hit the database thousands of times, by using reporting queries to bypass the Hibernate layer and building the in-memory tree structure by hand.
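Building the tree by hand from a flat result set can be sketched like this (the row shape and node names are illustrative; the real code assembles RHQ resource objects):

```python
# flat rows (id, parent_id, name) as one reporting query would return them
rows = [
    (1, None, "platform"),
    (2, 1, "server-a"),
    (3, 1, "server-b"),
    (4, 2, "service-x"),
]

def build_tree(rows):
    """Two linear passes over the flat result set; zero extra queries."""
    nodes = {rid: {"id": rid, "name": name, "children": []} for rid, _, name in rows}
    root = None
    for rid, parent_id, _ in rows:
        if parent_id is None:
            root = nodes[rid]
        else:
            nodes[parent_id]["children"].append(nodes[rid])
    return root

tree = build_tree(rows)
```

Because the node map is built before linking, the input rows can arrive in any order, so the reporting query needs no ORDER BY to support this.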

recursive groups:

Changed this from a flyweight object-layer solution to a native SQL-based solution; recursive computations now take a fraction of the time because the round trips were reduced from potentially thousands to exactly two. This speeds up setting/unsetting the recursive bit on groups, adding resources to a recursive group, and all dynagroup calculations.
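One way to push the recursion into SQL, sketched with sqlite3 and a recursive CTE (the schema is hypothetical; RHQ's actual native SQL from that era may have used a different formulation): a single round trip expands a group's explicit members into the full implicit membership.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resource (id INTEGER PRIMARY KEY, parent_id INTEGER);
    CREATE TABLE group_explicit (group_id INTEGER, resource_id INTEGER);
""")
# platform 1 -> server 2 -> services 3 and 4
conn.executemany("INSERT INTO resource VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 2), (4, 2)])
conn.execute("INSERT INTO group_explicit VALUES (10, 1)")  # group 10 explicitly holds 1

# one round trip computes the full implicit (recursive) membership; no
# per-resource queries, no object-layer traversal
members = {row[0] for row in conn.execute("""
    WITH RECURSIVE member(resource_id) AS (
        SELECT resource_id FROM group_explicit WHERE group_id = ?
        UNION ALL
        SELECT r.id FROM member m JOIN resource r ON r.parent_id = m.resource_id)
    SELECT resource_id FROM member""", (10,))}
```

The group's implicit membership comes back as {1, 2, 3, 4} from a single query; a second round trip to persist the result matches the "exactly two" figure above.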

other misc perf fixes:

* eager fetch XXXToOne relationships of resources, groups, and a few other objects to prevent unnecessary round trips
* fixed N+1 problem when loading favorite resources and groups in the nav bar
* fixed N+1 as well as N(N+1) problems on various resource and groups tabs, notably the alerts>history and operations>new subtabs
* fixed several instances of loading the dataModel for tabular data displays multiple times
* fixed favorite resource icon loading so it only makes one call per page instead of a dozen

facets caching:

All MICA icons across the entire system are now loaded once at startup, and are reloaded only when a plugin is deployed. The old code not only hit the database for EVERY row that needed MICA icons, it also had the N+1 problem, causing us to perform hundreds of queries per page just to decide whether or not to render icons. This was changed to load ALL facets for all resource types in a single query and cache them in a static in-memory map.
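The cache structure described above can be sketched as follows (function and table names are invented for illustration): one warm-up query populates a static map, and icon rendering then becomes a pure in-memory lookup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facet (type_id INTEGER, facet TEXT)")
conn.executemany("INSERT INTO facet VALUES (?, ?)",
                 [(1, "MEASUREMENT"), (1, "OPERATION"), (2, "MEASUREMENT")])

FACET_CACHE = {}  # type_id -> set of facets, shared by every page render

def warm_facet_cache(conn):
    """One query loads facets for every resource type; called at startup
    and again whenever a plugin is deployed."""
    FACET_CACHE.clear()
    for type_id, facet in conn.execute("SELECT type_id, facet FROM facet"):
        FACET_CACHE.setdefault(type_id, set()).add(facet)

def supports(type_id, facet):
    # pure in-memory lookup: no per-row query when deciding whether to render an icon
    return facet in FACET_CACHE.get(type_id, set())

warm_facet_cache(conn)
```

The trade-off is the usual one for static caches: the map must be explicitly refreshed (here by re-running warm_facet_cache) whenever the underlying facet data changes.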

pagination:

Got rid of loading the data for each table multiple times due to faulty logic in the PagedListDataModel that backs all tables.
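A minimal sketch of the fixed behavior, assuming a page-keyed cache (the class name echoes the one above, but this is an illustration, not RHQ's JSF implementation): repeated requests for the same page during one render hit the cache instead of reloading.

```python
class PagedListDataModel:
    """Caches the last page fetched so repeated calls for the same page
    during one render don't reload the data."""
    def __init__(self, fetch_page):
        self.fetch_page = fetch_page   # fetch_page(page_number) -> list of rows
        self.loads = 0                 # how many times the backend was actually hit
        self._cached_page = None
        self._cached_rows = None

    def get_rows(self, page):
        if page != self._cached_page:  # only hit the backend on a page change
            self.loads += 1
            self._cached_rows = self.fetch_page(page)
            self._cached_page = page
        return self._cached_rows

model = PagedListDataModel(lambda page: list(range(page * 10, page * 10 + 10)))
model.get_rows(0)
model.get_rows(0)   # the faulty logic would have loaded the data a second time here
```

After the two calls above, model.loads is 1; only asking for a different page triggers another load.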

graphs:

Serialized generation of graphs by using a single-threaded servlet model. This was necessary because graph units were being corrupted; upon investigation it was determined that the HighLowChartServlet is not stateless (as required by the servlet spec) and was causing collisions among multiple charts on the same page.
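The shape of that fix can be sketched in Python (the class is a stand-in for the servlet, not its real code): the renderer keeps mutable per-request state, so a lock serializes rendering the way the single-threaded servlet model does.

```python
import threading

class HighLowChartRenderer:
    """Stand-in for a non-stateless servlet: it keeps per-request state
    (units), so concurrent calls would corrupt it. A lock serializes
    access, mirroring the single-threaded servlet model."""
    def __init__(self):
        self._lock = threading.Lock()
        self.units = None              # mutable shared state -- the root of the bug

    def render(self, units, values):
        with self._lock:               # only one chart renders at a time
            self.units = units         # without the lock, another thread could
            return f"{min(values)}-{max(values)} {self.units}"  # overwrite this

renderer = HighLowChartRenderer()
chart = renderer.render("MB", [5, 1, 9])
```

Serializing requests trades throughput for correctness; the cleaner long-term fix is to make the renderer stateless by keeping units in a local variable rather than a field.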
Comment 4 Red Hat Bugzilla 2009-11-10 15:50:14 EST
This bug was previously known as http://jira.rhq-project.org/browse/RHQ-1955
This bug relates to RHQ-1002
Comment 5 Joseph Marques 2010-08-26 09:16:03 EDT
Perf analysis was done as part of the new GWT-based trees, using at least two entirely different tree strategies for both resources and groups to see which outperformed the other in each case. All 4 of those solutions should be checked in soon, after which Heiko should start his foray into aggregation of perf data by including render time for these trees in it.

As for group graphs, the query strategy for all of the group-wise pages has been changed (from long IN-clauses to filtering sub-queries). We need to capture perf data on the render time once the new graphs are included in the GWT UI.

I'm going to resolve this and make Heiko the tester.
Comment 9 Heiko W. Rupp 2013-09-02 03:22:22 EDT
Bulk closing of issues that were VERIFIED, had no target release, and whose status last changed more than a year ago.
