Bug 553561 - Increase work_mem before running baseline computation on postgres
Summary: Increase work_mem before running baseline computation on postgres
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: RHQ Project
Classification: Other
Component: Core Server
Version: 1.3
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Heiko W. Rupp
QA Contact:
URL: http://community.jboss.org/thread/146172
Whiteboard:
Depends On:
Blocks: rhq_triage jon24-perf rhq-perf
 
Reported: 2010-01-08 09:23 UTC by Heiko W. Rupp
Modified: 2014-06-09 16:43 UTC (History)
2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-02-23 19:17:27 UTC
Embargoed:



Description Heiko W. Rupp 2010-01-08 09:23:33 UTC
Description of problem:

By default, work_mem in Postgres is 1 or 2 MB. When computing baselines over a large amount of data, the computation can need more than this work_mem, so Postgres "swaps" to temporary files on disk. This severely limits performance, to the point where the computation runs into transaction timeouts.
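As a side note (not from the report itself): Postgres can log when a query spills to temporary files, which would let us confirm this is actually happening during baseline computation. A minimal configuration sketch:

```sql
-- Log every temporary file Postgres creates (the value is a size
-- threshold in kB; 0 means log all temp files). Requires superuser.
SET log_temp_files = 0;
```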

It would be possible to increase work_mem globally, but this new value would then apply to every connection (50 are configured) and may have other adverse effects.

It is possible to set the work_mem per connection in postgres:

rhq=> set work_mem=32768;
SET
rhq=> show work_mem;
 work_mem 
----------
 32MB
(1 row)
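The jump from 32768 to 32MB in the output above is expected: when work_mem is set without an explicit unit, the value is interpreted in kilobytes, and 32768 kB is 32 MB. Passing an explicit unit avoids the ambiguity:

```sql
-- Equivalent to "set work_mem=32768" (bare integers are read as kB)
set work_mem = '32MB';
```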

So, in the case of Postgres, we should increase work_mem before starting the computation:

MeasurementBaselineManagerBean - around line 210:
            conn = dataSource.getConnection();
            DatabaseType dbType = DatabaseTypeFactory.getDatabaseType(conn);

            if (dbType instanceof PostgresqlDatabaseType || dbType instanceof H2DatabaseType) {

+                Statement stm = conn.createStatement();
+                stm.execute("set work_mem=32768");
+                stm.close();


                insertQuery = conn.prepareStatement(MeasurementBaseline.NATIVE_QUERY_CALC_FIRST_AUTOBASELINE_POSTGRES);
                insertQuery.setLong(1, computeTime);


If it is only this one connection, we could even raise this to 64MB
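The proposed patch can be sketched as a small helper; the class and method names here are hypothetical, not from the RHQ code base. Two details worth noting: the bare integer passed to SET is interpreted in kilobytes (so 32 MB must be written as 32768), and since the original if-branch also matches H2, the SET should probably be restricted to Postgres, which does not recognize H2 and vice versa.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical helper illustrating the proposed per-connection tweak.
class WorkMemHelper {

    // work_mem set without a unit is interpreted in kB,
    // so 32 MB must be passed as 32768.
    static int workMemKb(int megabytes) {
        return megabytes * 1024;
    }

    // Raise work_mem for this connection only; other pooled
    // connections keep the server default.
    static void raiseWorkMem(Connection conn, int megabytes) throws SQLException {
        try (Statement stm = conn.createStatement()) {
            stm.execute("set work_mem=" + workMemKb(megabytes));
        }
    }
}
```

Because pooled connections are reused, it would also be prudent to issue `RESET work_mem` after the computation, or to use `SET LOCAL work_mem` inside the transaction so the setting reverts automatically at commit or rollback.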

Comment 3 wes hayutin 2010-02-16 16:57:36 UTC
Temporarily adding the keyword "SubBug" so we can be sure we have accounted for all the bugs.

keyword:
new = Tracking + FutureFeature + SubBug

Comment 4 wes hayutin 2010-02-16 17:02:34 UTC
making sure we're not missing any bugs in rhq_triage

Comment 6 Heiko W. Rupp 2014-02-23 19:17:27 UTC
I think this is no longer relevant, since we now keep the mass data in StorageNodes.

Comment 7 Mark Wagner 2014-06-09 16:43:41 UTC
clearing needinfo on a closed bug

