Bug 553561

Summary: Increase work_mem before running baseline computation on postgres
Product: [Other] RHQ Project
Reporter: Heiko W. Rupp <hrupp>
Component: Core Server
Assignee: Heiko W. Rupp <hrupp>
Status: CLOSED WONTFIX
Severity: medium
Priority: high
Version: 1.3
CC: cwelton, mwagner
Keywords: SubBug
Hardware: All
OS: All
URL: http://community.jboss.org/thread/146172
Doc Type: Bug Fix
Last Closed: 2014-02-23 14:17:27 EST
Bug Blocks: 565628, 577041, 620933

Description Heiko W. Rupp 2010-01-08 04:23:33 EST
Description of problem:

By default, work_mem in postgres is 1 or 2 MB. With large amounts of data, the baseline computation can need more than that, in which case postgres falls back to temporary files to "swap". This severely limits performance - up to the point where the computation runs into transaction timeouts.

It would be possible to increase work_mem globally, but that new value would apply to every connection (50 are configured) and may have other bad effects.

It is possible to set work_mem per connection in postgres (the value is interpreted as kB, so 32768 means 32MB):

rhq=> set work_mem=32768;
SET
rhq=> show work_mem;
 work_mem 
----------
 32MB
(1 row)

So, in the case of postgres, we should increase work_mem before starting the computation:

MeasurementBaselineManagerBean - around line 210:
            conn = dataSource.getConnection();
            DatabaseType dbType = DatabaseTypeFactory.getDatabaseType(conn);

            if (dbType instanceof PostgresqlDatabaseType || dbType instanceof H2DatabaseType) {

+                Statement stm = conn.createStatement();
+                stm.execute("set work_mem=32768");
+                stm.close();


                insertQuery = conn.prepareStatement(MeasurementBaseline.NATIVE_QUERY_CALC_FIRST_AUTOBASELINE_POSTGRES);
                insertQuery.setLong(1, computeTime);


If only this one connection is affected, we could even raise this to 64MB.
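A minimal sketch of what that patch could look like with proper resource handling (the Statement in the snippet above would leak if execute() threw). The boolean flag stands in for the RHQ DatabaseType check, and the helper names are hypothetical; note that the SET should probably be issued only for PostgreSQL, since H2 has no work_mem parameter:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BaselineWorkMem {

    // Hypothetical helper: raise work_mem for this session only, before the
    // baseline computation runs. SET is session-scoped, so the change does not
    // affect the other pooled connections.
    static void raiseWorkMem(Connection conn, boolean isPostgres) throws SQLException {
        if (!isPostgres) {
            return; // H2 and other databases have no work_mem parameter
        }
        Statement stm = conn.createStatement();
        try {
            stm.execute(sessionWorkMemSql(32768));
        } finally {
            stm.close(); // always release the statement, even on error
        }
    }

    // Builds the session-local SET statement; postgres reads the bare
    // number as kB, so 32768 corresponds to 32MB.
    static String sessionWorkMemSql(int kb) {
        return "set work_mem=" + kb;
    }
}
```

The helper would be called right after the DatabaseType check around line 210 of MeasurementBaselineManagerBean, before preparing the insert query.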
Comment 3 wes hayutin 2010-02-16 11:57:36 EST
Temporarily adding the keyword "SubBug" so we can be sure we have accounted for all the bugs.

keyword:
new = Tracking + FutureFeature + SubBug
Comment 4 wes hayutin 2010-02-16 12:02:34 EST
making sure we're not missing any bugs in rhq_triage
Comment 6 Heiko W. Rupp 2014-02-23 14:17:27 EST
I think this is no longer relevant, since the mass data now lives in the StorageNodes.
Comment 7 Mark Wagner 2014-06-09 12:43:41 EDT
clearing needinfo on a closed bug