Description of problem:

By default, work_mem in Postgres is 1MB or 2MB. When computing baselines over a lot of data, the computation can need more than this work_mem, so Postgres falls back to temporary files to "swap", which severely limits performance - up to the point where the computation runs into transaction timeouts.

It would be possible to globally increase work_mem, but the new value would then apply to every connection (50 configured) and may have other bad effects.

It is possible to set work_mem per connection in Postgres:

  rhq=> set work_mem=32768;
  SET
  rhq=> show work_mem;
   work_mem
  ----------
   32MB
  (1 row)

So, in the Postgres case, we should increase work_mem before starting the computation.

MeasurementBaselineManagerBean - around line 210:

        conn = dataSource.getConnection();
        DatabaseType dbType = DatabaseTypeFactory.getDatabaseType(conn);
        if (dbType instanceof PostgresqlDatabaseType || dbType instanceof H2DatabaseType) {
+           Statement stm = conn.createStatement();
+           stm.execute("set work_mem=32768");
+           stm.close();
            insertQuery = conn.prepareStatement(MeasurementBaseline.NATIVE_QUERY_CALC_FIRST_AUTOBASELINE_POSTGRES);
            insertQuery.setLong(1, computeTime);

If it is only this one connection, we could even raise this to 64MB.
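
A minimal sketch of the idea in plain JDBC (the method name computeWithLargerWorkMem and the insertSql parameter are illustrative, not the actual RHQ code). It bumps work_mem only on the one connection used for the baseline calculation and resets it afterwards, since pooled connections are reused. Note that 32768 without a unit means 32768 KB, i.e. 32MB:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class BaselineWorkMemSketch {

        // Raise work_mem for this connection only, run the heavy insert,
        // then reset so the larger value does not leak back into the pool.
        static void computeWithLargerWorkMem(DataSource dataSource, String insertSql, long computeTime)
            throws Exception {
            try (Connection conn = dataSource.getConnection()) {
                try (Statement stm = conn.createStatement()) {
                    stm.execute("set work_mem=32768"); // 32MB, per connection only
                }
                try (PreparedStatement insertQuery = conn.prepareStatement(insertSql)) {
                    insertQuery.setLong(1, computeTime);
                    insertQuery.executeUpdate();
                } finally {
                    try (Statement stm = conn.createStatement()) {
                        stm.execute("reset work_mem"); // back to the server default
                    }
                }
            }
        }
    }

Resetting in the finally block is optional if the pool discards sessions, but it keeps the change strictly scoped to the baseline computation.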
Temporarily adding the keyword "SubBug" so we can be sure we have accounted for all the bugs. keyword: new = Tracking + FutureFeature + SubBug
making sure we're not missing any bugs in rhq_triage
I think this is no longer relevant since we have the mass data in StorageNodes now
clearing needinfo on a closed bug