Bug 534550 - (RHQ-1336) Purges from RHQ_MEASUREMENT_DATA_NUM_* are not robust in the face of server outage
Product: RHQ Project
Classification: Other
Component: Monitoring
Hardware: All
OS: All
Priority: high
Severity: medium
Assigned To: RHQ Project Maintainer
: Improvement
Reported: 2009-01-08 21:23 EST by Charles Crouch
Modified: 2015-02-01 18:24 EST (History)

Doc Type: Enhancement

Attachments: None
Description Charles Crouch 2009-01-08 21:23:00 EST
The data purges from the RHQ_MEASUREMENT_DATA_NUM_* tables are of the form
   "delete where timestamp < (now - X)"
where X is 14/30/365 days, etc.
The purge job runs on one server in the cloud every hour, so when everything is running normally, data builds up for, say, 14 days, and then each hour the oldest hour of data is deleted.

The problem with this approach from a performance perspective is that it's possible for one delete statement to end up trying to delete a huge amount of data. For example, if a table has already passed its purge date (i.e. holds more than 14 days of data), then each day the purge doesn't run adds another day's worth of data to be deleted the very next time the purge runs. Ultimately, if no servers are running for 14 days, the first run of the data purge job will attempt to delete the entire 1H table. This can be such a large amount of data (113m rows in our perf env) that the delete statement doesn't actually complete in any reasonable timeframe.

Fortunately, this situation shouldn't occur very frequently, since it requires all the servers in the cloud to be down for a long time in an environment that has previously generated a large volume of data.

The solution would be to purge data based on the number of rows in each slice to be deleted, rather than just the age of the data.
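A minimal sketch of row-capped purging, here against SQLite for illustration; the table name, column name, and batch size are assumptions, not RHQ's actual schema or tuning:

```python
import sqlite3

BATCH_SIZE = 10_000  # cap on rows deleted per statement (assumed tuning knob)

def purge_by_row_count(conn, cutoff_ts, batch_size=BATCH_SIZE):
    """Delete rows older than cutoff_ts in bounded batches.

    Each DELETE touches at most batch_size rows, so a purge that is
    weeks behind issues many small transactions instead of one huge
    delete that may never finish.
    """
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM rhq_measurement_data_num_1h "
            "WHERE rowid IN (SELECT rowid FROM rhq_measurement_data_num_1h "
            "                WHERE time_stamp < ? LIMIT ?)",
            (cutoff_ts, batch_size),
        )
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < batch_size:
            break  # last partial batch: nothing older than cutoff remains
    return total
```

Because each batch commits separately, a server crash mid-purge loses at most one batch of progress; the next run simply resumes from the same age-based predicate.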
Comment 1 Charles Crouch 2009-01-12 16:11:52 EST
An alternative proposed by Joseph would be to have the purge jobs stick to the same "delete in one-hour chunks" regardless of whether this is the first time the purge has run after a long outage. This should ensure the amount each purge "bites" off is proportional to the amount of data written in one hour, rather than to the length of the server outage. This would help even after a relatively brief JON server outage, e.g. just 12 hours. Obviously, if "too much" data is written in any one-hour slot then this won't help. Your only option then is to calculate a timestamp which leaves you a reasonable number of rows to delete, e.g. maybe just 15 minutes' worth, or to avoid writing that much data into the DB in the first place.
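The hour-chunked alternative can be sketched as follows; the watermark handling, table name, and column name are assumptions for illustration, not the code RHQ actually shipped:

```python
import sqlite3

HOUR_MS = 3_600_000  # one hour in milliseconds

def purge_one_hour_chunk(conn, last_purged_ts, retention_cutoff_ts):
    """Advance the purge by at most one hour of data per run.

    Rather than deleting everything older than the retention cutoff in
    one statement, delete only the hour immediately after the last
    purge watermark. After a long outage the purge catches up one
    hour-sized bite per run instead of attempting one giant delete.
    Returns the new watermark and the number of rows deleted.
    """
    if last_purged_ts >= retention_cutoff_ts:
        return last_purged_ts, 0  # caught up; nothing eligible yet
    next_ts = min(last_purged_ts + HOUR_MS, retention_cutoff_ts)
    cur = conn.execute(
        "DELETE FROM rhq_measurement_data_num_1h "
        "WHERE time_stamp >= ? AND time_stamp < ?",
        (last_purged_ts, next_ts),
    )
    conn.commit()
    return next_ts, cur.rowcount
```

As the comment notes, this bounds each run by the volume written in one hour; if a single hour holds too many rows, the chunk interval would have to shrink further (e.g. 15 minutes).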
Comment 2 Joseph Marques 2009-09-04 18:06:49 EDT
Dup of RHQ-2372, which is already resolved.
Comment 3 Red Hat Bugzilla 2009-11-10 15:30:50 EST
This bug was previously known as http://jira.rhq-project.org/browse/RHQ-1336
This bug relates to RHQ-1354
This bug relates to RHQ-1355
This bug relates to RHQ-1703
Comment 4 wes hayutin 2010-02-16 16:10:12 EST
Mass move to component = Monitoring
