Bug 534549 (RHQ-1335)

Summary: Events subsystem needs to support throttling when adding events
Product: [Other] RHQ Project
Reporter: Charles Crouch <ccrouch>
Component: Performance
Assignee: RHQ Project Maintainer <rhq-maint>
Status: CLOSED NOTABUG
Severity: medium
Priority: high
Version: unspecified
CC: hbrock
Keywords: Improvement
Hardware: All
OS: All
URL: http://jira.rhq-project.org/browse/RHQ-1335
Fixed In Version: 1.2
Doc Type: Enhancement

Description Charles Crouch 2009-01-09 00:45:00 UTC
Right now there doesn't appear to be any upper limit on the rate at which rows can be added to the RHQ_EVENT table. This is unlike metric data, where collection intervals and the number of resources in the inventory can be used to influence the rate at which data comes in. For example, multiple agents monitoring log files to which entries were being written every millisecond ended up producing more than 17 GB of data in the RHQ_EVENT table. Inserting this amount of data puts excessive strain on the database and most likely also makes subsequently purging the data impossible.

The solution to this issue would be to ensure that "event floods" are not allowed to crush the database, and that our event purge algorithm can keep up with the rate at which event rows are added as well as the total table size that can result.
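One way the kind of throttling described above could look is a simple per-period cap applied before events are persisted: once a source exceeds N events in a window, further events are shed instead of inserted. The sketch below is illustrative only; the class and method names are hypothetical and not part of the actual RHQ code base.

```java
/**
 * Minimal sketch of a count-based event throttle (hypothetical names, not
 * the real RHQ API). Events beyond maxEventsPerPeriod within one period
 * are dropped rather than inserted into RHQ_EVENT.
 */
public class EventThrottle {
    private final int maxEventsPerPeriod;
    private final long periodMillis;
    private long periodStart;
    private int countThisPeriod;

    public EventThrottle(int maxEventsPerPeriod, long periodMillis) {
        this.maxEventsPerPeriod = maxEventsPerPeriod;
        this.periodMillis = periodMillis;
        this.periodStart = 0L;
        this.countThisPeriod = 0;
    }

    /** Returns true if the event should be stored, false if it must be dropped. */
    public synchronized boolean tryAccept(long nowMillis) {
        if (nowMillis - periodStart >= periodMillis) {
            periodStart = nowMillis;   // new period: reset the counter
            countThisPeriod = 0;
        }
        if (countThisPeriod < maxEventsPerPeriod) {
            countThisPeriod++;
            return true;               // under the cap: accept the event
        }
        return false;                  // over the cap: shed the event
    }
}
```

A flood of five events arriving in the same period with a cap of three would then see the first three accepted and the last two dropped; the dropped count could additionally be recorded so the UI can report that shedding occurred.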

Comment 1 John Mazzitelli 2009-01-09 01:16:21 UTC
this is a duplicate of RHQ-1122

Comment 2 Red Hat Bugzilla 2009-11-10 20:30:49 UTC
This bug was previously known as http://jira.rhq-project.org/browse/RHQ-1335
This bug duplicates RHQ-1122