Bug 878224

Summary: Updated alert defs may not fire in an HA environment
Product: [Other] RHQ Project
Component: Alerts, High Availability
Reporter: Jay Shaughnessy <jshaughn>
Assignee: Jay Shaughnessy <jshaughn>
QA Contact: Mike Foley <mfoley>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Version: 4.4
Target Release: RHQ 4.6
CC: hrupp, loleary
Hardware: All
OS: All
Doc Type: Bug Fix
Type: Bug
Bug Blocks: 878246
Last Closed: 2013-09-03 14:43:16 UTC

Description Jay Shaughnessy 2012-11-19 21:28:13 UTC
This is a longstanding but subtle problem that may be becoming more prevalent now that Availability Duration alerting makes availability recovery alert pairings more useful.

In an HA (high availability/multi-server) environment, updating an alert definition did not update certain condition types on every server. This included the following condition types:
 - Availability
 - Availability Duration
 - Resource Operation Execution
 - Resource Configuration Execution

Relevant updates included any condition change, a change to the condition match policy (all/any), enabling or disabling the alert definition, and possibly others. This implicitly affects recovery alerting, which disables and enables alert definitions, if those definitions contain condition types listed above.

The condition caches are properly updated on the HA server node that evaluates the alert definition update, but not on the other nodes. So the problem only surfaces when a subsequent condition match would have occurred on a server that was not properly updated.

In short, stale alert definitions are possible and may fire unexpectedly or fail to fire when expected.
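
For background, each RHQ HA server keeps its own in-memory copy of the global alert-condition cache and periodically reloads it when its row in the server table has been marked dirty (the CacheConsistencyManagerBean log lines in the reproduction below reflect that reload). Here is a minimal sketch of that scheme; the names (CacheConsistencySketch, ServerRecord, the mtime field, reloadGlobalConditionCache) are illustrative assumptions, not the actual RHQ classes:

    // Hypothetical sketch of the mtime-based cache consistency check;
    // all names here are assumptions, not the real RHQ code.
    public class CacheConsistencySketch {

        /** Minimal stand-in for a row in the HA server table. */
        static class ServerRecord {
            String name;
            volatile long mtime; // bumped to mark this node's cache dirty
            ServerRecord(String name) { this.name = name; }
        }

        private long lastReload; // when this node last rebuilt its cache

        /** Runs on a timer on each HA node: reloads only if our own row was touched. */
        void checkAndReload(ServerRecord self) {
            if (self.mtime > lastReload) {
                long start = System.currentTimeMillis();
                reloadGlobalConditionCache();
                lastReload = self.mtime;
                System.out.printf("%s took [%d]ms to reload global cache%n",
                        self.name, System.currentTimeMillis() - start);
            }
        }

        void reloadGlobalConditionCache() {
            // rebuild availability, operation, and resource-config condition sets
        }
    }

The bug follows directly from this shape: if an alert definition update bumps only one server's dirty marker, only that node ever reloads its cache.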

Comment 1 Jay Shaughnessy 2012-11-19 22:12:14 UTC
Here is a fairly simple example that reproduces the problem:

1) Create an HA env like:

Server A
 - Agent A connected
   - RHQ Server resource imported
     - some webapp (e.g. ROOT.war, jconsole.war), call it WAR A
 - GUI A connected
Server B
 - Agent B connected
 - GUI B connected

2) Using GUI A, create a GOES DOWN availability alert on WAR A

   - set it to Disable when fired

3) Wait 30s and then execute the Stop operation on WAR A (any gui)

   - You should see the alert fire and the alert def disable.
   - In the Server A log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerA took [28]ms to reload global cache

4) Execute the Start operation on WAR A (any gui)

5) Using GUI B, enable the alert definition. Wait 30s.

   - In the Server B log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerB took [28]ms to reload global cache

   - You will not see this message in the Server A log.

6) Execute the Stop operation on WAR A (any gui)

   - You will see the avail change to DOWN
   - You will not see an alert fire
   - The alert def will not disable

Comment 2 Jay Shaughnessy 2012-11-19 22:56:52 UTC
commit 8ab939690aefbb6316aca6336c41804f728d290e
Author: Jay Shaughnessy <jshaughn>
Date:   Mon Nov 19 17:49:07 2012 -0500

When setting the server status dirty to notify the need for global condition cache refresh, update *all* servers.  The global condition cache is supposed to be replicated across HA servers.  Otherwise, different servers will have different condition sets generating unexpected results.
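
Reading the commit message against the sketch in the description, the change amounts to touching every server's dirty marker instead of only the local one. A hedged before/after illustration using the same made-up ServerRecord type (this is not the actual commit diff):

    // Before: only the node that processed the alert-def update marked
    // itself dirty, so only its own cache was reloaded.
    void markDirtyLocalOnly(java.util.List<ServerRecord> servers, String thisServer) {
        long now = System.currentTimeMillis();
        for (ServerRecord s : servers) {
            if (s.name.equals(thisServer)) {
                s.mtime = now;
            }
        }
    }

    // After: mark *all* servers dirty so every HA node reloads its global
    // condition cache and the caches stay replicated across the cloud.
    void markDirtyAll(java.util.List<ServerRecord> servers) {
        long now = System.currentTimeMillis();
        for (ServerRecord s : servers) {
            s.mtime = now;
        }
    }

In the reproduction above, this means step 5 would now produce the "reload global cache" log line on Server A as well as Server B, so the alert fires in step 6.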

Comment 3 Heiko W. Rupp 2013-09-03 14:43:16 UTC
Bulk closing of issues in old RHQ releases that have been in production for a while now.

Please open a new issue if you run into a problem.