Bug 878224 - Updated alert defs may not fire in an HA environment
Status: CLOSED CURRENTRELEASE
Product: RHQ Project
Classification: Other
Component: Alerts, High Availability
Version: 4.4
Hardware: All
OS: All
Priority: high
Severity: high
Target Release: RHQ 4.6
Assigned To: Jay Shaughnessy
QA Contact: Mike Foley
Depends On:
Blocks: 878246
Reported: 2012-11-19 16:28 EST by Jay Shaughnessy
Modified: 2013-09-03 10:43 EDT
CC List: 2 users

Doc Type: Bug Fix
Clones: 878246
Last Closed: 2013-09-03 10:43:16 EDT
Type: Bug


Attachments: None
Description Jay Shaughnessy 2012-11-19 16:28:13 EST
This is a longstanding but subtle problem that may become more prevalent now that Availability Duration alerting makes availability recovery alert pairings more useful.

In an HA (high availability/multi-server) environment, updating an alert definition does not update certain condition types on every server.  The affected condition types are:
 - Availability
 - Availability Duration
 - Resource Operation Execution
 - Resource Configuration Execution

Relevant updates include any condition changes, changes to the condition policy (all/any), enabling or disabling the alert definition, and possibly others.  This implicitly affects recovery alerting, which disables and enables alert definitions, if those alert definitions contain any of the condition types listed above.

The condition caches are properly updated on the HA server node that evaluates the alert def update, but not on the others. So the problem only shows up when subsequent condition matches would have occurred on the servers that were not properly updated.

In short, stale alert definitions are possible and may fire when they should not, or fail to fire when they should.
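
To make the failure mode concrete, here is a minimal Java sketch of a per-server dirty flag combined with a periodic cache-consistency poll, which is one way the behavior described above can arise. The names (ServerStatusDao, GlobalConditionCache, pollAndReloadIfDirty) are hypothetical and are not RHQ's actual API:

    // Hypothetical sketch only -- names do not correspond to RHQ classes.
    public class CacheConsistencyPollSketch {

        interface ServerStatusDao {
            boolean isGlobalCacheDirty(String serverName);   // per-server dirty flag in the DB
            void clearGlobalCacheDirty(String serverName);
        }

        interface GlobalConditionCache {
            // rebuild availability/operation/config condition sets from alert definitions
            void reload();
        }

        private final ServerStatusDao statusDao;
        private final GlobalConditionCache cache;
        private final String localServerName;

        CacheConsistencyPollSketch(ServerStatusDao statusDao, GlobalConditionCache cache,
                                   String localServerName) {
            this.statusDao = statusDao;
            this.cache = cache;
            this.localServerName = localServerName;
        }

        // Runs periodically on each HA server. A server reloads only when *its own*
        // flag is dirty, so a flag set solely on the server that handled the alert
        // definition update leaves every other server's condition cache stale.
        void pollAndReloadIfDirty() {
            if (statusDao.isGlobalCacheDirty(localServerName)) {
                long start = System.currentTimeMillis();
                cache.reload();
                statusDao.clearGlobalCacheDirty(localServerName);
                System.out.println(localServerName + " took ["
                    + (System.currentTimeMillis() - start) + "]ms to reload global cache");
            }
        }
    }

In this model a server reloads only when its own flag is dirty, so an update that flags just one server leaves every other server's condition cache stale until some later change happens to flag them as well.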
Comment 1 Jay Shaughnessy 2012-11-19 17:12:14 EST
Here is a fairly simple example that reproduces the problem:

1) Create an HA env like:

Server A
 - Agent A connected
   - RHQ Server resource imported
     - some webapp (e.g. ROOT.war, jconsole.war), call it WAR A
 - GUI A connected
Server B
 - Agent B connected
 - GUI B connected

2) Using GUI A, create a GOES DOWN availability alert on WAR A

   - set it to Disable when fired

3) Wait 30s and then execute the Stop operation on WAR A (any gui)

   - You should see the alert fire and the alert def disable.
   - In the Server A log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerA took [28]ms to reload global cache

4) Execute the Start operation on WAR A (any gui)

5) Using GUI B, enable the alert definition. Wait 30s.

   - In the Server B log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerB took [28]ms to reload global cache

   - You will not see this message in the Server A log.

6) Execute the Stop operation on WAR A (any gui)

   - You will see the avail change to DOWN
   - You will not see an alert fire
   - The alert def will not disable
Comment 2 Jay Shaughnessy 2012-11-19 17:56:52 EST
commit 8ab939690aefbb6316aca6336c41804f728d290e
Author: Jay Shaughnessy <jshaughn@redhat.com>
Date:   Mon Nov 19 17:49:07 2012 -0500

When setting the server status dirty to notify the need for global condition cache refresh, update *all* servers.  The global condition cache is supposed to be replicated across HA servers.  Otherwise, different servers will have different condition sets generating unexpected results.
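
As a rough illustration of the change the commit message describes (not the actual RHQ code; the DAO and method names below are invented for this sketch), the update path goes from flagging only the local server to flagging every registered HA server:

    // Hypothetical sketch only -- names do not correspond to RHQ classes.
    import java.util.List;

    public class MarkServersDirtySketch {

        interface ServerStatusDao {
            List<String> findAllServerNames();               // all registered HA servers
            void markGlobalCacheDirty(String serverName);    // set the per-server dirty flag
        }

        private final ServerStatusDao statusDao;
        private final String localServerName;

        MarkServersDirtySketch(ServerStatusDao statusDao, String localServerName) {
            this.statusDao = statusDao;
            this.localServerName = localServerName;
        }

        // Before the fix: only the server that evaluated the alert definition update
        // is flagged, so only that server reloads its global condition cache.
        void markDirtyOnUpdateBeforeFix() {
            statusDao.markGlobalCacheDirty(localServerName);
        }

        // After the fix: every HA server is flagged, so each one reloads on its next
        // consistency poll and the global condition cache stays replicated.
        void markDirtyOnUpdateAfterFix() {
            for (String serverName : statusDao.findAllServerNames()) {
                statusDao.markGlobalCacheDirty(serverName);
            }
        }
    }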
Comment 3 Heiko W. Rupp 2013-09-03 10:43:16 EDT
Bulk closing of issues in old RHQ releases that have been in production for a while now.

Please open a new issue if you run into a problem.
