Bug 878246 - Updated alert defs may not fire in an HA environment
Product: JBoss Operations Network
Classification: JBoss
Component: High Availability
Version: JON 3.1.1
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: JON 3.1.2
Assigned To: Jay Shaughnessy
QA Contact: Mike Foley
Whiteboard: NeedsTestCase
Depends On: 878224
Reported: 2012-11-19 17:49 EST by Larry O'Leary
Modified: 2013-09-11 07:04 EDT
CC List: 3 users

Doc Type: Bug Fix
Clone Of: 878224
Last Closed: 2013-09-11 07:04:41 EDT
Type: Bug


External Trackers:
Red Hat Knowledge Base (Solution) 69800 (Priority: None, Status: None, Summary: None, Last Updated: Never)

Description Larry O'Leary 2012-11-19 17:49:33 EST
+++ This bug was initially created as a clone of upstream Bug #878224 +++

This is a longstanding but subtle problem that may be becoming more prevalent now that Availability Duration alerting makes availability recovery alert pairings more useful.

In an HA (high availability/multi-server) environment, updating an alert definition did not update certain condition types on every server. The affected condition types include:
 - Availability
 - Availability Duration
 - Resource Operation Execution
 - Resource Configuration Execution

Relevant updates include any condition change, a change to the condition match policy (all/any), enabling or disabling the alert definition, and possibly others. This implicitly affects recovery alerting, which disables and enables alert definitions, whenever those definitions contain the condition types listed above.

The condition caches are properly updated on the HA server node that evaluates the alert definition update, but not on the other nodes. The problem therefore surfaces only when a subsequent condition match would have occurred on a server that was not properly updated.

In short, stale alert definitions are possible and may fire unexpectedly or fail to fire when expected.
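The stale-cache behavior can be sketched with a small toy model. This is a hedged illustration only; `Server`, `cacheDirty`, and `reloadIfDirty` are hypothetical names standing in for the actual RHQ internals, not real RHQ classes:

```java
import java.util.Arrays;
import java.util.List;

// Toy model of the stale-cache problem described above. Server, cacheDirty,
// and reloadIfDirty are illustrative names, not actual RHQ classes.
public class StaleCacheDemo {
    static final class Server {
        boolean cacheDirty = false;       // set when a cache reload is needed
        int conditionCacheVersion = 0;    // stands in for the condition cache contents

        // Periodic consistency check: reload the cache only if flagged dirty.
        void reloadIfDirty(int latestVersion) {
            if (cacheDirty) {
                conditionCacheVersion = latestVersion;
                cacheDirty = false;
            }
        }
    }

    // Buggy update path: only the server that processes the alert definition
    // update is flagged, so its peers keep evaluating a stale condition set.
    static int[] runBuggyScenario() {
        Server serverA = new Server();
        Server serverB = new Server();
        List<Server> cluster = List.of(serverA, serverB);

        serverB.cacheDirty = true;                    // update handled on Server B only
        for (Server s : cluster) s.reloadIfDirty(1);  // next consistency pass

        return new int[] { serverA.conditionCacheVersion, serverB.conditionCacheVersion };
    }

    public static void main(String[] args) {
        // Server A never reloads and keeps the old condition set.
        System.out.println(Arrays.toString(runBuggyScenario())); // prints [0, 1]
    }
}
```

In this model, Server A keeps evaluating version 0 of the condition set, which is exactly the "may fire or fail to fire" symptom: matches routed through the stale node use outdated conditions.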

--- Additional comment from Jay Shaughnessy on 2012-11-19 17:12:14 EST ---

Here is a fairly simple example that reproduces the problem:

1) Create an HA env like:

Server A
 - Agent A connected
   - RHQ Server resource imported
     - some webapp (e.g. ROOT.war, jconsole.war), call it War A
 - GUI A connected
Server B
 - Agent B connected
 - GUI B connected

2) Using GUI A, create a GOES DOWN availability alert on WAR A

   - set it to Disable when fired

3) Wait 30s and then execute the Stop operation on WAR A (any gui)

   - You should see the alert fire and the alert def disable.
   - In the Server A log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerA took [28]ms to reload global cache

4) Execute the Start operation on WAR A (any gui)

5) Using GUI B enable the alert definition. Wait 30s.

   - In the Server B log you should see something like:

   INFO [CacheConsistencyManagerBean] ServerB took [28]ms to reload global cache

   - You will not see this message in the Server A log.

6) Execute the Stop operation on WAR A (any gui)

   - You will see the avail change to DOWN
   - You will not see an alert fire
   - The alert def will not disable
Comment 1 Jay Shaughnessy 2012-12-05 16:25:39 EST
commit b79e1c1ce301f4f65ad62b32a14be40988a4e090
Author: Jay Shaughnessy <jshaughn@redhat.com>
Date:   Wed Dec 5 16:23:50 2012 -0500

When setting the server status dirty to notify the need for global condition cache
refresh, update *all* servers.  The global condition cache is supposed to be
replicated across HA servers.  Otherwise, different servers will have different
condition sets generating unexpected results.

    Cherry pick of master 8ab939690aefbb6316aca6336c41804f728d290e
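The commit's approach can be sketched with the same toy model: flag every server in the cluster as needing a cache reload, not just the one handling the update. Again, `markAllServersDirty` and the surrounding names are hypothetical illustrations, not the actual patched code:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the fix described in the commit message: when an alert definition
// change requires a condition-cache refresh, mark *all* servers dirty so the
// replicated global condition cache stays consistent across the HA cluster.
// Names are illustrative, not actual RHQ classes.
public class FixedCacheDemo {
    static final class Server {
        boolean cacheDirty = false;
        int conditionCacheVersion = 0;

        // Periodic consistency check: reload the cache only if flagged dirty.
        void reloadIfDirty(int latestVersion) {
            if (cacheDirty) {
                conditionCacheVersion = latestVersion;
                cacheDirty = false;
            }
        }
    }

    // Fixed update path: flag the whole cluster, so every node reloads its
    // global condition cache on its next consistency pass.
    static void markAllServersDirty(List<Server> cluster) {
        for (Server s : cluster) s.cacheDirty = true;
    }

    static int[] runFixedScenario() {
        Server serverA = new Server();
        Server serverB = new Server();
        List<Server> cluster = List.of(serverA, serverB);

        markAllServersDirty(cluster);                 // update handled on any node
        for (Server s : cluster) s.reloadIfDirty(1);  // next consistency pass

        return new int[] { serverA.conditionCacheVersion, serverB.conditionCacheVersion };
    }

    public static void main(String[] args) {
        // Both servers reload, so condition sets stay in sync.
        System.out.println(Arrays.toString(runFixedScenario())); // prints [1, 1]
    }
}
```

With every node flagged, the reproduction scenario above no longer leaves Server A stale: both servers log a cache reload and subsequent condition matches fire the alert on either node.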
Comment 2 Simeon Pinder 2012-12-10 07:40:45 EST
Moving to ON_QA as available for test in 3.1.2.ER4 or greater: https://brewweb.devel.redhat.com//buildinfo?buildID=246861
Comment 3 Filip Brychta 2012-12-14 11:58:38 EST
Verified on 3.1.2.ER4
