Bug 1247303 - rabbitmq-cluster agent needs to forget stopped cluster nodes
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.2
Hardware: Unspecified   OS: Unspecified
Priority: urgent   Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Oyvind Albrigtsen
QA Contact: Leonid Natapov
Keywords: ZStream
Duplicates: 1299923
Depends On: 1311025
Blocks: 1311180
Reported: 2015-07-27 14:09 EDT by David Vossel
Modified: 2016-11-03 19:57 EDT
CC: 16 users

See Also:
Fixed In Version: resource-agents-3.9.5-60.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1311180
Environment:
Last Closed: 2016-11-03 19:57:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description David Vossel 2015-07-27 14:09:58 EDT
Description of problem:

After fencing, a rabbitmq-cluster resource instance may not be able to rejoin the rabbitmq cluster.

Version-Release number of selected component (if applicable):


How reproducible:
We have seen this in customer deployments. This is not something that has a clear set of steps to reproduce.

Steps to Reproduce:
1. Fence an OSP cluster node.

Actual results:
The rabbitmq-cluster instance on the fenced node cannot rejoin the rabbitmq cluster once the node comes back online.

Expected results:
The rabbitmq-cluster instance on the fenced node should start successfully after the node comes back online.
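The fix named in the bug title is for surviving cluster members to tell RabbitMQ to forget the fenced node, so it can rejoin cleanly when it comes back. A minimal sketch of that idea follows; the parsing logic and the sample `rabbitmqctl cluster_status` output are illustrative assumptions, not the agent's actual code (the real command, `rabbitmqctl forget_cluster_node`, does exist).

```shell
#!/bin/sh
# Sketch: compare cluster members against running members and forget the
# stopped ones. The $status text below is a mocked-up cluster_status reply.
status='[{nodes,[{disc,[rabbit@node1,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node1,rabbit@node2]}]'

# All disc members of the cluster.
all_nodes=$(printf '%s\n' "$status" \
            | grep -o '{disc,\[[^]]*' \
            | grep -o 'rabbit@[[:alnum:]_.-]*')

# Members currently running, flattened to one space-separated line.
running=$(printf '%s\n' "$status" \
          | grep -o 'running_nodes,\[[^]]*' \
          | grep -o 'rabbit@[[:alnum:]_.-]*' \
          | tr '\n' ' ')

# Anything in the member list but not running is a candidate to forget.
to_forget=""
for node in $all_nodes; do
    case " $running" in
        *" $node "*) ;;                       # still running, keep it
        *) to_forget="$to_forget $node" ;;    # stopped member
    esac
done

# The real agent would invoke rabbitmqctl here; we only print the command.
for node in $to_forget; do
    echo "rabbitmqctl forget_cluster_node $node"
done
```

With the mocked status above, only `rabbit@node3` is absent from `running_nodes`, so it is the one node the sketch would forget.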
Comment 5 Fabio Massimo Di Nitto 2015-08-31 23:20:37 EDT
John,

Did you have a chance to test this patch again? Is it still required?
Comment 6 John Eckersberg 2015-09-01 08:47:40 EDT
I still need to go back and test this.  The libvirt fence agent wasn't working for me, but David realized it was because the hostname and the libvirt domain name didn't match.  So I need to try that and see if I can get it to fence properly.

Leaving NEEDINFO for now to remind me to actually try it :)
Comment 7 Fabio Massimo Di Nitto 2015-09-02 23:35:31 EDT
(In reply to John Eckersberg from comment #6)
> I still need to go back and test this.  The libvirt fence agent wasn't
> working for me, but David realized it was because the hostname and the
> libvirt domain name didn't match.  So I need to try that and see if I can
> get it to fence properly.
> 
> Leaving NEEDINFO for now to remind me to actually try it :)

That's fine, but we don't have heaps of time to get this into 7.2. If you need help to configure fencing or other bits, please just contact me or #cluster.
Comment 9 Oyvind Albrigtsen 2015-11-20 07:37:36 EST
Works as expected.

Tested with:
# service corosync stop &
# killall -9 corosync

on the node that should be fenced.

First notify:
rabbitmq-cluster(rmq)[15072]:   2015/11/20_13:30:10 NOTICE: Forgetting stopped node rabbit@rhel7-1
rabbitmq-cluster(rmq)[15072]:   2015/11/20_13:30:11 WARNING: Unable to forget offline node rabbit@rhel7-1.

Second notify:
rabbitmq-cluster(rmq)[15284]:   2015/11/20_13:30:26 NOTICE: Forgetting stopped node rabbit@rhel7-1
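The two notify excerpts above show the retry behavior being verified: the first forget attempt fails with a WARNING, and the node is forgotten successfully on the next notify. A simplified sketch of that warn-then-retry pattern is below; `try_forget` is a hypothetical stand-in that fails once, not a real rabbitmqctl call.

```shell
#!/bin/sh
# Sketch of notify-time retry: a failed forget is logged as a warning and
# simply attempted again on the next notify. try_forget is a stub that
# ignores its node argument and fails on the first call only.
tries=0
try_forget() {
    tries=$((tries + 1))
    [ "$tries" -ge 2 ]
}

forget_with_warning() {
    node="$1"
    echo "NOTICE: Forgetting stopped node $node"
    if ! try_forget "$node"; then
        echo "WARNING: Unable to forget offline node $node."
        return 1
    fi
    return 0
}

# First notify: the forget fails, so we warn and leave it for next time.
forget_with_warning rabbit@rhel7-1 || true
# Second notify: the retry succeeds.
forget_with_warning rabbit@rhel7-1
```

This mirrors the log shape in the test above: a NOTICE/WARNING pair on the first notify, then a lone NOTICE once the node is forgotten.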
Comment 13 Fabio Massimo Di Nitto 2016-02-23 09:08:09 EST
*** Bug 1299923 has been marked as a duplicate of this bug. ***
Comment 21 Leonid Natapov 2016-03-09 04:30:57 EST
resource-agents-3.9.5-67.el7

Looks good:

After fencing controller-0 I see the following on controller-1 and controller-2:

overcloud-controller-1 
-------------------------
rabbitmq-cluster(rabbitmq)[32276]:	2016/03/09_09:19:35 NOTICE: Forgetting stopped node rabbit@overcloud-controller-0
rabbitmq-cluster(rabbitmq)[32276]:	2016/03/09_09:19:35 WARNING: Unable to forget offline node rabbit@overcloud-controller-0.

overcloud-controller-2
-------------------------
rabbitmq-cluster(rabbitmq)[16925]:	2016/03/09_09:19:35 NOTICE: Forgetting stopped node rabbit@overcloud-controller-0


Works as expected.
Comment 22 Peter Lemenkov 2016-06-24 09:33:09 EDT
(In reply to Leonid Natapov from comment #21)
> resource-agents-3.9.5-67.el7
> 
> Looks good:
> 
> After fencing controller-0 I see the following on controller-1 and
> controller-2:
> 
> overcloud-controller-1 
> -------------------------
> rabbitmq-cluster(rabbitmq)[32276]:	2016/03/09_09:19:35 NOTICE: Forgetting
> stopped node rabbit@overcloud-controller-0
> rabbitmq-cluster(rabbitmq)[32276]:	2016/03/09_09:19:35 WARNING: Unable to
> forget offline node rabbit@overcloud-controller-0.
> 
> overcloud-controller-2
> -------------------------
> rabbitmq-cluster(rabbitmq)[16925]:	2016/03/09_09:19:35 NOTICE: Forgetting
> stopped node rabbit@overcloud-controller-0
> 
> 
> Works as expected.

If someone still sees this issue, then please test this package:

resource-agents-3.9.5-76.el7
Comment 24 errata-xmlrpc 2016-11-03 19:57:47 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2174.html
