Description of problem: There is a regression in resource-agents-4.1.1-12.el7_6.6.x86_64.rpm (the issue is not seen in resource-agents-4.1.1-12.el7_6.4.x86_64.rpm) when stopping the OCF rabbitmq resource inside a bundle. To reproduce, simply trigger a restart of the OCF resource inside the rabbitmq-bundle. We did this by tweaking the following line:

  <nvpair id="rabbitmq-instance_attributes-set_policy" name="set_policy" value="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"/>

(Just changing ha-params from 2 to 3 and vice versa is enough.) Once we inject a CIB with a change to the rabbitmq OCF resource, pacemaker attempts a restart of the internal resource only, and that restart fails. With the old resource-agents-4.1.1-12.el7_6.4.x86_64.rpm everything works correctly. We tried adding a few 'killall -9 epmd' calls in the rmq_stop action (and confirmed that epmd was no longer running), but it did not help, which suggests the failure is due to some node attributes not being cleaned up.
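For what it's worth, one quick way to check for a leftover attribute after the stop is to query it directly. This is only a sketch: the attribute name rmq-node-attr-rabbitmq and the controller node names are assumptions based on how the agent tracks cluster membership, so adjust them to your deployment.

  # Check whether the reboot-lifetime membership attribute survived the stop.
  # Node names and the attribute name are placeholders for this environment.
  for node in controller-0 controller-1 controller-2; do
      crm_attribute --query --node "$node" --lifetime reboot \
          --name rmq-node-attr-rabbitmq || echo "no attribute on $node"
  done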
Damien and I ran this down this morning. We discovered a few places where the stop action might not remove the rabbitmq node attribute from pacemaker. What ends up happening is:
- the OCF resource is changed, which triggers a restart
- nodes 3 and 2 stop, but do *not* delete their attribute
- node 1 errors out in some fashion [1] during monitor/notify/stop; its node attribute *is* deleted and the service is stopped
- node 1 starts back up but attempts to join a cluster with nodes 2+3 because their attributes are still present; this fails, and the cluster does not bootstrap properly

I will submit a PR with the two minor tweaks we made that seem to address this (a rough sketch of the cleanup involved is below).

[1] When you start trying to do too much in the middle of a failover, the exact results are less than predictable. What *is* important is that the node gets marked as down.
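For context only, the cleanup in question amounts to making sure the stop path always removes this node's membership attribute. The following is a minimal sketch under that assumption, not the actual patch (see the PR below for the real change); the helper name and the crm_node/crm_attribute invocation are assumptions based on how the agent tracks membership.

  # Sketch: always drop this node's reboot-lifetime membership attribute on stop,
  # even if rabbitmqctl already failed or the app is no longer running.
  rmq_delete_nodename()
  {
      crm_attribute -N "$(crm_node -n)" -l reboot \
          --name "rmq-node-attr-${OCF_RESOURCE_INSTANCE}" -D
  }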
Another note: I think this may only be a problem with bundles. The attributes have a "reboot" lifetime. In the non-bundle case, stopping the resource may be enough to cause the attributes to be cleaned up. With bundles, however, the resource stop only stops the service inside the bundle; the bundle itself stays up the entire time, so the attribute remains.
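One way to observe this (again a sketch; the attribute and node names are placeholders) is to query pacemaker's attribute daemon after the inner resource has stopped but while the bundle container is still up: a reboot-lifetime attribute should still be reported, since it only goes away when it is explicitly deleted or the node leaves the cluster.

  # The bundle container does not leave the cluster on an inner resource stop,
  # so the transient attribute is expected to persist unless the agent deletes it.
  attrd_updater --query --name rmq-node-attr-rabbitmq --node controller-0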
https://github.com/ClusterLabs/resource-agents/pull/1274
*** Bug 1655764 has been marked as a duplicate of this bug. ***
Hi folks, do you think this BZ [0] could be a duplicate? [0] https://bugzilla.redhat.com/show_bug.cgi?id=1661806
Verified; tested in https://bugzilla.redhat.com/show_bug.cgi?id=1657138#c3
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2012