Bug 1342376 - The expiration policy will be removed after restarting the rabbitmq cluster via pacemaker
Summary: The expiration policy will be removed after restarting the rabbitmq cluster via pacemaker
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: pre-dev-freeze
Target Release: 7.4
Assignee: Peter Lemenkov
QA Contact: Asaf Hirshberg
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-03 06:19 UTC by Chen
Modified: 2017-08-01 14:55 UTC
CC List: 16 users

Fixed In Version: resource-agents-3.9.5-105.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 14:55:11 UTC
Target Upstream Version:




Links
- Github ClusterLabs resource-agents pull 896 (closed): Dump/restore users even in case of RabbitMQ 3.6.x and also dump/restore user permissions during restarts (last updated 2020-09-08 22:24:29 UTC)
- Github ClusterLabs resource-agents pull 963 (closed): [RFE] Backup and restore policies (last updated 2020-09-08 22:24:29 UTC)
- Github ClusterLabs resource-agents pull 983 (closed): [rabbitmq] Typo fix (last updated 2020-09-08 22:24:29 UTC)
- Red Hat Product Errata RHBA-2017:1844 (SHIPPED_LIVE): resource-agents bug fix and enhancement update (last updated 2017-08-01 17:49:20 UTC)

Description Chen 2016-06-03 06:19:27 UTC
Description of problem:

The expiration policy will be removed after restarting the rabbitmq cluster via pacemaker

Version-Release number of selected component (if applicable):

OSP6 HA

How reproducible:

100%

Steps to Reproduce:
1. Create an expiration policy for a queue
2. Restart the rabbitmq-server-clone via pacemaker
3. Check the policy
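
For example, a minimal command sketch on one of the controllers (assumptions: default vhost "/", a 60-second expiry, and the clone name, which may be rabbitmq-clone or rabbitmq-server-clone depending on the deployment):

# 1. Create an expiration policy (60 s) that applies to all queues
rabbitmqctl set_policy expiry ".*" '{"expires":60000}' --apply-to queues
# 2. Restart the RabbitMQ clone via pacemaker
pcs resource disable rabbitmq-clone; sleep 60; pcs resource enable rabbitmq-clone
# 3. Check whether the policy survived the restart
rabbitmqctl list_policies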

Actual results:

The policy is removed

Expected results:

The policy should persist

Additional info:

Comment 4 Asaf Hirshberg 2017-02-27 12:17:38 UTC
I tested resource-agents-3.9.5-86.el7.x86_64 on the latest OSP 10 and it failed. Should I check with a different version of RabbitMQ (or on a later OSP version)?

[root@overcloud-controller-0 ~]# rabbitmqctl list_policies
Listing policies ...
/	ha-all	all	^(?!amq\\.).*	{"ha-mode":"all"}	0
/	expiry	queues	.*	{"expires":60000}	0
[root@overcloud-controller-0 ~]# pcs resource disable rabbitmq-clone all;sleep 60;pcs resource enable rabbitmq-clone all
[root@overcloud-controller-0 ~]# rabbitmqctl list_policies
Listing policies ...
/	ha-all	all	^(?!amq\\.).*	{"ha-mode":"all"}	0
[root@overcloud-controller-0 ~]# rpm -qa |grep resource-agents-
resource-agents-3.9.5-86.el7.x86_64
[root@overcloud-controller-0 ~]# rpm -qa |grep rabbit
puppet-rabbitmq-5.6.0-1.057a013git.el7ost.noarch
rabbitmq-server-3.6.3-6.el7ost.noarch

Comment 5 Peter Lemenkov 2017-03-20 18:22:44 UTC
OK, I've found what went wrong. We should dump/restore one more table - rabbit_runtime_parameters.

// rabbitmqctl eval "ets:tab2list(rabbit_runtime_parameters)."
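
A minimal sketch of that idea (an assumption for illustration, not the code the resource agent actually ships): dump the table to a file before the stop and write the rows back after the start, via rabbitmqctl eval. The dump path /tmp/rmq_params.dump is an arbitrary example.

# Before stopping: dump rabbit_runtime_parameters (policies are stored here) to a file
rabbitmqctl eval 'file:write_file("/tmp/rmq_params.dump", io_lib:format("~p.~n", [ets:tab2list(rabbit_runtime_parameters)])).'
# After the cluster is back up: re-insert the saved rows into the Mnesia table
rabbitmqctl eval '{ok, [Rows]} = file:consult("/tmp/rmq_params.dump"), [mnesia:dirty_write(rabbit_runtime_parameters, R) || R <- Rows], ok.'

The real fix does the equivalent inside the resource agent itself (see pull 963 above, "Backup and restore policies"), so policies survive restarts without manual dumps.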

Comment 7 Asaf Hirshberg 2017-05-07 04:41:26 UTC
Failed. Tested using osp11.
resource-agents-3.9.5-95.el7.x86_64
puppet-rabbitmq-5.6.0-3.03b8592git.el7ost.noarch
rabbitmq-server-3.6.5-1.el7ost.noarch

[root@puma04 ~]# rabbitmqctl list_policies
Listing policies ...
/	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
[root@puma04 ~]# rabbitmqctl set_policy expiry ".*" '{"expires":60000}' --apply-to queues
Setting policy "expiry" for pattern ".*" to "{\"expires\":60000}" with priority "0" ...
[root@puma04 ~]# rabbitmqctl list_policies
Listing policies ...
/	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
/	expiry	queues	.*	{"expires":60000}	0
[root@puma04 ~]# pcs resource disable rabbitmq-clone all;sleep 60;pcs resource enable rabbitmq-clone all
[root@puma04 ~]# rabbitmqctl list_policies
Listing policies ...
/	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
[root@puma04 ~]#

Comment 9 Peter Lemenkov 2017-05-30 11:51:58 UTC
(In reply to Asaf Hirshberg from comment #7)
> Failed. Tested using osp11.
> resource-agents-3.9.5-95.el7.x86_64
> puppet-rabbitmq-5.6.0-3.03b8592git.el7ost.noarch
> rabbitmq-server-3.6.5-1.el7ost.noarch
> 
> [root@puma04 ~]# rabbitmqctl list_policies
> Listing policies ...
> /	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
> [root@puma04 ~]# rabbitmqctl set_policy expiry ".*" '{"expires":60000}'
> --apply-to queues
> Setting policy "expiry" for pattern ".*" to "{\"expires\":60000}" with
> priority "0" ...
> [root@puma04 ~]# rabbitmqctl list_policies
> Listing policies ...
> /	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
> /	expiry	queues	.*	{"expires":60000}	0
> [root@puma04 ~]# pcs resource disable rabbitmq-clone all;sleep 60;pcs
> resource enable rabbitmq-clone all
> [root@puma04 ~]# rabbitmqctl list_policies
> Listing policies ...
> /	ha-all	all	^(?!amq\\.).*	{"ha-mode":"exactly","ha-params":2}	0
> [root@puma04 ~]#


We've found an error in the script. We'll provide a test build shortly.

https://github.com/ClusterLabs/resource-agents/pull/983

Comment 15 Marian Krcmarik 2017-06-23 14:00:06 UTC
Verified on resource-agents-3.9.5-105.el7.

[heat-admin@messaging-0 ~]$ sudo rabbitmqctl set_policy expiry ".*" '{"expires":60000}' --apply-to queues
Setting policy "expiry" for pattern ".*" to "{\"expires\":60000}" with priority "0" ...
[heat-admin@messaging-0 ~]$ sudo pcs resource disable rabbitmq-clone all;sleep 80;sudo pcs resource enable rabbitmq-clone all
[heat-admin@messaging-0 ~]$ sudo rabbitmqctl list_policies
Listing policies ...
/       ha-all  all     ^(?!amq\\.).*   {"ha-mode":"all"}       0
/       expiry  queues  .*      {"expires":60000}       0

Comment 16 errata-xmlrpc 2017-08-01 14:55:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1844

