Bug 1545449 - Resource in Failed followed by Stopped status after a node failover
Summary: Resource in Failed followed by Stopped status after a node failover
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.6
Assignee: Ken Gaillot
QA Contact: pkomarov
URL:
Whiteboard:
Depends On:
Blocks: 1563272 1563345
 
Reported: 2018-02-14 23:46 UTC by Marian Krcmarik
Modified: 2018-10-30 07:59 UTC
CC: 10 users

Fixed In Version: pacemaker-1.1.18-12.el7
Doc Type: Bug Fix
Doc Text:
Cause: Pacemaker could schedule notifications for clone actions that are ultimately not run but implied by fencing of the underlying node. Consequence: The notifications would be mistakenly sent. In a situation with a large number of such actions (only observed when using pacemaker's "bundle" feature), the mistaken notifications could overwhelm the crmd's connection to the CIB, leading to long recovery times. Fix: Pacemaker now avoids scheduling notifications for implied events. Result: Recovery proceeds quickly after failure of a node hosting bundles with clone resources.
Clone Of:
Clones: 1563272 1563345
Environment:
Last Closed: 2018-10-30 07:57:56 UTC
Target Upstream Version:
Embargoed:


Attachments
cibadmin -Q (144.55 KB, text/plain), 2018-02-14 23:46 UTC, Marian Krcmarik


Links
Red Hat Product Errata RHBA-2018:3055, last updated 2018-10-30 07:59:04 UTC

Description Marian Krcmarik 2018-02-14 23:46:55 UTC
Created attachment 1396190 [details]
cibadmin -Q

Description of problem:
The rabbitmq service resource fails to start and ends up in Stopped status after a failover of the node which hosted the service. I am constantly hitting a situation where the rabbitmq resource cannot start when a node rejoins the cluster after being reset ungracefully. The environment is an OpenStack 3-node pacemaker-based cluster which manages containers in bundles; I have not seen the behaviour outside the scope of bundles. I am filing the bug against pacemaker because my environment consists of RHEL 7.4 nodes with only the pacemaker/corosync/resource-agents packages upgraded to the newer RHEL 7.5 versions. I have not seen the problem with the RHEL 7.4 pacemaker version.

I am attaching some logs:
Sosreport from the reset node -> controller-1
Sosreport from the DC node after the reset -> controller-2 (not sure which one was DC before the failover, maybe controller-1, and that's the pattern)
CIB once the reset node (controller-1) rejoined the cluster with the rabbitmq resource in Stopped status

The reset was triggered at 14.02.2018 18:17:15 (log time).

I sometimes observed a similar problem even with the galera or redis bundles.

Version-Release number of selected component (if applicable):
corosynclib-2.4.3-2.el7.x86_64
pacemaker-1.1.18-11.el7.x86_64
pacemaker-cli-1.1.18-11.el7.x86_64
pacemaker-remote-1.1.18-11.el7.x86_64
resource-agents-3.9.5-121.el7.x86_64
pacemaker-cluster-libs-1.1.18-11.el7.x86_64
pacemaker-libs-1.1.18-11.el7.x86_64
corosync-2.4.3-2.el7.x86_64

How reproducible:
Very often

Steps to Reproduce:
1. Get a pacemaker based cluster with rabbitmq resource in bundle running in container -> OSP12 standard deployment
2. Ungracefully reset one of the nodes
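
A hedged sketch of one way to force the ungraceful reset in step 2 (the kernel sysrq trigger, as also used during verification in Comment 18; this assumes sysrq is enabled on the node and you are root):

# Enable all sysrq functions in case they are restricted
echo 1 > /proc/sys/kernel/sysrq
# 'b' reboots the machine immediately without syncing or unmounting disks,
# simulating an ungraceful node failure
echo b > /proc/sysrq-trigger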

Actual results:
rabbitmq resource in Stopped status once the reset node rejoins the cluster

Expected results:
Resource in Started status

Additional info:
Performing "pcs resource cleanup" seems to help often

Comment 4 Andrew Beekhof 2018-02-19 09:19:17 UTC
A little confused:

> The reset was triggered at 14.02.2018 18:17:15 (log time).

But the last log I see is:

Feb 13 03:25:50 controller-2 dockerd-current[2705]: 2018-02-13 03:25:50.382264 7fe2c5a1a700  0 mon.controller-2@0(leader).data_health(48) update_stats avail 73% total 40947 MB, used 10899 MB, avail 30048 MB

Comment 6 Andrew Beekhof 2018-02-27 06:11:02 UTC
There are a craptonne of pending notify actions:

Feb 14 18:21:14 controller-2 crmd[478696]:   error: 1465 pending LRM operations at shutdown
...
Feb 14 18:21:14 controller-2 crmd[478696]:   error: Pending action: rabbitmq:4714 (rabbitmq_notify_0)


Which is presumably a result of this (tight) loop:

Feb 14 18:19:32 controller-2 crmd[478696]:  notice: Transition 1824 (Complete=30, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-548.bz2): Complete
Feb 14 18:19:32 controller-2 crmd[478696]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Result of notify operation for redis on redis-bundle-2: 0 (ok)
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation rabbitmq_pre_notify_start_0 on rabbitmq-bundle-0
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation rabbitmq_pre_notify_start_0 locally on rabbitmq-bundle-2
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation redis_pre_notify_start_0 on redis-bundle-0
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation redis_pre_notify_start_0 locally on redis-bundle-2
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation rabbitmq_post_notify_start_0 on rabbitmq-bundle-0
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation rabbitmq:1_post_notify_start_0 locally on rabbitmq-bundle-1
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Result of notify operation for rabbitmq on rabbitmq-bundle-1: 0 (ok)
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation rabbitmq_post_notify_start_0 locally on rabbitmq-bundle-2
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation redis_post_notify_start_0 on redis-bundle-0
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation redis:1_post_notify_start_0 locally on redis-bundle-1
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Result of notify operation for redis on redis-bundle-1: 0 (ok)
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Initiating notify operation redis_post_notify_start_0 locally on redis-bundle-2
Feb 14 18:19:33 controller-2 crmd[478696]:  notice: Transition 1826 (Complete=30, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-549.bz2): Complete


And explains why the crmd borked:

Feb 14 18:21:07 controller-2 crmd[478696]:   error: Query resulted in an error: Timer expired

....

Feb 14 18:21:13 controller-2 crmd[478696]:   error: Connection to cib_shm failed
Feb 14 18:21:13 controller-2 crmd[478696]:   error: Connection to cib_shm[0x5622cde54b80] closed (I/O condition=1)
Feb 14 18:21:13 controller-2 crmd[478696]:   error: Connection to the CIB terminated...
Feb 14 18:21:13 controller-2 crmd[478696]:   error: Input I_ERROR received in state S_POLICY_ENGINE from crmd_cib_connection_destroy



Back to the original report, it seems from the logs that rabbit eventually comes up:

Feb 14 18:32:34 controller-2 crmd[763767]:  notice: Result of start operation for rabbitmq on rabbitmq-bundle-2: 0 (ok)

however not before:

Feb 14 18:29:38 controller-2 crmd[763767]: warning: Timer popped (timeout=200000, abort_level=1000000, complete=false)
Feb 14 18:29:38 controller-2 crmd[763767]:   error: [Action   15]: In-flight rsc op rabbitmq_stop_0                   on rabbitmq-bundle-2 (priority: 0, waiting: none)
Feb 14 18:29:38 controller-2 crmd[763767]: warning: rsc_op 15: rabbitmq_stop_0 on rabbitmq-bundle-2 timed out
Feb 14 18:29:38 controller-2 crmd[763767]:  notice: Transition 1840 (Complete=86, Pending=0, Fired=0, Skipped=7, Incomplete=117, Source=/var/lib/pacemaker/pengine/pe-input-556.bz2): Stopped
Feb 14 18:29:39 controller-2 crmd[763767]:  notice: Initiating stop operation rabbitmq-bundle-2_stop_0 locally on controller-2

it does turn up though because some time later we see (in corosync.log):

Feb 14 18:29:39 [763767] controller-2       crmd:     info: abort_transition_graph:	Transition aborted by operation rabbitmq_stop_0 'modify' on rabbitmq-bundle-2: Inactive graph | magic=2:1;15:1840:0:e2632460-0590-4630-9fdd-3d8b62c384b1 cib=0.113.2680 source=process_graph_event:503 complete=true


The stop appears to have been triggered by:

   Replica[2]
      rabbitmq-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-2
      rabbitmq-bundle-2	(ocf::pacemaker:remote):	Started controller-2
      rabbitmq	(ocf::heartbeat:rabbitmq-cluster):	FAILED rabbitmq-bundle-2




So in summary, 
1. the PE is incorrectly scheduling far too many notify actions
2. which is resulting in too many cib updates
3. which is causing the crmd/ipc queue to be saturated
4. which is causing action updates to be "lost" and eventually the crmd to bork

Suggestion would be to use pe-input-554.bz2 as a test case to determine how the notifications can be inhibited
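
A hedged sketch of replaying that policy-engine input with crm_simulate, assuming the file is still present under /var/lib/pacemaker/pengine/ on the DC (controller-2); the idea is to see how many notify actions get scheduled for a transition whose underlying actions are implied by fencing:

# Replay the saved transition input and show the actions the policy engine would schedule
crm_simulate -S -x /var/lib/pacemaker/pengine/pe-input-554.bz2

# Rough count of scheduled notify actions in that transition
crm_simulate -S -x /var/lib/pacemaker/pengine/pe-input-554.bz2 | grep -c notify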

Comment 7 Marian Krcmarik 2018-02-27 11:04:42 UTC
Andrew,
Thanks a lot for looking at the logs. Is there any follow-up action item for me?

Comment 8 Andrew Beekhof 2018-02-27 22:13:05 UTC
No, the RHEL folks should be able to take it from here

Comment 9 Andrew Beekhof 2018-04-03 04:11:19 UTC
Fixed in https://github.com/beekhof/pacemaker/commit/1b2266f
This blocks the OSP controller replacement procedure; we need a build as soon as possible

Comment 10 Andrew Beekhof 2018-04-03 04:17:47 UTC
Urgh, wrong url...
Fixed in https://github.com/beekhof/pacemaker/commit/1b2266f

> we need a build as soon as possible

Specifically we need a z-stream preview build to unblock OSP13 testing.

Comment 14 Ken Gaillot 2018-04-05 18:19:42 UTC
There already is a 7.5.z clone, Bug 1563345.

The holdup right now is that the fix from Comment 9 is not sufficient. Further work is needed.

Comment 15 Andrew Beekhof 2018-04-05 23:17:17 UTC
I don't believe any additional changes are required.

Comment 16 Ken Gaillot 2018-04-06 22:56:29 UTC
Test packages with the fix are available at https://people.redhat.com/kgaillot/bz1563345/

Feedback is appreciated.
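
A hedged sketch of pulling those test packages onto a node for verification (the wget pattern and local file names are placeholders; match them to whatever is actually published at that URL):

# Fetch the test RPMs from the location above into the current directory
wget -r -np -nd -A '*.rpm' https://people.redhat.com/kgaillot/bz1563345/

# Install/upgrade the pacemaker packages from the downloaded files
yum localinstall ./*.rpm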

Comment 18 pkomarov 2018-06-18 08:22:13 UTC
Verified; after an ungraceful node failover, rabbitmq rejoins the cluster successfully:


[stack@undercloud-0 ~]$ ansible overcloud -b -mshell -a'rpm -qa|grep pacemaker'
 [WARNING]: Found both group and host with same name: undercloud

 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm

compute-1 | SUCCESS | rc=0 >>
pacemaker-cli-1.1.18-11.el7_5.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-1.1.18-11.el7_5.2.x86_64
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64
pacemaker-remote-1.1.18-11.el7_5.2.x86_64
pacemaker-libs-1.1.18-11.el7_5.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

compute-0 | SUCCESS | rc=0 >>
pacemaker-cli-1.1.18-11.el7_5.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-1.1.18-11.el7_5.2.x86_64
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64
pacemaker-remote-1.1.18-11.el7_5.2.x86_64
pacemaker-libs-1.1.18-11.el7_5.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-1 | SUCCESS | rc=0 >>
pacemaker-cli-1.1.18-11.el7_5.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-1.1.18-11.el7_5.2.x86_64
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64
pacemaker-remote-1.1.18-11.el7_5.2.x86_64
pacemaker-libs-1.1.18-11.el7_5.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-0 | SUCCESS | rc=0 >>
pacemaker-cli-1.1.18-11.el7_5.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-1.1.18-11.el7_5.2.x86_64
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64
pacemaker-remote-1.1.18-11.el7_5.2.x86_64
pacemaker-libs-1.1.18-11.el7_5.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-2 | SUCCESS | rc=0 >>
pacemaker-cli-1.1.18-11.el7_5.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-1.1.18-11.el7_5.2.x86_64
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64
pacemaker-remote-1.1.18-11.el7_5.2.x86_64
pacemaker-libs-1.1.18-11.el7_5.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch


[root@controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: controller-2 (version 1.1.18-11.el7_5.2-2b07d5c5a9) - partition with quorum
Last updated: Mon Jun 18 08:10:32 2018
Last change: Sun Jun 17 14:58:53 2018 by root via cibadmin on controller-0

12 nodes configured
37 resources configured

Online: [ controller-0 controller-1 controller-2 ]
GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]

Full list of resources:

 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started controller-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started controller-1
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started controller-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master controller-0
   galera-bundle-1	(ocf::heartbeat:galera):	Master controller-1
   galera-bundle-2	(ocf::heartbeat:galera):	Master controller-2
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Master controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Slave controller-1
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2
 ip-192.168.24.10	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-10.0.0.102	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.1.12	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.1.10	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-172.17.3.11	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.4.15	(ocf::heartbeat:IPaddr2):	Started controller-2
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started controller-0
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Started controller-1
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-2
 Docker container: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-docker-0	(ocf::heartbeat:docker):	Started controller-0




[root@controller-0 ~]# echo 'b'>/proc/sysrq-trigger
Connection to undercloud-0 closed.


[stack@undercloud-0 ~]$ ansible controller-1 -b -mshell -a'pcs status'
 [WARNING]: Found both group and host with same name: undercloud

controller-1 | SUCCESS | rc=0 >>
Cluster name: tripleo_cluster
Stack: corosync
Current DC: controller-2 (version 1.1.18-11.el7_5.2-2b07d5c5a9) - partition with quorum
Last updated: Mon Jun 18 08:18:13 2018
Last change: Mon Jun 18 08:11:47 2018 by redis-bundle-1 via crm_attribute on controller-1

12 nodes configured
37 resources configured

Online: [ controller-0 controller-1 controller-2 ]
GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]

Full list of resources:

 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started controller-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started controller-1
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started controller-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master controller-0
   galera-bundle-1	(ocf::heartbeat:galera):	Master controller-1
   galera-bundle-2	(ocf::heartbeat:galera):	Master controller-2
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Slave controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Master controller-1
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2
 ip-192.168.24.10	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-10.0.0.102	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.1.12	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.1.10	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.3.11	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.4.15	(ocf::heartbeat:IPaddr2):	Started controller-2
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started controller-0
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Started controller-1
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-2
 Docker container: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]

Comment 19 pkomarov 2018-06-19 06:48:21 UTC
Verification addition: retested on pacemaker 1.1.18-13

(undercloud) [stack@undercloud-0 ~]$  ansible overcloud -m shell -b -a 'rpm -qa|grep pace'
 [WARNING]: Found both group and host with same name: undercloud

 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm

compute-1 | SUCCESS | rc=0 >>
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-remote-1.1.18-13.el7.x86_64
userspace-rcu-0.7.16-1.el7cp.x86_64
pacemaker-cli-1.1.18-13.el7.x86_64
pacemaker-cluster-libs-1.1.18-13.el7.x86_64
pacemaker-1.1.18-13.el7.x86_64
pacemaker-libs-1.1.18-13.el7.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

compute-0 | SUCCESS | rc=0 >>
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-remote-1.1.18-13.el7.x86_64
userspace-rcu-0.7.16-1.el7cp.x86_64
pacemaker-cli-1.1.18-13.el7.x86_64
pacemaker-cluster-libs-1.1.18-13.el7.x86_64
pacemaker-1.1.18-13.el7.x86_64
pacemaker-libs-1.1.18-13.el7.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-1 | SUCCESS | rc=0 >>
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-remote-1.1.18-13.el7.x86_64
userspace-rcu-0.7.16-1.el7cp.x86_64
pacemaker-cli-1.1.18-13.el7.x86_64
pacemaker-cluster-libs-1.1.18-13.el7.x86_64
pacemaker-1.1.18-13.el7.x86_64
pacemaker-libs-1.1.18-13.el7.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-2 | SUCCESS | rc=0 >>
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-remote-1.1.18-13.el7.x86_64
userspace-rcu-0.7.16-1.el7cp.x86_64
pacemaker-cli-1.1.18-13.el7.x86_64
pacemaker-cluster-libs-1.1.18-13.el7.x86_64
pacemaker-1.1.18-13.el7.x86_64
pacemaker-libs-1.1.18-13.el7.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

controller-0 | SUCCESS | rc=0 >>
puppet-pacemaker-0.7.2-0.20180423212248.fee47ee.el7ost.noarch
pacemaker-remote-1.1.18-13.el7.x86_64
userspace-rcu-0.7.16-1.el7cp.x86_64
pacemaker-cli-1.1.18-13.el7.x86_64
pacemaker-cluster-libs-1.1.18-13.el7.x86_64
pacemaker-1.1.18-13.el7.x86_64
pacemaker-libs-1.1.18-13.el7.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch

Comment 21 errata-xmlrpc 2018-10-30 07:57:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3055

