Bug 1575095 - [OSP13] After a staggered OC reboot rabbitmq fails to get clustered
Summary: [OSP13] After a staggered OC reboot rabbitmq fails to get clustered
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.6
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: 7.7
Assignee: Oyvind Albrigtsen
QA Contact: pkomarov
URL:
Whiteboard:
Duplicates: 1574980
Depends On:
Blocks: 1656733 1666824
 
Reported: 2018-05-04 18:25 UTC by Michele Baldessari
Modified: 2020-03-27 09:43 UTC
CC List: 20 users

Fixed In Version: resource-agents-4.1.1-15.el7
Doc Type: Bug Fix
Doc Text:
Previously, the rabbitmq server sometimes failed to start on some cluster nodes. This happened because rabbitmq failed to get clustered if nodes from the node list became unavailable while another node was joining the cluster. With this update, the resource is given more time to start by retrying the entire start action until it succeeds or until the start timeout is reached. As a result, the described problem no longer occurs.
Clone Of:
Clones: 1656733 1666824
Environment:
Last Closed: 2019-08-06 12:01:36 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3990761 0 Troubleshoot None Regression in resource-agents 4.1.1-12.el7_6.6 causes rabbitmq-bundle failures after certain restarts 2019-04-23 16:52:41 UTC
Red Hat Product Errata RHBA-2019:2012 0 None None None 2019-08-06 12:02:01 UTC

Description Michele Baldessari 2018-05-04 18:25:35 UTC
Description of problem:
According to a QE report, after a full reboot of the overcloud, starting new VMs is not working.

Looked at the environment and saw the following reboot times for the overcloud nodes:
compute-0    May 03 18:23:06 
ceph-1       May 03 18:24:27 
ceph-0       May 03 18:26:04 
compute-2    May 03 18:27:17 
compute-1    May 03 18:28:29 
controller-2 May 03 18:30:54 
controller-1 May 03 18:34:43 
controller-0 May 03 18:38:57
ceph-2       May 03 18:40:59 
                                                      
So the OC was rebooted in a somewhat staggered fashion, in an interval starting at 18:23:06 and ending at 18:40:59. It seems that all compute nodes are stuck trying to talk to rabbit on controller-2:
- Going onto the compute nodes, it seems that all of them just keep hitting controller-2 for whatever reason:
(undercloud) [stack@undercloud-0 ~]$ ansible -i inv.yaml Compute --become -m shell -a 'tail -n5 /var/log/containers/nova/nova-compute.log'
 [WARNING]: Skipping unexpected key (hostvars) in group (_meta), only "vars", "children" and "hosts" are valid
          
192.168.24.13 | SUCCESS | rc=0 >>
2018-05-04 16:54:19.421 1 INFO oslo.messaging._drivers.impl_rabbit [req-9f9ca2f8-7c38-4e3f-83e5-8f20ab4b1f60 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [9c664997-f0f9-4a7f-8b9e-4de5ce8bb4d5] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 45176.
2018-05-04 16:56:19.423 1 ERROR oslo.messaging._drivers.impl_rabbit [req-9f9ca2f8-7c38-4e3f-83e5-8f20ab4b1f60 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [9c664997-f0f9-4a7f-8b9e-4de5ce8bb4d5] AMQP server on controller-2.internalapi.localdomain:5672 is unreachable: timed out. Trying again in 1 seconds. Client port: None: timeout: timed out
2018-05-04 16:56:20.445 1 INFO oslo.messaging._drivers.impl_rabbit [req-9f9ca2f8-7c38-4e3f-83e5-8f20ab4b1f60 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [9c664997-f0f9-4a7f-8b9e-4de5ce8bb4d5] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 45186.
2018-05-04 16:59:20.451 1 ERROR oslo.messaging._drivers.impl_rabbit [req-9f9ca2f8-7c38-4e3f-83e5-8f20ab4b1f60 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [9c664997-f0f9-4a7f-8b9e-4de5ce8bb4d5] AMQP server controller-2.internalapi.localdomain:5672 closed the connection. Check login credentials: Socket closed: IOError: Socket closed
2018-05-04 16:59:21.466 1 INFO oslo.messaging._drivers.impl_rabbit [req-9f9ca2f8-7c38-4e3f-83e5-8f20ab4b1f60 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [9c664997-f0f9-4a7f-8b9e-4de5ce8bb4d5] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 45202.
          
192.168.24.15 | SUCCESS | rc=0 >>
2018-05-04 16:54:21.749 1 INFO oslo.messaging._drivers.impl_rabbit [req-73458c2e-fdd5-4e3e-9139-264dda190dbd fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [1f4957d5-0282-4661-8861-2b3494b325e6] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 41116.
2018-05-04 16:56:21.752 1 ERROR oslo.messaging._drivers.impl_rabbit [req-73458c2e-fdd5-4e3e-9139-264dda190dbd fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [1f4957d5-0282-4661-8861-2b3494b325e6] AMQP server on controller-2.internalapi.localdomain:5672 is unreachable: timed out. Trying again in 1 seconds. Client port: None: timeout: timed out
2018-05-04 16:56:22.769 1 INFO oslo.messaging._drivers.impl_rabbit [req-73458c2e-fdd5-4e3e-9139-264dda190dbd fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [1f4957d5-0282-4661-8861-2b3494b325e6] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 41126.
2018-05-04 16:58:22.770 1 ERROR oslo.messaging._drivers.impl_rabbit [req-73458c2e-fdd5-4e3e-9139-264dda190dbd fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [1f4957d5-0282-4661-8861-2b3494b325e6] AMQP server on controller-2.internalapi.localdomain:5672 is unreachable: timed out. Trying again in 1 seconds. Client port: None: timeout: timed out
2018-05-04 16:58:23.797 1 INFO oslo.messaging._drivers.impl_rabbit [req-73458c2e-fdd5-4e3e-9139-264dda190dbd fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [1f4957d5-0282-4661-8861-2b3494b325e6] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 41136.
          
192.168.24.8 | SUCCESS | rc=0 >>
2018-05-04 16:55:04.115 1 INFO oslo.messaging._drivers.impl_rabbit [req-66f26bf5-93dd-4049-a474-db06695b0ec7 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [b7870cf8-04c5-4053-aeea-aa8340f5b3c9] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 34236.
2018-05-04 16:57:04.117 1 ERROR oslo.messaging._drivers.impl_rabbit [req-66f26bf5-93dd-4049-a474-db06695b0ec7 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [b7870cf8-04c5-4053-aeea-aa8340f5b3c9] AMQP server on controller-2.internalapi.localdomain:5672 is unreachable: timed out. Trying again in 1 seconds. Client port: None: timeout: timed out
2018-05-04 16:57:05.134 1 INFO oslo.messaging._drivers.impl_rabbit [req-66f26bf5-93dd-4049-a474-db06695b0ec7 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [b7870cf8-04c5-4053-aeea-aa8340f5b3c9] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 34246.
2018-05-04 16:59:05.136 1 ERROR oslo.messaging._drivers.impl_rabbit [req-66f26bf5-93dd-4049-a474-db06695b0ec7 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [b7870cf8-04c5-4053-aeea-aa8340f5b3c9] AMQP server on controller-2.internalapi.localdomain:5672 is unreachable: timed out. Trying again in 1 seconds. Client port: None: timeout: timed out
2018-05-04 16:59:06.155 1 INFO oslo.messaging._drivers.impl_rabbit [req-66f26bf5-93dd-4049-a474-db06695b0ec7 fd88b324480b478a8b52a289862f94c4 c1d9a1aa57f149e6b4fa7eed7416daf7 - default default] [b7870cf8-04c5-4053-aeea-aa8340f5b3c9] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 34266.

Interestingly, controller-0 has this constant "in an old incarnation (2) of this node (3)" error, whereas the other controllers
seem to get the connection just fine:
(undercloud) [stack@undercloud-0 ~]$ ansible -i inv.yaml Controller --become -m shell -a 'hostname; tail -n10 /var/log/containers/rabbitmq/rabbit@controller-?.log'
 [WARNING]: Skipping unexpected key (hostvars) in group (_meta), only "vars", "children" and "hosts" are valid
          
192.168.24.10 | SUCCESS | rc=0 >>
controller-2
          
=INFO REPORT==== 4-May-2018::17:03:22 ===
Connection <0.14750.44> (172.17.1.11:55490 -> 172.17.1.22:5672) has a client-provided name: neutron-server:28:92d64749-201b-4f26-82be-6c870ec1eff8
          
=INFO REPORT==== 4-May-2018::17:03:22 ===
connection <0.14750.44> (172.17.1.11:55490 -> 172.17.1.22:5672 - neutron-server:28:92d64749-201b-4f26-82be-6c870ec1eff8): user 'guest' authenticated and granted access to vhost '/'
          
=WARNING REPORT==== 4-May-2018::17:03:23 ===
closing AMQP connection <0.12049.44> (172.17.1.20:56346 -> 172.17.1.22:5672 - neutron-server:30:983b537c-354b-4e29-b837-5d50624d79df, vhost: '/', user: 'guest'):
client unexpectedly closed TCP connection
          
192.168.24.7 | SUCCESS | rc=0 >>
controller-1
connection <0.22488.5641> (172.17.1.11:52756 -> 172.17.1.11:5672 - neutron-server:30:81fdad49-53e6-492b-9932-46f64d61485d): user 'guest' authenticated and granted access to vhost '/'
          
=INFO REPORT==== 4-May-2018::17:03:22 ===
accepting AMQP connection <0.17610.5657> (172.17.1.22:55792 -> 172.17.1.11:5672)
          
=INFO REPORT==== 4-May-2018::17:03:22 ===
Connection <0.17610.5657> (172.17.1.22:55792 -> 172.17.1.11:5672) has a client-provided name: neutron-server:27:2b34f848-7b69-4970-bb45-19c15fe6f45a
          
=INFO REPORT==== 4-May-2018::17:03:22 ===
connection <0.17610.5657> (172.17.1.22:55792 -> 172.17.1.11:5672 - neutron-server:27:2b34f848-7b69-4970-bb45-19c15fe6f45a): user 'guest' authenticated and granted access to vhost '/'
          
192.168.24.22 | SUCCESS | rc=0 >>
controller-0
=INFO REPORT==== 4-May-2018::17:03:23 ===
connection <0.4990.50> (172.17.1.22:51554 -> 172.17.1.20:5672 - neutron-server:28:a646ebef-4874-4265-81b7-22b2e34eaf1f): user 'guest' authenticated and granted access to vhost '/'
          
=ERROR REPORT==== 4-May-2018::17:03:23 ===
Discarding message {'$gen_cast',{deliver,{delivery,false,true,<0.2182.50>,{basic_message,{resource,<<"/">>,exchange,<<"q-agent-notifier-port-update_fanout">>},[<<>>],{content,60,{'P_basic',<<"application/json">>,<<"utf-8">>,[],2,0,undefined,undefined,undefined,undefined,undefined,undefined,undefined,undefined,undefined},<<248,0,16,97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110,5,117,116,102,45,56,0,0,0,0,2,0>>,rabbit_framing_amqp_0_9_1,[<<"{\"oslo.message\": \"{\\"_context_domain\\": null, \\"_context_request_id\\": \\"req-7870ea49-e7f3-4dce-8d61-01f9ad087de5\\", \\"_context_global_request_id\\": null, \\"_context_auth_token\\": null, \\"_context_resource_uuid\\": null, \\"_context_tenant_name\\": null, \\"_context_user\\": null, \\"_context_user_id\\": null, \\"_context_show_deleted\\": false, \\"_context_is_admin\\": true, \\"version\\": \\"1.0\\", \\"_context_project_domain\\": null, \\"_context_timestamp\\": \\"2018-05-04 14:20:59.466165\\", \\"method\\": \\"port_update\\", \\"_context_project\\": null, \\"_context_roles\\": [], \\"args\\": {\\"segmentation_id\\": 10, \\"physical_network\\": null, \\"port\\": {\\"status\\": \\"DOWN\\", \\"binding:host_id\\": \\"controller-1.localdomain\\", \\"description\\": \\"\\", \\"allowed_address_pairs\\": [], \\"tags\\": [], \\"extra_dhcp_opts\\": [], \\"updated_at\\": \\"2018-05-04T14:21:00Z\\", \\"device_owner\\": \\"network:dhcp\\", \\"revision_number\\": 12, \\"port_security_enabled\\": false, \\"binding:profile\\": {}, \\"fixed_ips\\": [{\\"subnet_id\\": \\"fe0a1602-aacd-4890-90f6-3986a922ad4e\\", \\"ip_address\\": \\"192.168.32.3\\"}], \\"id\\": \\"d802614e-13e2-4054-b0bc-8775640645a8\\", \\"security_groups\\": [], \\"device_id\\": \\"dhcpf42f2830-b2ec-5a2c-93f3-e3e3328e20a3-5e19a278-c1ec-4035-aed9-e019804b65f3\\", \\"name\\": \\"\\", \\"admin_state_up\\": true, \\"network_id\\": \\"5e19a278-c1ec-4035-aed9-e019804b65f3\\", \\"tenant_id\\": \\"16424d1615154fbba2bca65bbc2f6607\\", \\"binding:vif_details\\": {\\"port_filter\\": true, \\"datapath_type\\": \\"system\\", \\"ovs_hybrid_plug\\": true}, \\"binding:vnic_type\\": \\"normal\\", \\"binding:vif_type\\": \\"ovs\\", \\"qos_policy_id\\": null, \\"mac_address\\": \\"fa:16:3e:39:37:2f\\", \\"project_id\\": \\"16424d1615154fbba2bca65bbc2f6607\\", \\"created_at\\": \\"2018-05-03T18:11:55Z\\"}, \\"network_type\\": \\"vxlan\\"}, \\"_unique_id\\": \\"a7006f9a5c9a4e5d8bf48a09e8e17071\\", \\"_context_tenant_id\\": null, \\"_context_is_admin_project\\": true, \\"_context_project_name\\": null, \\"_context_user_identity\\": \\"- - - - -\\", \\"_context_tenant\\": null, \\"_context_project_id\\": null, \\"_context_read_only\\": false, \\"_context_user_domain\\": null, \\"_context_user_name\\": null}\", \"oslo.version\": \"2.0\"}">>]},<<51,92,47,225,5,3,46,216,8,10,50,217,7,168,115,135>>,true},1,flow},false}} from <0.2182.50> to <0.3223.0> in an old incarnation (2) of this node (3)
          
          
=WARNING REPORT==== 4-May-2018::17:03:24 ===
closing AMQP connection <0.1301.50> (172.17.1.22:48016 -> 172.17.1.20:5672 - neutron-server:29:5280abc9-392b-4715-b6aa-889d37ee5fa2, vhost: '/', user: 'guest'):
client unexpectedly closed TCP connection
          
On the computes the rabbitmq connection string is configured correctly (i.e. it lists all three controllers).
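
For illustration, a hypothetical nova.conf transport_url of that shape (the hostnames are the ones seen in the logs above; the user, the password placeholder and the ssl option are assumptions):

[DEFAULT]
transport_url=rabbit://guest:<password>@controller-0.internalapi.localdomain:5672,guest:<password>@controller-1.internalapi.localdomain:5672,guest:<password>@controller-2.internalapi.localdomain:5672/?ssl=0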

Comment 2 Michele Baldessari 2018-05-08 11:11:23 UTC
Artem, can you fill in the details on how you reboot the nodes?
I am specifically interested in the following:
1) How do you reboot the nodes (which command)?
2) How do you wait for the next node to be rebooted? I.e., do you wait for a node to be up by checking that you can ping it, or something else? Also, how long do you wait?

I ask because we've done quite a bit of reboot testing and never hit this one.
If you can point me to the ansible task of your tests that is fine as well.

Thanks,
Michele

Comment 3 Artem Hrechanychenko 2018-05-08 12:34:10 UTC
Using jjb - jobs/defaults/stages/ir_tripleo_overcloud_reboot.groovy.inc
this stage uses the following infrared command:

echo -e 'cleanup_services: [{ir_tripleo_overcloud_reboot_cleanup_services}]' > cleanup_services.yml

infrared tripleo-overcloud \
     -o overcloud-reboot.yml \
     --postreboot True \
     --deployment-files {ir_tripleo_overcloud_deployment_files} \
     -e @cleanup_services.yml \
     {ir_tripleo_overcloud_reboot_override|}

Comment 4 Michele Baldessari 2018-05-08 13:46:05 UTC
(In reply to Artem Hrechanychenko from comment #3)
> Using jjb - jobs/defaults/stages/ir_tripleo_overcloud_reboot.groovy.inc
> this stage uses infrared command
> 
> echo -e 'cleanup_services: [{ir_tripleo_overcloud_reboot_cleanup_services}]'
> > cleanup_services.yml
> 
> infrared tripleo-overcloud \
>      -o overcloud-reboot.yml \
>      --postreboot True \
>      --deployment-files {ir_tripleo_overcloud_deployment_files} \
>      -e @cleanup_services.yml \
>      {ir_tripleo_overcloud_reboot_override|}

Are any services being put into {ir_tripleo_overcloud_reboot_cleanup_services} ?

FTR code for this is here: https://github.com/redhat-openstack/infrared/blob/master/plugins/tripleo-overcloud/overcloud_reboot.yml

Comment 5 Michele Baldessari 2018-05-08 15:31:08 UTC
So here is my walk-through of your overcloud reboot code (do correct me if I am wrong); I'll just walk through the controllers part, which is the one relevant to this BZ. A rough shell sketch of the sequence follows the list:
1) Stop the pcmk cluster on the node:
https://github.com/redhat-openstack/infrared/blob/master/plugins/tripleo-overcloud/overcloud_reboot.yml#L81
2) Sleep 5 + shutdown -r now and wait until openssh on the node is reachable again
3) Once the node is up you do a 'pcs cluster start --wait=300'
4) You wait for the cluster to see the other nodes via 
https://github.com/redhat-openstack/infrared/blob/master/plugins/tripleo-overcloud/overcloud_reboot.yml#L153
5) You wait for the cluster to not have any stopped resources via
https://github.com/redhat-openstack/infrared/blob/master/plugins/tripleo-overcloud/overcloud_reboot.yml#L177
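
A rough shell sketch of that per-controller sequence (illustration only, not the actual infrared tasks; the ssh user, node names and timeouts are placeholders):

for node in controller-0 controller-1 controller-2; do
    ssh heat-admin@$node 'sudo pcs cluster stop'                           # step 1
    ssh heat-admin@$node 'sleep 5 && sudo shutdown -r now' || true         # step 2
    until ssh -o ConnectTimeout=5 heat-admin@$node true; do sleep 10; done # wait for sshd to come back
    ssh heat-admin@$node 'sudo pcs cluster start --wait=300'               # step 3
    ssh heat-admin@$node 'sudo pcs status'                                 # steps 4/5: check nodes and resources
done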

The problem is that 5) does not work:
1) You are grepping for 'Stopped:', which does not appear in the output in such a situation:
Online: [ overcloud-controller-0 overcloud-controller-2 ]
OFFLINE: [ overcloud-controller-1 ]
GuestOnline: [ galera-bundle-0@overcloud-controller-0 galera-bundle-2@overcloud-controller-2 rabbitmq-bundle-0@overcloud-controller-0 rabbitmq-bundle-2@overcloud-controller-2 redis-bundle-0@overcloud-controller-0 redis-bundle-2@overcloud-controller-2 ]

Full list of resources:

 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started overcloud-controller-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Stopped
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started overcloud-controller-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master overcloud-controller-0
   galera-bundle-1	(ocf::heartbeat:galera):	Stopped
   galera-bundle-2	(ocf::heartbeat:galera):	Master overcloud-controller-2
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Master overcloud-controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Stopped
   redis-bundle-2	(ocf::heartbeat:redis):	Slave overcloud-controller-2
 ip-192.168.24.13	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0
 ip-172.20.0.200	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0
 ip-172.17.0.14	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2
 ip-172.17.0.19	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0
 ip-172.18.0.19	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2
 ip-172.19.0.19	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started overcloud-controller-0
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Stopped
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started overcloud-controller-2
 Docker container: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-docker-0	(ocf::heartbeat:docker):	Started overcloud-controller-0

2) Also, you are appending '|| /bin/true', so the exit code will never be non-zero, which means that the later fail_stop check will never fail.

So basically what you are doing is rebooting a node before the previously rebooted node has all its resources up, and you are doing this right when rabbit is about to start. Rabbit won't cope well with a restart exactly while the rabbit cluster is forming.
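
A minimal sketch of what a stricter version of step 5 could look like (illustration only, not the actual infrared task; the timeout value is a placeholder):

timeout=600
while sudo pcs status | grep -qE 'Stopped|OFFLINE'; do
    sleep 10
    timeout=$((timeout - 10))
    if [ "$timeout" -le 0 ]; then
        echo "cluster still has stopped/offline resources" >&2
        exit 1    # no '|| /bin/true' here, so the failure actually propagates
    fi
done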

Comment 6 Assaf Muller 2018-05-14 12:24:39 UTC
*** Bug 1574980 has been marked as a duplicate of this bug. ***

Comment 7 Assaf Muller 2018-05-14 12:25:23 UTC
Matching severity and blocker flag status of a duplicate of this bug.

Comment 14 Andrew Beekhof 2018-08-07 06:30:33 UTC
So is this a testing artefact or something we plan to fix in the agent?

Comment 15 Michele Baldessari 2018-08-09 15:49:00 UTC
(In reply to Andrew Beekhof from comment #14)
> So is this a testing artefact or something we plan to fix in the agent?

agent fix (as discussed today). Likely fix is in https://github.com/ClusterLabs/resource-agents/pull/1188
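
The doc text describes the fix as retrying the entire start action until it succeeds or the start timeout is reached. A rough OCF-style sketch of that idea (illustration only, not the code from PR 1188; rmq_try_start and rmq_stop_local are hypothetical helpers):

rmq_start() {
    # derive a deadline from the operation timeout Pacemaker passes in (milliseconds)
    local deadline=$(( $(date +%s) + ${OCF_RESKEY_CRM_meta_timeout:-120000} / 1000 - 5 ))
    while ! rmq_try_start; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            ocf_log err "rabbitmq did not start and join the cluster before the start timeout"
            return $OCF_ERR_GENERIC
        fi
        rmq_stop_local    # clean up the half-started node before retrying the whole start
        sleep 5
    done
    return $OCF_SUCCESS
}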

Comment 16 Michele Baldessari 2018-11-21 14:10:04 UTC
Oyvind, can you pull in the fix at comment 15 when you get to it?

Thanks,
Michele

Comment 28 pkomarov 2019-01-28 10:52:02 UTC
Verified via automation: 
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/DFG/view/upgrades/view/update/job/DFG-upgrades-updates-13-from-z3-composable-ipv6/7/artifact/.sh/ir-tripleo-overcloud-reboot.log

section : 

TASK [check for any stopped pcs resources] *************************************
task path: /home/rhos-ci/jenkins/workspace/DFG-upgrades-updates-13-from-z3-composable-ipv6/infrared/plugins/tripleo-overcloud/overcloud_reboot.yml:208
Sunday 27 January 2019  16:04:23 +0000 (0:00:00.056)       0:16:07.014 ******** 
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using 
`result|failed` use `result is failed`. This feature will be removed in version
 2.9. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: '{{ install.deployment.files | basename }}' ==
'virt'

skipping: [messaging-1] => (item=Stopped)  => {
    "changed": false, 
    "item": "Stopped", 
    "skip_reason": "Conditional result was False"
}
skipping: [messaging-1] => (item=Starting)  => {
    "changed": false, 
    "item": "Starting", 
    "skip_reason": "Conditional result was False"
}
skipping: [messaging-1] => (item=Promoting)  => {
    "changed": false, 
    "item": "Promoting", 
    "skip_reason": "Conditional result was False"
}

Comment 30 errata-xmlrpc 2019-08-06 12:01:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2012

