Description of problem:

In the basic situation, the following command shows the list of consumers for the rabbitmq cluster:

$ sudo docker exec $(sudo docker ps -f name=rabbitmq-bundle -q) rabbitmqctl list_consumers

However, after we restart all the controller nodes one by one, the command no longer returns a result; it gets stuck at some point:

~~~
[heat-admin@controller-0 ~]$ sudo docker exec $(sudo docker ps -f name=rabbitmq-bundle -q) rabbitmqctl list_consumers
Listing consumers
l3_agent_fanout_73c3e66599a648379e3149a85af6bd8e <rabbit.5979.0> 3 true 0 []
q-l3-plugin_fanout_b2e4eb3ef5d641dfb189002c4d6f9133 <rabbit.10747.0> 3 true 0 []
...
neutron-vo-SubPort-1.0.compute-0.localdomain <rabbit.6308.0> 2 true 0 []
q-reports-plugin_fanout_46dab178092e4ac986bb4bbad877ac99 <rabbit.6053.0> 3 true 0 []
q-reports-plugin_fanout_94b1171459d24ad5b79f483902c03223 <rabbit.5904.0> 3 true 0 []
trunk_fanout_b5424ee9eeaf47f480394907b4e03e49 <rabbit.6260.0> 3 true 0 []
~~~

We do not see any specific error on the console, nor in the rabbitmq logs. To recover from this situation, we need to restart all the controller nodes.

Version-Release number of selected component (if applicable):
~~~
$ sudo docker images | grep rabbitmq
192.168.24.1:8787/rhosp13/openstack-rabbitmq   2019-01-10.1   766efb5b9b38   3 months ago   635 MB
192.168.24.1:8787/rhosp13/openstack-rabbitmq   pcmklatest     766efb5b9b38   3 months ago   635 MB

$ sudo docker exec $(sudo docker ps -f name=rabbitmq-bundle -q) rpm -qa | grep rabbitmq
rabbitmq-server-3.6.15-3.el7ost.noarch
puppet-rabbitmq-8.1.1-0.20180216013831.d4b06b7.el7ost.noarch
~~~
RHOSP13

How reproducible:
Always

Steps to Reproduce:
1. Restart all the controllers one by one, with "sudo reboot".
2. Run the said command on one of the controllers.

Actual results:
The command gets stuck.

Expected results:
The command should return the complete consumer list without any error or hang.

Additional info:
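For diagnosis, a hang like this can be distinguished from a slow-but-successful run by wrapping the command in GNU coreutils `timeout`, which kills the child and exits with status 124 when the limit is exceeded. This is a sketch, not part of the original report; the 30-second limit is an arbitrary choice, and `sleep 3` stands in for the hanging command so the sketch runs without a rabbitmq container.

```shell
#!/bin/sh
# Sketch: detect a hung rabbitmqctl call via `timeout` (exit 124 = timed out).
# The real invocation would look like:
#   timeout 30 sudo docker exec "$(sudo docker ps -f name=rabbitmq-bundle -q)" \
#       rabbitmqctl list_consumers
# Here `sleep 3` with a 1-second limit simulates the hang.
status=0
timeout 1 sleep 3 || status=$?
if [ "$status" -eq 124 ]; then
    echo "command hung past the limit"
else
    echo "command finished with status $status"
fi
```

Running this from cron or a monitoring script would have flagged the stuck `list_consumers` instead of leaving an operator's shell blocked indefinitely.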
Could this be related to the following bug in rabbitmq? https://bugzilla.redhat.com/show_bug.cgi?id=1592528
*** This bug has been marked as a duplicate of bug 1592528 ***