When CFME disconnects from the RabbitMQ service, it leaves behind queues that are still bound to exchanges. The problem with leaving these queues behind is that they continue to receive messages with no consumer to drain them, so the RabbitMQ process consumes memory without bound until it dies unexpectedly.

How to reproduce:

0. Install the RabbitMQ Management Plugin (https://www.rabbitmq.com/management.html) on an OpenStack environment. (Double-check that it is not already installed first.)
1. Add CFME to the OpenStack environment and wait for the event catcher worker to fully start.
2. Browse to the RabbitMQ Management web UI (http://openstack_ip:15672), log in with the Rabbit credentials, click the "Queues" tab, and look for queues named "miq-<cfme-ip_address>":
   * miq-<cfme-ip_address>-cinder
   * miq-<cfme-ip_address>-glance
   * miq-<cfme-ip_address>-nova
   * miq-<cfme-ip_address>-quantum
3. Disconnect CFME from the OpenStack environment (remove the provider from CFME).
4. Browse to the RabbitMQ Management web UI again and look for queues named "miq-<cfme-ip_address>".

Expected results:
There should be no queues named "miq-<cfme-ip_address>".

Actual results:
Four queues remain:
* miq-<cfme-ip_address>-cinder
* miq-<cfme-ip_address>-glance
* miq-<cfme-ip_address>-nova
* miq-<cfme-ip_address>-quantum
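Steps 2 and 4 can be checked programmatically against the queue names the management UI reports. A minimal Ruby sketch, assuming you already fetched the list of queue names (the helper name and sample IP are illustrative, not from the codebase):

```ruby
# Given queue names reported by the RabbitMQ management UI/API and the
# CFME appliance IP, return the per-service queues CFME should have
# cleaned up on disconnect.
CFME_SERVICES = %w[cinder glance nova quantum].freeze

def leftover_cfme_queues(queue_names, cfme_ip)
  expected = CFME_SERVICES.map { |svc| "miq-#{cfme_ip}-#{svc}" }
  queue_names & expected # intersection keeps only CFME-owned queues
end

queues = ["notifications.info", "miq-10.0.0.5-nova", "miq-10.0.0.5-glance"]
leftover_cfme_queues(queues, "10.0.0.5")
# => ["miq-10.0.0.5-nova", "miq-10.0.0.5-glance"]
```

After step 3, this should return an empty array once the bug is fixed.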
For CFME 5.3 and earlier, the queue left behind is actually called "notifications.*". That is the literal string "notifications.*", not a wildcard pattern. Note that there are OpenStack-specific queues called "notifications.info" and "notifications.error" that are expected to remain after CFME is disconnected.

After this bug is fixed, when no CFME is connected to an OpenStack environment, there should be no queues called:
* miq-<cfme-ip_address>-cinder
* miq-<cfme-ip_address>-glance
* miq-<cfme-ip_address>-nova
* miq-<cfme-ip_address>-quantum
* notifications.*
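The distinction above (literal "notifications.*" is CFME's, "notifications.info"/"notifications.error" are OpenStack's) can be captured in a small predicate. A sketch, with a hypothetical helper name:

```ruby
# Returns true for queues that a disconnected CFME appliance orphaned,
# false for queues OpenStack services own themselves.
def cfme_orphan_queue?(name, cfme_ip)
  return true if name == "notifications.*"  # legacy queue, CFME 5.3 and earlier
  name.start_with?("miq-#{cfme_ip}-")       # per-service queues, later CFME
end

cfme_orphan_queue?("notifications.*", "10.0.0.5")    # => true
cfme_orphan_queue?("notifications.info", "10.0.0.5") # => false (OpenStack's own)
cfme_orphan_queue?("miq-10.0.0.5-nova", "10.0.0.5")  # => true
```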
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/57a4bd3de48a63a150e1e7aa15524326c66e7e76

commit 57a4bd3de48a63a150e1e7aa15524326c66e7e76
Author:     Greg Blomquist <gblomqui>
AuthorDate: Tue Sep 22 12:54:02 2015 -0400
Commit:     Greg Blomquist <gblomqui>
CommitDate: Tue Sep 22 12:54:02 2015 -0400

    Delete legacy rabbit queues

    When ManageIQ disconnects from the RabbitMQ service in OpenStack, it
    leaves behind the queue(s) used to bind to the messaging exchange for
    events from Nova, Glance, Cinder, etc...

    The problem with leaving these queues behind is that without a client
    to drain the queues, they keep filling up with messages from the
    notifications topics of the Nova, Glance, and Cinder exchanges.

    This patch will remove any legacy queues before attempting to create
    any new queues when ManageIQ connects to a RabbitMQ service in
    OpenStack.

    https://bugzilla.redhat.com/show_bug.cgi?id=1265289

 .../openstack/amqp/openstack_rabbit_event_monitor.rb | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
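The fix described in the commit message (delete legacy queues before declaring new ones on connect) can be sketched as follows. This is an illustration, not the actual patch; the method names are hypothetical, and `queue_delete` is the real Bunny AMQP client API that ManageIQ's rabbit event monitor could use for this:

```ruby
# Queue names a connecting appliance should clean up first.
LEGACY_QUEUES = ["notifications.*"].freeze # left behind by CFME 5.3 and earlier
SERVICES      = %w[cinder glance nova quantum].freeze

def queues_to_delete(cfme_ip)
  LEGACY_QUEUES + SERVICES.map { |svc| "miq-#{cfme_ip}-#{svc}" }
end

# Hypothetical cleanup step, run before declaring fresh queues on connect.
# `channel` would be a Bunny::Channel from an open connection.
def delete_legacy_queues(channel, cfme_ip)
  queues_to_delete(cfme_ip).each { |name| channel.queue_delete(name) }
end

queues_to_delete("10.0.0.5").first # => "notifications.*"
```

Deleting before declaring makes the connect path self-healing: stale queues from any earlier CFME version are removed regardless of which naming scheme created them.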
https://github.com/ManageIQ/manageiq/pull/4514
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/8c2dd990e7342509798d2a10da56a3f9d67fd2a1

commit 8c2dd990e7342509798d2a10da56a3f9d67fd2a1
Author:     Greg Blomquist <gblomqui>
AuthorDate: Thu Sep 24 10:31:06 2015 -0400
Commit:     Greg Blomquist <gblomqui>
CommitDate: Thu Sep 24 12:06:17 2015 -0400

    Delete unused rabbit queues

    fixed the use of `queue_name` out of scope

    https://bugzilla.redhat.com/show_bug.cgi?id=1265289

 .../amqp/openstack_rabbit_event_monitor.rb      |  3 ++-
 .../amqp/openstack_rabbit_event_monitor_spec.rb | 23 +++++++++++++++++-----
 2 files changed, 20 insertions(+), 6 deletions(-)
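The commit message notes that `queue_name` was used out of scope. A generic illustration of that class of bug in Ruby (not the actual patch): a local variable first assigned inside a block is not visible after the block ends.

```ruby
def broken_lookup(services)
  services.each do |svc|
    queue_name = "miq-#{svc}" # first assigned inside the block...
  end
  queue_name                  # ...so this raises NameError
end

def fixed_lookup(services)
  names = services.map { |svc| "miq-#{svc}" } # capture results explicitly
  names.last
end

fixed_lookup(%w[nova glance]) # => "miq-glance"
```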
Created an Azure stack and checked the UI. Working as submitted. Using 5.5.0.10 on https://10.16.7.101/miq_ae_customization/explorer

Moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2015:2551