+++ This bug was initially created as a clone of Bug #1265289 +++

When CFME disconnects from the RabbitMQ service, it leaves behind queues that are still bound to exchanges. The major problem with leaving the queues behind is that they continue to receive messages, which causes the RabbitMQ process to consume memory without bound until it dies unexpectedly.

How to reproduce:

0. Install the RabbitMQ Management Plugin (https://www.rabbitmq.com/management.html) on an OpenStack environment. (Double check it's not installed first.)
1. Add CFME to the OpenStack environment and wait for the event catcher worker to fully start.
2. Browse to the RabbitMQ Management web UI (http://openstack_ip:15672), log in with the Rabbit credentials, click the "Queues" tab, and look for queues named "miq-<cfme-ip_address>":
   * miq-<cfme-ip_address>-cinder
   * miq-<cfme-ip_address>-glance
   * miq-<cfme-ip_address>-nova
   * miq-<cfme-ip_address>-quantum
3. Disconnect CFME from the OpenStack environment (remove the provider from CFME).
4. Browse to the RabbitMQ Management web UI again and look for queues named "miq-<cfme-ip_address>".

Expected results:

There should be no queues named "miq-<cfme-ip_address>".

Actual results:

Four queues remain:

* miq-<cfme-ip_address>-cinder
* miq-<cfme-ip_address>-glance
* miq-<cfme-ip_address>-nova
* miq-<cfme-ip_address>-quantum

--- Additional comment from Greg Blomquist on 2015-09-22 11:01:53 EDT ---

For CFME 5.3 and earlier, the queue that's left behind is actually called "notifications.*". That's a literal "notifications.*". Note that there are OpenStack-specific queues called "notifications.info" and "notifications.error" that will remain after CFME is disconnected.

After this bug is fixed, when no CFME is connected to an OpenStack environment, there should be no queues named:

* miq-<cfme-ip_address>-cinder
* miq-<cfme-ip_address>-glance
* miq-<cfme-ip_address>-nova
* miq-<cfme-ip_address>-quantum
* notifications.*
CFME 5.3 is out of support
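For verification, the leftover-queue check above can be scripted. Below is a minimal sketch (not part of CFME) that, given a list of queue names fetched from the RabbitMQ Management HTTP API (`GET /api/queues`) or from `rabbitmqctl list_queues`, flags the queues this bug says should have been deleted on disconnect. The helper name, the example IP, and the fixed suffix list are illustrative assumptions based only on the queue names listed in this report.

```python
# Per-service suffixes of the CFME queues named in this bug report.
CFME_SUFFIXES = ("cinder", "glance", "nova", "quantum")

def leftover_cfme_queues(queue_names, cfme_ip):
    """Return the queues CFME should have removed when the provider
    was disconnected (hypothetical helper; names from this report)."""
    leftovers = []
    for name in queue_names:
        # CFME 5.4+ style: miq-<cfme-ip_address>-<service>
        if any(name == "miq-%s-%s" % (cfme_ip, suffix)
               for suffix in CFME_SUFFIXES):
            leftovers.append(name)
        # CFME 5.3 and earlier leave a queue literally named
        # "notifications.*". OpenStack's own "notifications.info" and
        # "notifications.error" queues are expected to remain, so only
        # the literal name is flagged.
        elif name == "notifications.*":
            leftovers.append(name)
    return leftovers

# Example with an assumed CFME appliance IP of 10.0.0.5:
queues = ["notifications.info", "notifications.error", "notifications.*",
          "miq-10.0.0.5-nova", "miq-10.0.0.5-cinder", "ceilometer"]
print(leftover_cfme_queues(queues, "10.0.0.5"))
# → ['notifications.*', 'miq-10.0.0.5-nova', 'miq-10.0.0.5-cinder']
```

After the fix, running this against the queue list of an OpenStack environment with no connected CFME should return an empty list.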