In RHOS 5.0, the periodic task tied to the bandwidth_poll_interval configuration option is automatically turned into a no-op when the underlying driver does not support it. In RHOS 4.0, this task does unnecessary work every time it runs, including querying the conductor service for the details of all running instances on the compute host. This puts needless load on the database, the conductor, the message bus, and the compute host itself. The task is only relevant for the xenapi driver, so we should backport the patch that turns it off automatically.

commit 4f82543ac7427638fec7e286bbb84fd7b3e3e9f3
Author: Phil Day <philip.day>
Date:   Fri Dec 6 23:54:43 2013 +0000

    Make it possible to disable polling for bandwidth usage

    Bandwidth usage is only supported by some hypervisor drivers, but the
    periodic task always runs and asks conductor for a list of instances
    before it gets a NotImplementedError from the virt driver.

    This change allows polling to be disabled by setting
    bandwidth_poll_interval to 0, which is consistent with other periodic
    task interval settings and avoids the wasted conductor call. It also
    automatically disables bandwidth polling if the driver raises
    NotImplementedError.

    Change-Id: I2ac9c967c5ceafffc39a0a372146c762891a08b8
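The auto-disable behavior the commit describes can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual Nova code; the class and method names (ComputeManagerSketch, poll_bandwidth_usage, get_all_bw_counters on the fake driver) are hypothetical stand-ins:

```python
# Sketch of the auto-disable pattern: the periodic task runs normally until
# the driver raises NotImplementedError, after which it becomes a cheap
# no-op instead of querying conductor again on every tick.

class FakeDriver:
    """Stand-in for a hypervisor driver without bandwidth counters."""
    def get_all_bw_counters(self, instances):
        raise NotImplementedError()


class ComputeManagerSketch:
    def __init__(self, driver, poll_interval=60):
        self.driver = driver
        self.poll_interval = poll_interval
        self._bw_usage_supported = True
        self.conductor_calls = 0  # counts the expensive work we want to avoid

    def poll_bandwidth_usage(self):
        # An interval of 0 disables the task entirely, consistent with
        # other periodic task interval settings.
        if self.poll_interval <= 0 or not self._bw_usage_supported:
            return
        # Stands in for the conductor call that fetches all instances
        # on this compute host.
        self.conductor_calls += 1
        instances = []
        try:
            self.driver.get_all_bw_counters(instances)
        except NotImplementedError:
            # Remember that the driver cannot do this and never ask again.
            self._bw_usage_supported = False


manager = ComputeManagerSketch(FakeDriver())
for _ in range(3):
    manager.poll_bandwidth_usage()
# Only the first run pays the conductor cost; later runs return immediately,
# matching the single "Updating bandwidth usage cache" line in the logs below.
```

Running the loop three times performs the conductor query only once, which is exactly the log behavior shown with the patch applied.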
Without the patch (with the periodic task interval set to 1 minute):

2014-05-28 11:30:39.372 15114 INFO nova.compute.manager [req-560be7dc-1c0c-454e-9100-acfaf9a533b4 None None] Updating bandwidth usage cache
...
2014-05-28 11:31:39.776 15114 INFO nova.compute.manager [-] Updating bandwidth usage cache
...
2014-05-28 11:32:40.232 15114 INFO nova.compute.manager [-] Updating bandwidth usage cache

With the patch, that only happens once:

2014-05-28 11:34:06.097 15304 INFO nova.compute.manager [-] Updating bandwidth usage cache
2014-05-28 11:34:06.126 15304 WARNING nova.compute.manager [-] Bandwidth usage not supported by hypervisor.
2014-05-28 11:35:06.156 15304 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0578.html