Description of problem:

Bug in the log output in hardware.py: "Not enough available memory to schedule instance" prints the host cell's total memory instead of its available memory.

Version-Release number of selected component (if applicable):

Additional info:

When nova fails scheduling, it will print:

~~~
2017-11-29 10:50:16.904 325123 DEBUG nova.virt.hardware [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] Not enough available memory to schedule instance. Oversubscription is not possible with pinned instances. Required: 32768, actual: 65406 _numa_fit_instance_cell_with_pinning /usr/lib/python2.7/site-packages/nova/virt/hardware.py:845
2017-11-29 10:50:16.904 325123 DEBUG oslo_concurrency.lockutils [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.021s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2017-11-29 10:50:16.905 325123 DEBUG nova.compute.manager [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] [instance: 86cc16bd-5f51-402b-aa03-01ab0c3ffaf4] Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology. _build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:1934
~~~

This of course looks very confusing (we need 32 GB, we supposedly have 64 GB, so why is this failing?). The problem is the log output itself, which reports the wrong value.

/usr/lib/python2.7/site-packages/nova/virt/hardware.py:

~~~
840         if host_cell.avail_memory < instance_cell.memory:
841             LOG.debug('Not enough available memory to schedule instance. '
842                       'Oversubscription is not possible with pinned instances. '
843                       'Required: %(required)s, actual: %(actual)s',
844                       {'required': instance_cell.memory,
845                        'actual': host_cell.memory})
846             return
~~~

This should be:

~~~
840         if host_cell.avail_memory < instance_cell.memory:
841             LOG.debug('Not enough available memory to schedule instance. '
842                       'Oversubscription is not possible with pinned instances. '
843                       'Required: %(required)s, actual: %(actual)s',
844                       {'required': instance_cell.memory,
845                        'actual': host_cell.avail_memory})
846             return
~~~

Or even better:

~~~
840         if host_cell.avail_memory < instance_cell.memory:
841             LOG.debug('Not enough available memory to schedule instance. '
842                       'Oversubscription is not possible with pinned instances. '
843                       'Required: %(required)s, actual: %(actual)s, total: %(total)s',
844                       {'required': instance_cell.memory,
845                        'actual': host_cell.avail_memory,
846                        'total': host_cell.memory})
847             return
~~~
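To make the difference concrete, here is a minimal standalone sketch (not actual nova code; the `HostCell` class and the memory_usage figure are hypothetical stand-ins for nova's NUMA cell objects) showing why logging `host_cell.memory` produces the confusing output above, while `host_cell.avail_memory` reports the value the check actually compares against:

~~~
import logging

logging.basicConfig(level=logging.DEBUG, format='%(message)s')
LOG = logging.getLogger(__name__)


class HostCell(object):
    """Hypothetical stand-in for a host NUMA cell (all values in MB)."""

    def __init__(self, memory, memory_usage):
        self.memory = memory              # total memory of the cell
        self.memory_usage = memory_usage  # memory already claimed by pinned instances

    @property
    def avail_memory(self):
        # The value the scheduling check actually compares against
        return self.memory - self.memory_usage


# Illustrative numbers: 65406 MB total, with enough already claimed
# that less than the required 32768 MB remains free.
host_cell = HostCell(memory=65406, memory_usage=40000)
required = 32768  # instance_cell.memory

if host_cell.avail_memory < required:
    # Buggy variant: logs total memory, so the line reads as if a
    # 32 GB request were being rejected on a host with 64 GB free.
    LOG.debug('Required: %(required)s, actual: %(actual)s',
              {'required': required, 'actual': host_cell.memory})

    # Fixed variant: logs the available memory (plus the total),
    # which makes the rejection self-explanatory.
    LOG.debug('Required: %(required)s, actual: %(actual)s, total: %(total)s',
              {'required': required,
               'actual': host_cell.avail_memory,
               'total': host_cell.memory})
~~~

With these made-up numbers, the buggy variant prints "Required: 32768, actual: 65406" even though only 25406 MB are free, while the fixed variant prints "Required: 32768, actual: 25406, total: 65406".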
Thanks for the fix upstream. Once it is merged, I will take care of the backport.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:0369