Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1537045

Summary: Bug in log output in hardware.py "Not enough available memory to schedule instance" prints full memory instead of available memory
Product: Red Hat OpenStack Reporter: Sahid Ferdjaoui <sferdjao>
Component: openstack-nova    Assignee: Sahid Ferdjaoui <sferdjao>
Status: CLOSED ERRATA QA Contact: Joe H. Rahme <jhakimra>
Severity: medium Docs Contact:
Priority: medium    
Version: 11.0 (Ocata)    CC: akaris, berrange, dasmith, eglynn, jhakimra, kchamart, lyarwood, sbauza, sferdjao, sgordon, srevivo, stephenfin, vromanso
Target Milestone: async    Keywords: Triaged, ZStream
Target Release: 11.0 (Ocata)   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version: openstack-nova-15.0.8-5.el7ost Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1519054
Clones: 1537047    Environment:
Last Closed: 2018-02-13 16:27:18 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1519054    
Bug Blocks: 1537047    

Description Sahid Ferdjaoui 2018-01-22 10:37:16 UTC
+++ This bug was initially created as a clone of Bug #1519054 +++

Description of problem:
Bug in log output in hardware.py "Not enough available memory to schedule instance" prints full memory instead of available memory

Version-Release number of selected component (if applicable):


Additional info:

When nova fails to schedule the instance, it prints:
~~~
2017-11-29 10:50:16.904 325123 DEBUG nova.virt.hardware [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] Not enough available memory to schedule instance. Oversubscription is not possible with pinned instances. Required: 32768, actual: 65406 _numa_fit_instance_cell_with_pinning /usr/lib/python2.7/site-packages/nova/virt/hardware.py:845
2017-11-29 10:50:16.904 325123 DEBUG oslo_concurrency.lockutils [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.021s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2017-11-29 10:50:16.905 325123 DEBUG nova.compute.manager [req-b62c53d2-13db-4fac-a125-409b4f046418 8f883df20fce46dbef3ce634610c51be53b87e658359f05b7eba1062ce7e5d8b 5b54e36678a542d899f1ff62268fc25a - - -] [instance: 86cc16bd-5f51-402b-aa03-01ab0c3ffaf4] Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology. _build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:1934
~~~

This of course looks very confusing (we need 32 GB and the host apparently has 64 GB, so why is this failing?).

The problem is the log output itself: the check compares against the cell's available memory, but the message prints the cell's total memory --- /usr/lib/python2.7/site-packages/nova/virt/hardware.py:
~~~
    840     if host_cell.avail_memory < instance_cell.memory:
    841         LOG.debug('Not enough available memory to schedule instance. '
    842                   'Oversubscription is not possible with pinned instances. '
    843                   'Required: %(required)s, actual: %(actual)s',
    844                   {'required': instance_cell.memory,
    845                    'actual': host_cell.memory})
    846         return
~~~
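
For illustration, here is a minimal, self-contained sketch (not nova code) of the same check. The values 32768 and 65406 come from the log above; avail_memory=20000 is a hypothetical stand-in for the memory still available after pinning. It shows how the message can report a total that comfortably exceeds the requirement even though the check failed on available memory:
~~~
# Minimal sketch, not nova code. 32768 and 65406 come from the log above;
# avail_memory=20000 is a hypothetical value for memory left after pinning.
class Cell(object):
    def __init__(self, memory, avail_memory=None):
        self.memory = memory            # total memory of the NUMA cell (MB)
        self.avail_memory = avail_memory  # memory still available (MB)

host_cell = Cell(memory=65406, avail_memory=20000)
instance_cell = Cell(memory=32768)

if host_cell.avail_memory < instance_cell.memory:
    # This is what the current code logs: the *total* cell memory...
    print('Required: %(required)s, actual: %(actual)s'
          % {'required': instance_cell.memory,
             'actual': host_cell.memory})
    # ...which prints "Required: 32768, actual: 65406" and hides the real
    # reason the check failed (only 20000 MB are still available).
~~~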

This should be:
~~~
    840     if host_cell.avail_memory < instance_cell.memory:
    841         LOG.debug('Not enough available memory to schedule instance. '
    842                   'Oversubscription is not possible with pinned instances. '
    843                   'Required: %(required)s, actual: %(actual)s',
    844                   {'required': instance_cell.memory,
    845                    'actual': host_cell.avail_memory})
    846         return
~~~

Or even better:
~~~
    840     if host_cell.avail_memory < instance_cell.memory:
    841         LOG.debug('Not enough available memory to schedule instance. '
    842                   'Oversubscription is not possible with pinned instances. '
    843                   'Required: %(required)s, actual: %(actual)s, total: %(total)s',
    844                   {'required': instance_cell.memory,
    845                    'actual': host_cell.avail_memory,
    846                    'total': host_cell.memory})
    847         return
~~~
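
With the same hypothetical numbers as in the sketch above (20000 MB still available), this variant would log something like:
~~~
Required: 32768, actual: 20000, total: 65406
~~~
which makes it immediately clear that the host has enough memory in total, just not enough left over for a pinned instance.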

--- Additional comment from Sahid Ferdjaoui on 2017-11-30 03:44:05 EST ---

Thanks for the fix upstream. Once it is merged, I will take care of the backport.

Comment 4 errata-xmlrpc 2018-02-13 16:27:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0314