Bug 1642515 - Regression: cpu_allocation_ratio doesn't update existing providers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 14.0 (Rocky)
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-24 14:55 UTC by awaugama
Modified: 2023-03-21 19:02 UTC
CC: 11 users

Fixed In Version: openstack-nova-18.0.3-0.20181011032837.d1243fe.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-11 11:54:26 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1799727 0 None None None 2018-10-24 14:55:45 UTC
OpenStack gerrit 613115 0 None MERGED Add functional recreate test for bug 1799727 2020-04-22 00:45:06 UTC
OpenStack gerrit 613126 0 None MERGED Provide allocation_ratio/reserved amounts from update_provider_tree() 2020-04-22 00:45:06 UTC
Red Hat Product Errata RHEA-2019:0045 0 None None None 2019-01-11 11:54:32 UTC

Description awaugama 2018-10-24 14:55:46 UTC
After changing the value of cpu_allocation_ratio in nova.conf from 16 to 1 and restarting the nova containers, the ProviderTree still uses the old value (16.0).

(A patch with extra debugging is applied to the system for ProviderTree information)

nova.conf setting:

cpu_allocation_ratio=1

[root@compute-0 ~]# docker restart nova_compute nova_libvirt
nova_compute
nova_libvirt

In nova-compute.log:

2018-10-23 19:19:49.217 1 DEBUG oslo_service.service [req-9539f623-b342-4c5d-ab93-6ffacdbd8358 - - - - -] cpu_allocation_ratio = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:3023

2018-10-23 19:19:51.990 1 DEBUG nova.scheduler.client.report [req-9490c7f4-2157-44ef-a81c-ea3e6bf9be21 - - - - -] Updating ProviderTree inventory for provider ca60934d-074d-4628-ae61-3c3bbc9e5543 from _refresh_and_get_inventory using data: {u'VCPU': {u'allocation_ratio': 16.0, u'total': 6, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 6}, u'MEMORY_MB': {u'allocation_ratio': 1.0, u'total': 6143, u'reserved': 4096, u'step_size': 1, u'min_unit': 1, u'max_unit': 6143}, u'DISK_GB': {u'allocation_ratio': 1.0, u'total': 19, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 19}} _refresh_and_get_inventory /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:754
2018-10-23 19:19:51.990 1 DEBUG nova.compute.provider_tree [req-9490c7f4-2157-44ef-a81c-ea3e6bf9be21 - - - - -] Updating inventory in ProviderTree for provider ca60934d-074d-4628-ae61-3c3bbc9e5543 with inventory: {u'VCPU': {u'allocation_ratio': 16.0, u'total': 6, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 6}, u'MEMORY_MB': {u'allocation_ratio': 1.0, u'total': 6143, u'reserved': 4096, u'step_size': 1, u'min_unit': 1, u'max_unit': 6143}, u'DISK_GB': {u'allocation_ratio': 1.0, u'total': 19, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 19}} update_inventory /usr/lib/python2.7/site-packages/nova/compute/provider_tree.py:172
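The log above shows the mismatch: oslo.config has loaded cpu_allocation_ratio = 1.0, yet the VCPU inventory refreshed into the ProviderTree still carries allocation_ratio = 16.0. A minimal sketch of the failure mode (function names here are illustrative, not actual Nova code): if the virt driver's reported inventory omits allocation_ratio, the value previously stored in placement is re-read on refresh and survives the restart, so the new conf value never reaches the provider. The merged fix (gerrit 613126) addresses this by having update_provider_tree() supply allocation_ratio/reserved amounts explicitly.

```python
# Hypothetical simplification of the refresh path; refresh_inventory
# and apply_conf_ratios are illustrative names, not Nova functions.

def refresh_inventory(placement_view, driver_inventory):
    """Merge driver-reported inventory over the placement view.

    If the driver omits allocation_ratio, the value already stored
    in placement wins -- the buggy pre-fix behavior.
    """
    merged = {}
    for rc, fields in placement_view.items():
        merged[rc] = dict(fields)
        merged[rc].update(driver_inventory.get(rc, {}))
    return merged


def apply_conf_ratios(driver_inventory, conf):
    """Post-fix behavior: the driver injects the ratio from nova.conf."""
    updated = {rc: dict(f) for rc, f in driver_inventory.items()}
    updated['VCPU']['allocation_ratio'] = conf['cpu_allocation_ratio']
    return updated


placement_view = {'VCPU': {'total': 6, 'allocation_ratio': 16.0}}
driver_inventory = {'VCPU': {'total': 6}}  # driver reports no ratio
conf = {'cpu_allocation_ratio': 1.0}

# Pre-fix: the stale 16.0 from placement survives the refresh.
before = refresh_inventory(placement_view, driver_inventory)
print(before['VCPU']['allocation_ratio'])  # 16.0

# Post-fix: the driver supplies the conf value, so 1.0 takes effect.
after = refresh_inventory(placement_view,
                          apply_conf_ratios(driver_inventory, conf))
print(after['VCPU']['allocation_ratio'])  # 1.0
```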

Comment 1 Stephen Finucane 2018-10-24 14:59:31 UTC
For reference, there were two patches applied:

- https://review.openstack.org/#/c/597560/7
- https://review.openstack.org/#/c/597553/3

These were applied onto commit 96a757ddf99cc82b2a9b201bd849f21cda0d3207 ("placement: Always reset conf.CONF when starting the wsgi app").

Comment 10 errata-xmlrpc 2019-01-11 11:54:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0045

