Bug 1834938 - allocations database is not properly cleaned
Summary: allocations database is not properly cleaned
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: async
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-12 17:24 UTC by Artom Lifshitz
Modified: 2023-03-21 19:30 UTC (History)
9 users

Fixed In Version: openstack-nova-20.3.1-0.20200615113446.1a320f2.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-29 07:52:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:3148 0 None None None 2020-07-29 07:52:57 UTC

Description Artom Lifshitz 2020-05-12 17:24:19 UTC
This bug was initially created as a copy of Bug #1721068

I am copying this bug because: 



Description of problem:

The nova_api database is growing huge; it seems it is never cleaned.

MariaDB [nova_api]> select count(*) from request_specs;
+----------+
| count(*) |
+----------+
|  1126585 |
+----------+

MariaDB [nova_api]> select count(*) from consumers;
+----------+
| count(*) |
+----------+
|   444235 |
+----------+

We have "nova-manage db archive_deleted_rows" running daily on each of our clouds, but the tables above seem to pile up regardless.
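A plausible reason the daily archive run does not shrink these tables: archiving only moves rows that carry a soft-delete marker, and nothing ever marks request_specs rows as deleted when the instance goes away. A minimal sqlite sketch of that behaviour (the table layout here is a hypothetical simplification, not the real nova_api schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Simplified stand-ins: instances has a soft-delete marker, request_specs does not.
cur.execute("CREATE TABLE instances (uuid TEXT, deleted INTEGER)")
cur.execute("CREATE TABLE shadow_instances (uuid TEXT, deleted INTEGER)")
cur.execute("CREATE TABLE request_specs (instance_uuid TEXT)")

cur.executemany("INSERT INTO instances VALUES (?, ?)",
                [("i-1", 0), ("i-2", 1)])           # i-2 was soft-deleted
cur.executemany("INSERT INTO request_specs VALUES (?)",
                [("i-1",), ("i-2",)])

# What archiving conceptually does: move soft-deleted rows to a shadow table.
cur.execute("INSERT INTO shadow_instances "
            "SELECT * FROM instances WHERE deleted != 0")
cur.execute("DELETE FROM instances WHERE deleted != 0")

# request_specs has no deleted column, so the archiver never touches it.
remaining = cur.execute("SELECT COUNT(*) FROM request_specs").fetchone()[0]
print(remaining)  # → 2: both request_spec rows survive the archive run
```

This matches the symptom reported: the archive job keeps the instances table bounded while request_specs grows without limit.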

I've been looking at similar findings yesterday and today across several clouds, and it seems there are two culprits that leave allocations behind which effectively "consume" cloud resources that are no longer there:

- migrations -- sometimes a migration makes a reservation and fails; the instance comes back up, but the reservation never gets cleared
- upgrades / maintenance / crashes of the DB or RabbitMQ -- high numbers of stale allocations seem to correlate with clouds recently upgraded from RHOSP 11 to 13 (the upgrade uses our own installer, following the upstream guidelines of disabling services, running offline data migrations, and bringing the services back up)
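The effect of both failure modes above can be sketched as a set difference: an allocation is orphaned when its consumer UUID belongs to neither a live instance nor an in-progress migration. A hedged illustration with hypothetical UUIDs (a real audit would have to pull these sets from the placement API and the cell databases):

```python
# Hypothetical consumer UUID sets; in reality these come from placement,
# the instances table, and the migrations table.
allocations = {"uuid-a", "uuid-b", "uuid-c", "uuid-m1"}
live_instances = {"uuid-a", "uuid-b"}
in_progress_migrations = {"uuid-m1"}  # migration-owned consumers are legitimate

# Orphaned: no live instance and no in-progress migration owns the allocation.
orphans = allocations - live_instances - in_progress_migrations
print(sorted(orphans))  # → ['uuid-c']
```

Allocations in `orphans` are the ones that still "consume" resources in placement even though nothing backs them anymore.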

Entries in consumers/request_specs date back all the way to the original creation of the cloud; it seems these never get cleared.
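One way to answer "which request_specs are stale" is an anti-join against the instances table. A sqlite sketch under the same hypothetical simplified schema (the real query would also need to account for soft-deleted and cross-cell instances, so treat this as illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical simplified schema, not the real nova_api one.
cur.execute("CREATE TABLE instances (uuid TEXT)")
cur.execute("CREATE TABLE request_specs (instance_uuid TEXT)")
cur.executemany("INSERT INTO instances VALUES (?)", [("i-1",)])
cur.executemany("INSERT INTO request_specs VALUES (?)",
                [("i-1",), ("i-gone",)])

# Anti-join: request_specs rows whose instance no longer exists anywhere.
stale = cur.execute("""
    SELECT rs.instance_uuid
    FROM request_specs rs
    LEFT JOIN instances i ON i.uuid = rs.instance_uuid
    WHERE i.uuid IS NULL
""").fetchall()
print(stale)  # → [('i-gone',)]
```

Rows returned here are candidates for deletion; running something like this periodically would keep request_specs from growing unbounded.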


Can we have a query or tool to properly clear the database of unused entries?

Version-Release number of selected component (if applicable):
OSP13

How reproducible:
unsure

Steps to Reproduce:
unsure

Actual results:
Entries in the database are left there forever.

Expected results:
Entries are removed, or a way to properly clean the database is provided.

Additional info:

Comment 2 Bogdan Dobrelya 2020-06-16 08:48:14 UTC
Could you please clarify what is still missing, now that https://review.opendev.org/670112 has been created for OSP13 and https://access.redhat.com/solutions/4420801 and https://access.redhat.com/solutions/3537351 have been published?

Comment 9 Alex McLeod 2020-07-21 11:13:12 UTC
If this bug requires doc text for errata release, please set the 'Doc Type' and provide draft text according to the template in the 'Doc Text' field. The documentation team will review, edit, and approve the text.

If this bug does not require doc text, please set the 'requires_doc_text' flag to -.

Comment 11 errata-xmlrpc 2020-07-29 07:52:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3148

