Description of problem:

While testing cinder backup-export and cinder backup-import, I somehow hit the maximum quota for backups, and cinder quota-usage doesn't appear to be working:

(overcloud) [stack@director16 ~]$ openstack quota show admin
+-----------------------+----------------------------------+
| Field                 | Value                            |
+-----------------------+----------------------------------+
| backup-gigabytes      | 1000                             |
| backups               | 10                               |
| cores                 | 20                               |
| floating-ips          | 50                               |
| gigabytes             | 1000                             |
| gigabytes___DEFAULT__ | -1                               |
| gigabytes_tripleo     | -1                               |
| groups                | 10                               |
| health_monitors       | None                             |
| instances             | 10                               |
| key-pairs             | 100                              |
| l7_policies           | None                             |
| listeners             | None                             |
| load_balancers        | None                             |
| location              | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'accdc96fac254b2d95aa3a866c8969b0', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| name                  | None                             |
| networks              | 100                              |
| per-volume-gigabytes  | -1                               |
| pools                 | None                             |
| ports                 | 500                              |
| project               | accdc96fac254b2d95aa3a866c8969b0 |
| project_name          | admin                            |
| properties            | 128                              |
| ram                   | 51200                            |
| rbac_policies         | 10                               |
| routers               | 10                               |
| secgroup-rules        | 100                              |
| secgroups             | 10                               |
| server-group-members  | 10                               |
| server-groups         | 10                               |
| snapshots             | 10                               |
| snapshots___DEFAULT__ | -1                               |
| snapshots_tripleo     | -1                               |
| subnet_pools          | -1                               |
| subnets               | 100                              |
| volumes               | 10                               |
| volumes___DEFAULT__   | -1                               |
| volumes_tripleo       | -1                               |
+-----------------------+----------------------------------+

(overcloud) [stack@director16 ~]$ cinder quota-usage admin
+-----------------------+--------+----------+-------+-----------+
| Type                  | In_use | Reserved | Limit | Allocated |
+-----------------------+--------+----------+-------+-----------+
| backup_gigabytes      | 0      | 0        | 1000  |           |
| backups               | 0      | 0        | 10    |           |
| gigabytes             | 0      | 0        | 1000  |           |
| gigabytes___DEFAULT__ | 0      | 0        | -1    |           |
| gigabytes_tripleo     | 0      | 0        | -1    |           |
| groups                | 0      | 0        | 10    |           |
| per_volume_gigabytes  | 0      | 0        | -1    |           |
| snapshots             | 0      | 0        | 10    |           |
| snapshots___DEFAULT__ | 0      | 0        | -1    |           |
| snapshots_tripleo     | 0      | 0        | -1    |           |
| volumes               | 0      | 0        | 10    |           |
| volumes___DEFAULT__   | 0      | 0        | -1    |           |
| volumes_tripleo       | 0      | 0        | -1    |           |
+-----------------------+--------+----------+-------+-----------+

(overcloud) [stack@director16 ~]$ openstack volume backup list
+--------------------------------------+---------------+-------------+-----------+------+
| ID                                   | Name          | Description | Status    | Size |
+--------------------------------------+---------------+-------------+-----------+------+
| 1b1b2d06-6dc1-4a34-9f3b-7dd852ad3fa6 | None          | None        | available | 1    |
| 855aaaa5-e591-43b6-be20-a7150b9dc934 | attached-full | None        | available | 1    |
| 6ada70b0-ed1a-47de-9661-37517844cea1 | detached-full | None        | available | 1    |
| 55904715-fe70-4e4b-ac0d-458ceb4c9633 | None          | snap-backup | available | 1    |
| 5ca9ed1b-e719-47fa-bbfd-a57d5f1b877b | None          | None        | available | 1    |
+--------------------------------------+---------------+-------------+-----------+------+

(overcloud) [stack@director16 ~]$ cinder backup-import cinder.backup.drivers.ceph.CephBackupDriver eyJkcml2ZXJfaW5mbyI6IHt9LCAiaWQiOiAiODU1YWFhYTUtZTU5MS00M2I2LWJlMjAtYTcxNTBiOWRjOTM0IiwgInVzZXJfaWQiOiAiMWUwM2U0NDVlNWUzNDcyODhiNWViNTMxMjljZWU5ZTkiLCAicHJvamVjdF9pZCI6ICJhY2NkYzk2ZmFjMjU0YjJkOTVhYTNhODY2Yzg5NjliMCIsICJ2b2x1bWVfaWQiOiAiY2VlOWIyZDgtNGJhMS00MmIwLTlmMzYtNjIxZjFmYTk3YTgwIiwgImhvc3QiOiAib3ZlcmNsb3VkLWNvbnRyb2xsZXItMCIsICJhdmFpbGFiaWxpdHlfem9uZSI6ICJub3ZhIiwgImNvbnRhaW5lciI6ICJiYWNrdXBzIiwgInBhcmVudF9pZCI6IG51bGwsICJwYXJlbnQiOiBudWxsLCAic3RhdHVzIjogImF2YWlsYWJsZSIsICJmYWlsX3JlYXNvbiI6IG51bGwsICJzaXplIjogMSwgImRpc3BsYXlfbmFtZSI6ICJhdHRhY2hlZC1mdWxsIiwgImRpc3BsYXlfZGVzY3JpcHRpb24iOiBudWxsLCAic2VydmljZV9tZXRhZGF0YSI6ICJ7XCJiYXNlXCI6IFwidm9sdW1lLWNlZTliMmQ4LTRiYTEtNDJiMC05ZjM2LTYyMWYxZmE5N2E4MC5iYWNrdXAuODU1YWFhYTUtZTU5MS00M2I2LWJlMjAtYTcxNTBiOWRjOTM0XCJ9IiwgInNlcnZpY2UiOiAiY2luZGVyLmJhY2t1cC5kcml2ZXJzLmNlcGguQ2VwaEJhY2t1cERyaXZlciIsICJvYmplY3RfY291bnQiOiAwLCAidGVtcF92b2x1bWVfaWQiOiBudWxsLCAidGVtcF9zbmFwc2hvdF9pZCI6IG51bGwsICJudW1fZGVwZW5kZW50X2JhY2t1cHMiOiAwLCAic25hcHNob3RfaWQiOiBudWxsLCAiZGF0YV90aW1lc3RhbXAiOiAiMjAyMC0wMi0xMlQxNzowMjozOVoiLCAicmVzdG9yZV92b2x1bWVfaWQiOiBudWxsLCAibWV0YWRhdGEiOiB7fSwgImVuY3J5cHRpb25fa2V5X2lkIjogbnVsbCwgImNyZWF0ZWRfYXQiOiAiMjAyMC0wMi0xMlQxNzowMjozOVoiLCAidXBkYXRlZF9hdCI6ICIyMDIwLTAyLTEyVDE3OjAyOjU1WiIsICJkZWxldGVkX2F0IjogbnVsbCwgImRlbGV0ZWQiOiBmYWxzZX0=
ERROR: BackupLimitExceeded: Maximum number of backups allowed (10) exceeded (HTTP 413) (Request-ID: req-849e5e74-aaef-4714-bb17-f60767effcef)

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a volume backup
2. Use cinder backup-export to dump the backup's metadata
3. Log in to an overcloud controller, get into the galera container, and delete the backup's row from the cinder.backups table
4. Use cinder backup-import to import the backup

Repeat this a couple of times and you'll hit the quota (10).

Actual results:

Quota hit.

Expected results:

The quota should be tallied from the rows in the DB, but it sounds like its sum is stored elsewhere and increased with each insert into the DB, which may cause inconsistencies if there's some data loss somewhere.

Additional info:
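A minimal shell sketch of those steps (the volume/backup names, the <backup-id> placeholder, and the direct mysql access from inside the galera container are illustrative; adjust for your deployment):

$ openstack volume create --size 1 test-volume
$ openstack volume backup create --name test-backup test-volume
$ cinder backup-export <backup-id>     # prints backup_service and backup_url
$ mysql cinder -e "DELETE FROM backups WHERE id='<backup-id>';"   # inside the galera container
$ cinder backup-import <backup_service> <backup_url>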
(In reply to Darin Sorrentino from comment #0)
> Steps to Reproduce:
> 1. Create a volume backup
> 2. Use cinder backup-export to dump the backup's metadata
> 3. Log in to an overcloud controller, get into the galera container, and
> delete the backup's row from the cinder.backups table
> 4. Use cinder backup-import to import the backup
>
> Repeat this a couple of times and you'll hit the quota (10).
>
> Actual results:
>
> Quota hit.
>
> Expected results:
>
> The quota should be tallied from the rows in the DB, but it sounds like its
> sum is stored elsewhere and increased with each insert into the DB, which
> may cause inconsistencies if there's some data loss somewhere.

Just to be clear: does it happen only after you manually changed the content of the database, removing items from the cinder.backups table?
Luigi,

It happens without manually touching the DB at all. If I do the following:

1. Create a volume (test-volume)
2. Create a volume backup of this volume (test-backup)
3. Create a backup-export of this backup (test-backup)
4. Do a cinder backup-list; I see the backup (test-backup)
5. Run a backup-import using the exported data of test-backup; it fails, saying the backup already exists
6. Do a cinder backup-list; I DO NOT see the backup (this is the BZ listed here: https://bugzilla.redhat.com/show_bug.cgi?id=1802263)
7. Run a backup-import using the exported data of test-backup; it succeeds
8. Do a cinder backup-list; I see the backup (test-backup)

If I just re-run the backup-import multiple times, it alternates between failing and succeeding, and the backup alternates between showing up in "cinder backup-list" and disappearing from the list, until I eventually hit:

ERROR: BackupLimitExceeded: Maximum number of backups allowed (10) exceeded (HTTP 413) (Request-ID: req-803e300c-4702-4a99-86af-0a6c587e5462)
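The alternation can be driven in a loop, e.g. (a sketch; assumes SERVICE and URL hold the backup_service and backup_url values printed by cinder backup-export for test-backup):

$ for i in $(seq 1 6); do cinder backup-import "$SERVICE" "$URL"; cinder backup-list; done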
Thanks. I reproduced it. At first sight, I would say it is a consequence (a corollary) of bug 1802263, because that bug causes the "backups" resource in the quota_usages db table to be increased but never decreased, due to the wrong removal. When the other bug is fixed and the quota is updated correctly, this one may disappear as well; I'm not sure whether any other steps would still trigger it. Waiting for a developer's opinion.
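If that is the cause, the drift should be visible by comparing the usage row against the actual backup rows, e.g. (a sketch, run against the cinder database on a controller; access method and credentials vary by deployment, and the project ID is the one from comment #0):

$ mysql cinder -e "SELECT resource, in_use, reserved FROM quota_usages WHERE project_id='accdc96fac254b2d95aa3a866c8969b0' AND resource='backups';"
$ mysql cinder -e "SELECT COUNT(*) FROM backups WHERE project_id='accdc96fac254b2d95aa3a866c8969b0' AND deleted=0;"

After a few export/import cycles, in_use would be expected to exceed the actual row count.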
No work or solution yet; moving to z3.
It's not that quota-usage doesn't work, but that the call is passing the wrong parameter. The cinder quota-usage command must be given the project's ID, not its name:

$ cinder help quota-usage
usage: cinder quota-usage <tenant_id>

Lists quota usage for a tenant.

Positional Arguments:
  <tenant_id>  ID of tenant for which to list quota usage.

So in this case it should have been:

$ cinder quota-usage accdc96fac254b2d95aa3a866c8969b0
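To avoid looking up the ID by hand, it can be resolved inline (assuming the project is named admin):

$ cinder quota-usage $(openstack project show admin -f value -c id)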
I think Luigi's comment #3 is correct, and this has been addressed by the fix for BZ #1802263, so I'm setting the fixed-in version to openstack-cinder-15.4.0-1.20221003203219.58f0e73.el8ost and moving this BZ to MODIFIED. It makes sense for this to be tested separately from BZ #1802263, or it could be closed as a duplicate of that bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.9 bug fix and enhancement advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8795