Bug 1128325 - [RFE][cinder]: Verifying the Cinder Quota Usage
Summary: [RFE][cinder]: Verifying the Cinder Quota Usage
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: RFEs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: RHOS Maint
QA Contact:
URL: https://blueprints.launchpad.net/cind...
Whiteboard: upstream_milestone_none upstream_defi...
Depends On:
Blocks:
 
Reported: 2014-08-09 04:05 UTC by RHOS Integration
Modified: 2015-11-20 19:57 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-19 16:59:01 UTC
Target Upstream Version:
Embargoed:



Description RHOS Integration 2014-08-09 04:05:42 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/verifying-cinder-quota-usage.

Description:

Problem description
===================

        Cinder maintains a table called quota_usage in the Cinder database that records, for each tenant, the number of volumes in use and their total size. This record is what Cinder consults when verifying the tenant's quota. Recomputing it at scale (counting every volume owned by the tenant, summing the sizes, and repopulating the record) is relatively slow, and the record can end up disagreeing with what the tenant actually uses, either because of coding errors in Cinder or because volumes in the tenant are constantly being created, updated, and deleted.
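
        The drift is easy to observe by comparing the recorded usage with a
        live count. Below is a minimal sketch, not part of Cinder: it assumes
        SQLAlchemy, direct access to the Cinder database, and the usual table
        and column names (quota_usages, volumes, project_id, size, in_use),
        all of which may differ in a given deployment::

            from sqlalchemy import create_engine, text

            # Assumptions about the deployment: the Cinder DB connection string
            # and the tenant (project) whose usage we want to cross-check.
            DB_URL = "mysql+pymysql://cinder:secret@dbhost/cinder"
            TENANT = "<tenant-uuid>"

            engine = create_engine(DB_URL)
            with engine.connect() as conn:
                # Usage as recorded by Cinder (one row per resource).
                recorded = dict(conn.execute(
                    text("SELECT resource, in_use FROM quota_usages "
                         "WHERE project_id = :p AND deleted = 0"),
                    {"p": TENANT}).fetchall())

                # Usage recomputed from the volumes the tenant actually owns.
                count, gigabytes = conn.execute(
                    text("SELECT COUNT(*), COALESCE(SUM(size), 0) FROM volumes "
                         "WHERE project_id = :p AND deleted = 0"),
                    {"p": TENANT}).fetchone()

            print("volumes:   recorded=%s actual=%s"
                  % (recorded.get("volumes"), count))
            print("gigabytes: recorded=%s actual=%s"
                  % (recorded.get("gigabytes"), gigabytes))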

Proposed change
===============

        We propose a tool or script, outside of Cinder, that counts the volumes owned by each tenant and sums their sizes in order to verify that the tenant's quota usage record in the table is correct. Because the script can find quota problems before a limit is hit, it should make debugging much easier, especially at scale.
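
        As a rough illustration of the proposed check (a sketch under the same
        assumptions as the snippet above, not an existing tool), the script
        could recompute per-tenant usage from the volumes table and flag every
        quota usage row that disagrees::

            from sqlalchemy import create_engine, text

            # Assumed connection string for the Cinder database.
            DB_URL = "mysql+pymysql://cinder:secret@dbhost/cinder"
            engine = create_engine(DB_URL)

            with engine.connect() as conn:
                # Actual per-tenant usage, recomputed from the volumes table.
                actual = {
                    row.project_id: (row.cnt, row.gb)
                    for row in conn.execute(text(
                        "SELECT project_id, COUNT(*) AS cnt, "
                        "COALESCE(SUM(size), 0) AS gb "
                        "FROM volumes WHERE deleted = 0 GROUP BY project_id"))
                }

                # Usage as recorded by Cinder for the volume-related resources.
                recorded = {}
                rows = conn.execute(text(
                    "SELECT project_id, resource, in_use FROM quota_usages "
                    "WHERE deleted = 0 "
                    "AND resource IN ('volumes', 'gigabytes')"))
                for row in rows:
                    recorded.setdefault(row.project_id, {})[row.resource] = row.in_use

            # Report every tenant whose recorded usage disagrees with reality.
            for project_id, (cnt, gb) in sorted(actual.items()):
                rec = recorded.get(project_id, {})
                if rec.get("volumes") != cnt or rec.get("gigabytes") != gb:
                    print("MISMATCH %s: recorded=%s actual=(volumes=%s, gigabytes=%s)"
                          % (project_id, rec, cnt, gb))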

Specification URL (additional information):

None

