Bug 1042413 - [RFE][neutron]: used resource count in tenant quotas
Summary: [RFE][neutron]: used resource count in tenant quotas
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: RHOS Maint
QA Contact: Ofer Blaut
URL: https://blueprints.launchpad.net/neut...
Whiteboard: upstream_milestone_next upstream_stat...
Depends On:
Blocks:
 
Reported: 2013-12-12 22:11 UTC by RHOS Integration
Modified: 2016-04-26 13:42 UTC
4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-19 10:29:58 UTC
Target Upstream Version:



Description RHOS Integration 2013-12-12 22:11:44 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/neutron/+spec/tenant-used-quotas.

Description:

Add used resource counts when retrieving quota information for a tenant.

Currently, there is no API call to get information on used resources.
The goal is to return the count of resources a tenant is using.

There is already a way to show a tenant's quotas, and the payload contains these values:

* floatingip
* network
* port
* router
* security_group
* security_group_rule
* subnet
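
As a rough illustration, the existing quota payload has the shape sketched below. The values are made-up examples, and the assumption that the Neutron v2.0 API wraps the fields under a top-level "quota" key reflects the API as I understand it, not something stated in this RFE:

```python
# Illustrative shape of the existing Neutron quota-show response
# (example values; assumes the v2.0 API's top-level "quota" wrapper).
quota_response = {
    "quota": {
        "floatingip": 50,
        "network": 10,
        "port": 50,
        "router": 10,
        "security_group": 10,
        "security_group_rule": 100,
        "subnet": 10,
    }
}

# Each key is a per-tenant limit; the RFE notes there is no companion
# field reporting how many of each resource are actually in use.
print(sorted(quota_response["quota"]))
```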

Every resource can be counted, so I propose adding these values to the quotas payload:

* used_floatingips
* used_networks
* used_ports
* used_routers
* used_security_group_rules
* used_security_groups
* used_subnets

This would make it easier to check a tenant's resource usage.
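
The proposal above can be sketched as a small helper that merges usage counts into the quota payload. The function name, the `usage` mapping, and the derivation of the plural `used_*` keys by appending "s" are illustrative assumptions of mine; they are not part of any released Neutron API:

```python
# Sketch of the proposed extension: derive "used_<resource>s" keys from
# the singular resource names already present in the quota payload.
# Helper and key names are illustrative only.

def add_used_counts(quota, usage):
    """Return a copy of `quota` extended with used_* counts.

    `usage` maps singular resource names (as used in the quota payload,
    e.g. "network") to the number of resources the tenant consumes; the
    proposed plural keys ("used_networks", ...) append an "s", which
    works for all seven resource names listed in this RFE.
    """
    extended = dict(quota)
    for resource, count in usage.items():
        extended["used_" + resource + "s"] = count
    return extended


# Example: a client could compare limits against usage in one response.
quota = {"floatingip": 50, "network": 10, "security_group_rule": 100}
usage = {"floatingip": 3, "network": 2, "security_group_rule": 22}
print(add_used_counts(quota, usage))
```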

Specification URL (additional information):

None

Comment 2 Nir Yechiel 2014-03-05 08:35:07 UTC
Updating based on upstream status

Comment 3 Nir Yechiel 2015-03-19 10:29:58 UTC
This RFE was automatically opened to track the status of upstream development. At this point, we see no reason to keep tracking this in Red Hat Bugzilla, so it is being closed.

