Bug 1884455 - Quota is not honored when multiple requests are fired in parallel (or in bulk)
Summary: Quota is not honored when multiple requests are fired in parallel (or in bulk)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: Rodolfo Alonso
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-02 03:43 UTC by Michele Valsecchi
Modified: 2023-12-15 19:38 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-09 07:53:19 UTC
Target Upstream Version:
Embargoed:




Links
Launchpad 1862050 (last updated 2020-10-08 11:53:46 UTC)
Red Hat Issue Tracker OSP-17552 (last updated 2022-07-11 14:23:34 UTC)

Description Michele Valsecchi 2020-10-02 03:43:38 UTC
Description of problem:
Quota is not honored for bulk creation. 

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform release 13.0.12 (Queens)

How reproducible:
100%

Steps to Reproduce:

1. Create a project, attach a user to it, and assign the project a quota for Floating IPs

~~~
 $ source overcloudrc
 $ openstack project create test-quota
 $ openstack user create --project test-quota quota-tester --password 123456
 $ openstack quota set --floating-ips 10 test-quota

 $ cp overcloudrc quota-tester.rc
 $ sed -i 's/OS_USERNAME=admin/OS_USERNAME=quota-tester/' quota-tester.rc 
 $ sed -i 's/OS_PASSWORD=XXXXXXXXX*/OS_PASSWORD=123456/' quota-tester.rc 
 $ vi quota-tester.rc 
 $ source quota-tester.rc 
~~~
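
As an optional sanity check (illustrative, not part of the original steps), confirm the edited rc file actually authenticates before continuing; depending on your overcloudrc you may also need to adjust OS_PROJECT_NAME:

~~~
 $ source quota-tester.rc
 $ openstack token issue -f value -c project_id   # a valid token confirms the credentials; the project ID should match test-quota
~~~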

2. Confirm the quota is set
~~~
(overcloud) [stack@undercloud-0 ~]$ openstack quota list --network
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
| Project ID                       | Floating IPs | Networks | Ports | RBAC Policies | Routers | Security Groups | Security Group Rules | Subnets | Subnet Pools |
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
| 9fc8427e61c74d88b71d72916bddd294 |           10 |      100 |   500 |            10 |      10 |              10 |                  100 |     100 |           -1 |
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
~~~

3. Create some floating IPs

~~~
 $ cat test.sh
#!/bin/sh
curl -s -X POST -H "Content-Type: application/json" -H "X-Auth-Token: XXXX" -d '
{
	"floatingip": {
		"floating_network_id": "YYYY",
		"project_id": "ZZZZ"
	}
}' http://X.X.X.X:9696/v2.0/floatingips -i -o test$1.log
 $  for i in `seq 1 30`; do     sh test.sh  ${i} & done

~~~
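
For reference, a sketch of how the redacted placeholders in test.sh might be filled in and the results checked afterwards (illustrative commands only; the token can be issued from either the admin or the quota-tester credentials):

~~~
 $ TOKEN=$(openstack token issue -f value -c id)                         # value for XXXX
 $ NET_ID=$(openstack network list --external -f value -c ID | head -1)  # value for YYYY
 $ PROJECT_ID=$(openstack project show test-quota -f value -c id)        # value for ZZZZ
 # after running the parallel test, count how many requests actually got a 201 Created
 $ grep -l "HTTP/1.1 201" test*.log | wc -l
~~~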

4. Confirm the quota is exceeded: all 30 floating IPs are created despite a quota of 10
~~~
(overcloud) [stack@undercloud-0 ~]$ openstack floating ip list -f value -c ID | wc -l
30 <===
~~~

4.1 This happens when using the openstack CLI as well: 13 out of 30 floating IPs are created despite a quota of 10
~~~
(overcloud) [stack@undercloud-0 ~]$ openstack floating ip delete $(openstack floating ip list -f value -c ID)
(overcloud) [stack@undercloud-0 ~]$ openstack floating ip list -f value -c ID | wc -l
0
(overcloud) [stack@undercloud-0 ~]$  for i in `seq 1 30`; do openstack floating ip create YYYY & done
Error while executing command: HttpException: 409, {"NeutronError": {"message": "Quota exceeded for resources: ['floatingip'].", "type": "OverQuota", "detail": ""}}
Error while executing command: HttpException: 409, {"NeutronError": {"message": "Quota exceeded for resources: ['floatingip'].", "type": "OverQuota", "detail": ""}}
... omitted for brevity ...

(overcloud) [stack@undercloud-0 ~]$ openstack floating ip list -f value -c ID | wc -l
13 <===
~~~

Actual results:
The quota is exceeded.

Expected results:
The quota should be honored.

Additional info:

Comment 2 Rodolfo Alonso 2020-10-08 11:53:46 UTC
Hello Michele:

This is a known behavior in OpenStack, not only in Neutron. I'll point you to [1].

The recommendation is to place a rate-limiting solution in front of the API endpoints to reduce (not eliminate) the impact a user can cause by making rapid-fire requests, such as with the script you mentioned:
  $ for i in `seq 1 30`; do openstack floating ip create YYYY & done

Neutron in particular "does not enforce quotas in such a way that a quota violation like this could never occur. The extent of the issue will vary greatly by deployment architecture, specifically the number of neutron workers that are deployed. If more workers are deployed, this becomes more probable." [2]

The Neutron community discussed the possibility of implementing a database-locking quota system, using the database engine to isolate transactions that update the same row storing the used-resource counters. However, that would have a negative impact on API performance, increasing response times, so the idea was discarded.

Therefore, if the customer wants to limit the impact of parallel requests from the same project/user on the quota limits, a rate-limiting system should be applied [3].
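
As a rough illustration of why spacing out the requests helps (same environment as the reproducer; commands assumed, not taken from the report): when the 30 requests are issued strictly one at a time, each request sees the committed usage, so the quota is expected to hold:

~~~
 $ for i in `seq 1 30`; do openstack floating ip create YYYY; done   # no '&', fully serialized
 $ openstack floating ip list -f value -c ID | wc -l                 # expected: 10
~~~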

Regards.

[1] https://bugs.launchpad.net/neutron/+bug/1862050
[2] https://bugs.launchpad.net/neutron/+bug/1862050/comments/5
[3] https://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#api-endpoint-rate-limiting

Comment 3 Michele Valsecchi 2020-10-09 03:05:11 UTC
Hi Rodolfo,

Thanks for the swift and detailed response. Performance was picked over consistency; that makes sense given the OSP architecture.

Given the circumstances I think this BZ can be closed.
I'll be talking with my customer regarding the necessity of implementing a rate-limiting system in order to address this issue.

Regards.

