Bug 1560422

Summary: The global amount of Octavia loadbalancers is constrained by the service project quotas
Product: Red Hat OpenStack Reporter: Alexander Stafeyev <astafeye>
Component: openstack-tripleo-common    Assignee: Brent Eagles <beagles>
Status: CLOSED ERRATA QA Contact: Alexander Stafeyev <astafeye>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 13.0 (Queens)    CC: amuller, bcafarel, beagles, cgoncalves, ihrachys, jamsmith, jschluet, lpeer, majopela, mburns, nbarcet, nyechiel, oblaut, slinaber
Target Milestone: rc    Keywords: Triaged
Target Release: 13.0 (Queens)   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: openstack-tripleo-common-8.6.1-18.el7ost Doc Type: If docs needed, set a value
Doc Text:
Octavia does not scale to practical workloads because the default quotas configured for the "service" project limit the number of Octavia load balancers that can be created in the overcloud. To mitigate this problem, as the overcloud admin user, set the required quotas to unlimited or a sufficiently large value. For example, run the following commands on the undercloud:
# source ~/overcloudrc
# openstack quota set --cores -1 --ram -1 --ports -1 --instances -1 --secgroups -1 service
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-06-27 13:48:49 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
example when we get blocked by security groups quota for the service project none

Description Alexander Stafeyev 2018-03-26 07:02:50 UTC
Description of problem:
LB creation creates neutron ports, security groups, and instances (consuming cores, RAM, keypairs, etc.).

These objects, created as part of LB create command execution, must not be overlooked when planning a cloud that includes Octavia.

Possible solutions (one of three):
a- Agreed-upon quota numbers
b- Unlimited quota numbers
c- Documentation of the effect of LB creation on neutron and nova resources, so the operator can account for it in cloud planning and service quota configuration.

Comment 1 Assaf Muller 2018-05-07 14:04:53 UTC
Let's double check that instances and other quotas are taken from the 'services' tenant, and check what the default quota is in that tenant.

Comment 3 Nir Magnezi 2018-05-08 13:45:35 UTC
I was able to reproduce this issue, and indeed we are being blocked by the 'service' project quota.

Octavia creates Amphorae (service VMs) under an operator-configured project (tenant). In TripleO, we currently use the 'service' project by default.

Since booting Amphorae consumes the project quota, this effectively results in a very low (around 10) global upper limit on the number of loadbalancers.

This is contrary to how quotas are normally used in OpenStack.

To conclude:
When user 'test' creates a loadbalancer in project 'xyz':
1. Loadbalancer-related quota is consumed for project 'xyz' (expected).
2. Port, core, instance, RAM, and security group quotas are consumed for project 'service', which will eventually prevent users from *any* project from creating loadbalancers, even if they have not fully consumed their own loadbalancer-related project quota.

To fix this:
We need to treat the 'service' project as a system project, so Octavia VMs are not constrained by project quotas.
We need to set '-1' for the following quotas (in the 'service' project only):
1. ports
2. cores
3. instances
4. ram
5. security groups

Comment 4 Nir Magnezi 2018-05-08 13:47:18 UTC
Created attachment 1433219 [details]
example when we get blocked by security groups quota for the service project

Comment 5 Nir Magnezi 2018-05-08 13:52:08 UTC
Current default quota numbers:

ports 500
cores 20
instances 10
ram 51200
security groups 10

Comment 18 Brent Eagles 2018-05-18 14:50:48 UTC
To resolve after deployment, run the following command

Comment 35 errata-xmlrpc 2018-06-27 13:48:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086