Description of problem:
=======================
Octavia creates Amphorae (service VMs) under an operator-configured project (tenant). In TripleO, we currently use the 'service' project by default.

Each Amphora instance has its own tap device in a shared management subnet named lb-mgmt-subnet. That subnet lives under the 'service' project and cannot be accessed by non-privileged users. TripleO creates that subnet during the Octavia deployment process.

Currently, it is created as a class C subnet with allocation_pools that effectively limit the number of addresses in that subnet to 150. This means, globally for a given OpenStack deployment:

- 150 Amphorae ==> 150 loadbalancers if the Amphora topology is SINGLE
- 150 Amphorae ==> 75 loadbalancers if the Amphora topology is ACTIVE_STANDBY

Here's how it currently looks (snipped):

+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.199.50-192.168.199.200       |
| cidr              | 192.168.199.0/24                     |
| created_at        | 2018-05-07T09:14:36Z                 |
| enable_dhcp       | True                                 |
| gateway_ip        | 192.168.199.1                        |
| ip_version        | 4                                    |
| name              | lb-mgmt-subnet                       |
+-------------------+--------------------------------------+

Version-Release number of selected component (if applicable):
=============================================================
OSP13 2018-05-10.3
openstack-tripleo-common-8.6.1-9

How reproducible:
=================
100%

Steps to Reproduce:
1. Deploy OpenStack with Octavia via TripleO

Actual results:
===============
As mentioned above: the lb-mgmt-subnet allocation pool caps the deployment at roughly 150 Amphorae.

Expected results:
=================
We should use a much larger subnet, such as a class B (/16), so that the global number of Octavia loadbalancers is not constrained to such a low number. A sketch of what that could look like follows below.
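For illustration only, here is a minimal sketch of how such a larger subnet could be created manually with the openstack CLI. The 172.24.0.0/16 range, gateway, and pool bounds are arbitrary examples, not proposed defaults; 'lb-mgmt-net' is the Octavia management network name used by TripleO:

# Hypothetical /16 management subnet; range and pool bounds are examples only
$ openstack subnet create \
    --network lb-mgmt-net \
    --subnet-range 172.24.0.0/16 \
    --gateway 172.24.0.1 \
    --allocation-pool start=172.24.0.50,end=172.24.255.200 \
    lb-mgmt-subnet

An allocation pool like the one above provides roughly 65,000 usable addresses, i.e. on the order of 65,000 SINGLE or 32,000 ACTIVE_STANDBY loadbalancers, instead of 150/75.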
*** This bug has been marked as a duplicate of bug 1577612 ***