Description of problem:
When creating isolated networks for the overcloud, there is no way to specify what range of IP addresses the hosts should be assigned.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates 0.8.6-2.el7ost (any poodle/puddle cut before June 16, 2015)

How reproducible:
100%

Steps to Reproduce:
1. Enable network isolation
2.
3.

Actual results:
There is no way to set the IP address range for isolated networks.

Expected results:
You should be able to set an IP address range for the isolated networks.

Additional info:
We just merged a patch downstream to enable this functionality. We need to test a deployment with multiple networks and make sure that all the assigned IPs fall within the specified ranges. This can be done by adding the following to the environment file where we enable isolated networks (note that each AllocationPools range must fall within the matching NetCidr):

parameter_defaults:
  InternalApiNetCidr: 172.16.2.0/24
  InternalApiAllocationPools: [{'start': '172.16.2.100', 'end': '172.16.2.200'}]
  StorageNetCidr: 172.16.1.0/24
  StorageAllocationPools: [{'start': '172.16.1.100', 'end': '172.16.1.200'}]
  StorageMgmtNetCidr: 172.16.3.0/24
  StorageMgmtAllocationPools: [{'start': '172.16.3.100', 'end': '172.16.3.200'}]
  TenantNetCidr: 172.16.0.0/24
  TenantAllocationPools: [{'start': '172.16.0.100', 'end': '172.16.0.200'}]
  ExternalNetCidr: 10.0.0.0/24
  ExternalAllocationPools: [{'start': '10.0.0.100', 'end': '10.0.0.200'}]
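Before deploying, it is worth machine-checking that every AllocationPools range actually sits inside its network's CIDR, since Heat will not catch a mismatch until deploy time. A minimal sketch using only the Python standard library; the `nets` mapping is a hypothetical restatement of the parameters above with consistent CIDR/pool pairs:

```python
import ipaddress

# Hypothetical copy of the parameter_defaults above (CIDR, list of pools).
nets = {
    "InternalApi": ("172.16.2.0/24", [("172.16.2.100", "172.16.2.200")]),
    "Storage":     ("172.16.1.0/24", [("172.16.1.100", "172.16.1.200")]),
    "StorageMgmt": ("172.16.3.0/24", [("172.16.3.100", "172.16.3.200")]),
    "Tenant":      ("172.16.0.0/24", [("172.16.0.100", "172.16.0.200")]),
    "External":    ("10.0.0.0/24",   [("10.0.0.100", "10.0.0.200")]),
}

def pools_in_cidr(cidr, pools):
    """Return True if every pool's start and end address lies inside the CIDR
    and each pool is a non-empty range (start <= end)."""
    net = ipaddress.ip_network(cidr)
    return all(ipaddress.ip_address(start) in net and
               ipaddress.ip_address(end) in net and
               ipaddress.ip_address(start) <= ipaddress.ip_address(end)
               for start, end in pools)

for name, (cidr, pools) in nets.items():
    assert pools_in_cidr(cidr, pools), f"{name}: pool outside {cidr}"
```

The same check would flag, for example, a 172.16.2.x pool declared under a 172.17.0.0/24 CIDR before the deploy is attempted.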
I just discovered a bug that will prevent this from working with the external net. We can remove the ExternalAllocationPools definition from the above and test the other networks, or wait for a new poodle to get cut that includes this patch: https://review.openstack.org/#/c/192349/
I need to get more info about the steps to reproduce.
This was fixed when we added the AllocationPools to the network environment file. I have validated it, and all deploys are now using the <network>AllocationPools parameters.

If you are deploying via virt (and just using the -e includes for network-isolation.yaml and single-nic-with-vlans.yaml), you should see that the IP addresses of the overcloud begin at .4. There are some problems with virt deploys, so it doesn't need to get to CREATE_COMPLETE; it only needs to get far enough that you can log in to the overcloud nodes and see that their IPs on the isolated networks (not the ctlplane) start at .4 (because of the default AllocationPools defined in the network definitions in the 'network' directory of T-H-T).

If you are deploying on bare metal, something similar to this should appear in your network-environment.yaml (with different IPs). The IPs selected for the overcloud should start at .10, since all the AllocationPools start at .10:

parameter_defaults:
  # Customize the IP subnets to match the local environment
  InternalApiNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.8.148.0/24
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  ExternalAllocationPools: [{'start': '10.8.148.10', 'end': '10.8.148.200'}]
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 204
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 205
  ExternalNetworkVlanID: 104
  # Specify the gateway on the external network.
  ExternalInterfaceDefaultRoute: 10.8.148.254
  # Customize bonding options
  BondInterfaceOvsOptions: "bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true"

I have also validated that this is working on bare metal.
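The ".10 starting address" expectation above can be checked mechanically from the pool definitions. A minimal sketch (plain Python, stdlib only); the `pools` dict is a hypothetical copy of the AllocationPools parameters from the bare-metal example:

```python
import ipaddress

# Hypothetical copies of the AllocationPools parameters above.
pools = {
    "InternalApi": [{"start": "172.17.0.10", "end": "172.17.0.200"}],
    "Storage":     [{"start": "172.18.0.10", "end": "172.18.0.200"}],
    "StorageMgmt": [{"start": "172.19.0.10", "end": "172.19.0.200"}],
    "Tenant":      [{"start": "172.16.0.10", "end": "172.16.0.200"}],
    "External":    [{"start": "10.8.148.10", "end": "10.8.148.200"}],
}

def lowest_pool_address(pool_list):
    """First address Neutron can assign: the smallest pool start."""
    return min(ipaddress.ip_address(p["start"]) for p in pool_list)

for name, pool_list in pools.items():
    first = lowest_pool_address(pool_list)
    # Last octet of the lowest assignable address should be 10.
    assert int(first) & 0xFF == 10, f"{name} should start at .10, got {first}"
```

For the virt case, the same check with the default T-H-T pools would expect a last octet of 4 instead of 10.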
Verified on RHEL-OSP director puddle 7.0 RC puddle 2015-06-29-1

[stack@instack ~]$ . stackrc
[stack@instack ~]$ neutron subnet-list
+--------------------------------------+---------------------+---------------+------------------------------------------------+
| id                                   | name                | cidr          | allocation_pools                               |
+--------------------------------------+---------------------+---------------+------------------------------------------------+
| 236a2c40-4f1f-449f-9435-41154714ba07 | storage_subnet      | 172.16.1.0/24 | {"start": "172.16.1.4", "end": "172.16.1.250"} |
| 2bc81643-3d6a-4500-b25e-be13e6dbd6dd | internal_api_subnet | 172.16.2.0/24 | {"start": "172.16.2.4", "end": "172.16.2.250"} |
| 2d036047-c1da-46b5-b7a2-d2ca9fd14b2c | storage_mgmt_subnet | 172.16.3.0/24 | {"start": "172.16.3.4", "end": "172.16.3.250"} |
| 52b5cb2d-8622-4c44-9f43-8008105604d2 |                     | 192.0.2.0/24  | {"start": "192.0.2.5", "end": "192.0.2.24"}    |
| 67def19c-dd9f-4e4c-be3b-05288ce89e50 | tenant_subnet       | 172.16.0.0/24 | {"start": "172.16.0.4", "end": "172.16.0.250"} |
| 7f12b52e-b348-42be-b202-0bbd918dfe7e | external_subnet     | 10.0.0.0/24   | {"start": "10.0.0.4", "end": "10.0.0.250"}     |
+--------------------------------------+---------------------+---------------+------------------------------------------------+
[stack@instack ~]$ rpm -qa | grep openstack-tripleo-heat
openstack-tripleo-heat-templates-0.8.6-22.el7ost.noarch
[stack@instack ~]$
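If this verification needs to be repeated, the allocation_pools column can be pulled out of `neutron subnet-list` output like the above and checked in code rather than by eye. A minimal sketch, assuming the table layout shown above; `parse_row` is a hypothetical helper:

```python
import json

# One row of `neutron subnet-list` output, taken from the verification above.
row = ('| 236a2c40-4f1f-449f-9435-41154714ba07 | storage_subnet | '
       '172.16.1.0/24 | {"start": "172.16.1.4", "end": "172.16.1.250"} |')

def parse_row(line):
    """Split one subnet-list table row into (name, cidr, pool dict).
    Assumes the four-column layout shown above; the pool cell is JSON."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    _, name, cidr, pool = cells
    return name, cidr, json.loads(pool)

name, cidr, pool = parse_row(row)
# Default T-H-T pools begin at .4, matching the verified output.
assert pool["start"].endswith(".4"), f"{name}: unexpected pool start"
```

This only handles rows whose allocation_pools cell contains a single pool; multi-pool subnets would need the cell split before `json.loads`.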
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2015:1549