Created attachment 1122253 [details]
Network configuration

Description of problem:
VM instances created in OpenStack are not configured with a DHCP address when they boot. We are using Neutron configured with OVS/VLANs. Horizon shows that an IP is assigned to the VM instances.

When a VM instance boots, its log shows:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc <up|down>
No lease, failing

Comparing the Neutron configuration with a working configuration from a prior release shows that there is no patch-port linkage between br-tenant and br-int on either the compute or controller nodes; that is, there are no phy-br-tenant or int-br-tenant ports on br-tenant and br-int respectively.

I manually created and connected these patch ports using the following commands:

ovs-vsctl add-port br-tenant phy-br-tenant -- set Interface phy-br-tenant type=patch options:peer=int-br-tenant
ovs-vsctl add-port br-int int-br-tenant -- set Interface int-br-tenant type=patch options:peer=phy-br-tenant

I then rebooted the controllers and computes, but the VMs are still not getting DHCP addresses.

Version-Release number of selected component (if applicable):
OSP Director 8 Beta 5

How reproducible:
See below.

Steps to Reproduce:
1. Deploy OpenStack using OSP Director 8 Beta 5 with OVS/VLANs for the Neutron config.
2. Create a VM instance in OpenStack.
3. Note that the instance does not successfully get a DHCP address.
4. Run "ovs-vsctl show" on any controller or compute node.
5. Note the initial problem: br-int is not connected to br-tenant.

Actual results:
VM instances do not get DHCP addresses on boot.

Expected results:
VM instances should get DHCP addresses on boot.

Additional info:
From a compute node:

[root@overcloud-novacompute-2 ~]# ovs-vsctl show
8a8eef86-e873-4d1b-abc4-0cf567460aac
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tenant
        Port "bond0"
            Interface "bond0"
        Port "vlan140"
            tag: 140
            Interface "vlan140"
                type: internal
        Port br-tenant
            Interface br-tenant
                type: internal
    Bridge "br-bond1"
        Port "bond1"
            Interface "bond1"
        Port "br-bond1"
            Interface "br-bond1"
                type: internal
    ovs_version: "2.4.0"

Command used to deploy:

openstack overcloud deploy -t {} --templates ~/pilot/templates/overcloud \
  -e ~/pilot/templates/network-environment.yaml \
  -e ~/pilot/templates/overcloud/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  --control-flavor controller --compute-flavor compute \
  --ceph-storage-flavor storage --swift-storage-flavor storage --block-storage-flavor storage \
  --neutron-public-interface bond1 --neutron-network-type vlan --neutron-disable-tunneling \
  --os-auth-url xxx --os-project-name xxx --os-user-id xxx --os-password xxx \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
  --ntp-server 0.centos.pool.ntp.org \
  --neutron-network-vlan-ranges datacentre:201:220

See attached for the network config.
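For reference, the int-br-tenant/phy-br-tenant patch ports are normally created automatically by neutron-openvswitch-agent from its bridge_mappings setting rather than by hand, so their absence suggests br-tenant is not listed in that mapping. A quick way to check on a node (a sketch only; the exact config file paths vary between releases, so treat the paths below as assumptions):

# Show the bridge mapping the OVS agent is using (path is an assumption;
# some releases use /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini instead)
grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
# Show the VLAN ranges the ML2 plugin allows per physical network
grep network_vlan_ranges /etc/neutron/plugin.ini
# The physical network name must match between the two options; if the
# tenant bridge is missing from bridge_mappings, the agent never creates
# the patch ports between it and br-int.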
Chris, what is the latest beta for OSP and OSP-d that worked? (passed our sanity tests)
I see from your nic_configs that you have a br-tenant, but on your command line you pass only datacentre, which is set to use VLANs 201:220. Which bridge and VLANs do you expect your tenants to be using?
I doubt it has ever worked. Our sanity test evidently does not test for network connectivity of OpenStack VMs.
Steve, I expect Neutron to use br-tenant with VLANs 201-220. Do I need to change the parameter passed to the "openstack overcloud deploy" command to --neutron-network-vlan-ranges br_tenant:201:220?
Ah, I see some changes that I need to make:

--neutron-network-vlan-ranges physint:201:220,physext
--neutron-bridge-mappings physint:br-tenant

Will try a redeploy with the above.
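If I am reading the parameter mapping right (an assumption on my part; I have not traced it through the templates), those flags should land on the overcloud nodes roughly as:

bridge_mappings = physint:br-tenant              (OVS agent, from --neutron-bridge-mappings)
network_vlan_ranges = physint:201:220,physext    (ML2 plugin, from --neutron-network-vlan-ranges)

The physical network name ("physint") has to match in both options; the original datacentre:201:220 range was tied to the physical network that maps to br-ex, which would explain why br-tenant never got patched into br-int.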
Thanks much for the help! With the above changes, my instances are now getting DHCP addresses, and I can ping into the netns on the controller nodes.
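In case it helps anyone else, this is the kind of check meant by pinging into the netns (the network ID and instance IP below are placeholders):

# On a controller node, list the DHCP namespaces for tenant networks
ip netns list
# Ping the instance's fixed IP from inside the tenant network's namespace
ip netns exec qdhcp-<network-id> ping -c 3 <instance-ip>
# "ovs-vsctl show" on the nodes should now also list the int-br-tenant and
# phy-br-tenant patch ports, created by the OVS agent itself.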