Bug 1305651 - OpenStack VMs don't get DHCP addresses on boot
Summary: OpenStack VMs don't get DHCP addresses on boot
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assignee: Angus Thomas
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1261979
 
Reported: 2016-02-08 20:03 UTC by Chris Dearborn
Modified: 2016-04-18 06:58 UTC (History)
15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-09 21:29:03 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Network configuration (4.36 KB, application/x-gzip)
2016-02-08 20:03 UTC, Chris Dearborn

Description Chris Dearborn 2016-02-08 20:03:50 UTC
Created attachment 1122253 [details]
Network configuration

Description of problem:
VM instances created in OpenStack are not configured with a DHCP address when they boot.

We are using Neutron configured with OVS/VLANs. Horizon shows that an IP is assigned to the VM instance. When the instance boots, its console log shows:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc <up|down>
No lease, failing

Comparing this Neutron configuration with a working configuration from a prior release shows that there is no patch-port linkage between br-tenant and br-int on either the compute or controller nodes; that is, there are no phy-br-tenant or int-br-tenant ports on br-tenant and br-int, respectively.

I manually created and connected these patch ports using the following commands:

ovs-vsctl add-port br-tenant phy-br-tenant -- set Interface phy-br-tenant type=patch options:peer=int-br-tenant
ovs-vsctl add-port br-int int-br-tenant -- set Interface int-br-tenant type=patch options:peer=phy-br-tenant

I then rebooted the controllers and computes, but the VMs are still not getting DHCP addresses.
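For reference, a quick sanity check that the manually created patch ports took effect (a sketch assuming the standard ovs-vsctl CLI is available on each node):

```shell
# Confirm each bridge now carries its patch port
ovs-vsctl list-ports br-tenant | grep phy-br-tenant
ovs-vsctl list-ports br-int | grep int-br-tenant

# Confirm the two patch ports reference each other as peers
ovs-vsctl get Interface phy-br-tenant options:peer
ovs-vsctl get Interface int-br-tenant options:peer
```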

Version-Release number of selected component (if applicable):
OSP Director 8 Beta 5

How reproducible:
See below.

Steps to Reproduce:
1. Deploy OpenStack using OSP Director 8 Beta 5 with OVS/VLANs for the Neutron config.
2. Create a VM instance in OpenStack.
3. Note that the instance does not successfully get a DHCP address.
4. Run "ovs-vsctl show" on any controller or compute.
5. Note the initial problem: br-int is not connected to br-tenant.

Actual results:
VM instances do not get DHCP addresses on boot.

Expected results:
VM instances should get DHCP addresses on boot.


Additional info:
From a compute node:
[root@overcloud-novacompute-2 ~]# ovs-vsctl show
8a8eef86-e873-4d1b-abc4-0cf567460aac
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tenant
        Port "bond0"
            Interface "bond0"
        Port "vlan140"
            tag: 140
            Interface "vlan140"
                type: internal
        Port br-tenant
            Interface br-tenant
                type: internal
    Bridge "br-bond1"
        Port "bond1"
            Interface "bond1"
        Port "br-bond1"
            Interface "br-bond1"
                type: internal
    ovs_version: "2.4.0"

Command used to deploy:
openstack overcloud deploy -t {} --templates ~/pilot/templates/overcloud \
  -e ~/pilot/templates/network-environment.yaml \
  -e ~/pilot/templates/overcloud/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  --control-flavor controller --compute-flavor compute \
  --ceph-storage-flavor storage --swift-storage-flavor storage --block-storage-flavor storage \
  --neutron-public-interface bond1 --neutron-network-type vlan --neutron-disable-tunneling \
  --os-auth-url xxx --os-project-name xxx --os-user-id xxx --os-password xxx \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
  --ntp-server 0.centos.pool.ntp.org \
  --neutron-network-vlan-ranges datacentre:201:220

See attached for network config.

Comment 2 arkady kanevsky 2016-02-08 20:43:59 UTC
Chris,
what is the latest beta for OSP and OSP-d that worked? (passed our sanity tests)

Comment 3 Steve Reichard 2016-02-08 21:09:37 UTC
I see from your nic_configs that you have a br-tenant, but on your command line you pass only the datacentre bridge, which is set to use VLANs 201:220. What bridge and VLANs do you expect your tenants to be using?

Comment 4 Chris Dearborn 2016-02-08 21:25:56 UTC
I doubt it has ever worked.  Our sanity test evidently does not test for network connectivity of OpenStack VMs.

Comment 5 Chris Dearborn 2016-02-08 21:29:46 UTC
Steve,

I expect Neutron to use br-tenant:201-220.  Do I need to change the parameters passed to the "openstack overcloud deploy" command to:

--neutron-network-vlan-ranges br_tenant:201:220

?

Comment 6 Chris Dearborn 2016-02-08 22:32:06 UTC
Ah, I see some changes that I need to make:

--neutron-network-vlan-ranges physint:201:220,physext --neutron-bridge-mappings physint:br-tenant

Will try a redeploy with the above.
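For anyone hitting the same issue: the flags above should surface on the overcloud nodes as a bridge_mappings entry in the Neutron OVS agent config. A hedged post-deploy check (the exact .ini path varies between releases; the path below is typical of the Liberty-era ML2/OVS layout):

```shell
# Verify the physical-network-to-bridge mapping landed in the agent config
grep -r bridge_mappings /etc/neutron/plugins/ml2/
# expect something like: bridge_mappings = physint:br-tenant

# Verify the agent then wired the patch ports automatically
ovs-vsctl show | grep -A 2 'Port phy-br-tenant'
```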

Comment 7 Chris Dearborn 2016-02-09 21:29:03 UTC
Thanks much for the help!  With the above changes, my instances are now getting DHCP addresses, and I can ping into the netns on the controller nodes.
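The netns check mentioned above can be sketched as follows (run on a controller; the qdhcp namespace name and instance IP are illustrative placeholders, not values from this deployment):

```shell
# List the DHCP agent namespaces; one qdhcp-<network-uuid> per tenant network
ip netns

# Inspect the DHCP port inside the namespace, then ping the instance from it
ip netns exec qdhcp-<network-uuid> ip addr
ip netns exec qdhcp-<network-uuid> ping -c 3 <instance-ip>
```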

