Bug 1164770 - On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network)
Keywords:
Status: CLOSED EOL
Alias: None
Product: RDO
Classification: Community
Component: distribution
Version: Juno
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: Juno
Assignee: RHOS Maint
QA Contact: Ofer Blaut
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-17 11:59 UTC by swamybabu
Modified: 2023-09-18 00:10 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-05-19 15:40:22 UTC
Embargoed:


Attachments
compute logs (14.78 MB, application/x-gzip), 2014-11-17 12:41 UTC, swamybabu
controller logs (13.94 MB, application/x-gzip), 2014-11-17 13:17 UTC, swamybabu
network node logs (10.83 MB, application/x-gzip), 2014-11-17 13:48 UTC, swamybabu

Description swamybabu 2014-11-17 11:59:39 UTC
Setup Details :

- The controller and network nodes are two VMware ESX VMs (each VM has two NICs, both connected to the management network 10.192.0.0/16).

- The compute node is a bare-metal machine with two 1 GbE NICs, both connected to the same management network (10.192.0.0/16).

Description of problem:

Instances are not getting a DHCP IP address when connected to a flat network.

Version-Release number of selected component (if applicable):

RDO juno

How reproducible:

Always

Steps to Reproduce:
1. Ran a packstack deployment with the attached answers.txt, then made the following modifications:

[a] With CentOS 7, packstack failed with an erlang installation failure. To get past this, I ran:
 yum --enablerepo=epel info erlang
[b] CONTROLLER NODE
To enable flat networks, updated /etc/neutron/plugins/ml2/ml2_conf.ini:
     * in [ml2], set type_drivers = flat
     * in [ml2_type_flat], set flat_networks = physnet1,physnet2
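For reference, a minimal sketch of how those two stanzas could look in /etc/neutron/plugins/ml2/ml2_conf.ini; the mechanism_drivers line is an assumption (the usual packstack/OVS default), not something taken from the attached answers.txt:

[ml2]
type_drivers = flat
mechanism_drivers = openvswitch    # assumption: default packstack OVS mechanism driver

[ml2_type_flat]
flat_networks = physnet1,physnet2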

[c] COMPUTE NODE
File: /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
Set bridge_mappings = physnet1:br-eno2. I created br-eno2 manually with "ovs-vsctl add-br br-eno2" and attached the second management interface to it by updating the corresponding ifcfg-eno2 and ifcfg-br-eno2 files.
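For completeness, a sketch of what the two ifcfg files for attaching eno2 to br-eno2 might look like; the exact contents are an assumption (the real files are in the attached /etc/ folder):

# /etc/sysconfig/network-scripts/ifcfg-br-eno2  (assumed sketch)
DEVICE=br-eno2
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eno2  (assumed sketch)
DEVICE=eno2
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eno2
BOOTPROTO=none
ONBOOT=yes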

[d] NETWORK NODE
We need to update bridge_mappings so that the DHCP agent serves the created network:
edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini and set bridge_mappings = physnet1:br-ens224
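Roughly, the network node then ends up with the following; the bridge/uplink commands and service names are assumptions that mirror what was done on the compute node:

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings = physnet1:br-ens224

# bridge created and uplinked manually (assumed, mirroring the compute node)
ovs-vsctl add-br br-ens224
ovs-vsctl add-port br-ens224 ens224

# restart the agents that consume the mapping
systemctl restart neutron-openvswitch-agent neutron-dhcp-agent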

2. Restarted the OpenStack services (openstack-service restart) on all the nodes.
3. As admin, created the following flat network:

neutron net-create flat-network1 --provider:network_type=flat --provider:physical_network=physnet1 

and marked it shared. Then added the subnet shown below (the commands are sketched after the listing):

[root@junovm1 ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+-------------+---------------+----------------------------------------------------+
| id                                   | name        | cidr          | allocation_pools                                   |
+--------------------------------------+-------------+---------------+----------------------------------------------------+
| fba450eb-2949-46da-bd3d-4fa06f870ff5 | mgmt subnet | 10.192.0.0/16 | {"start": "10.192.25.150", "end": "10.192.25.170"} |
+--------------------------------------+-------------+---------------+----------------------------------------------------+
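For reference, the commands used to mark the network shared and create that subnet were roughly along these lines (the gateway value is taken from the routing table shown further down):

neutron net-update flat-network1 --shared True
neutron subnet-create flat-network1 10.192.0.0/16 --name "mgmt subnet" \
    --gateway 10.192.0.1 \
    --allocation-pool start=10.192.25.150,end=10.192.25.170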

4. Deployed an instance using the above network.


Actual results:

[i] Found that the instance did not get any IP address from DHCP.
[ii] When I manually configure an IP from the mgmt subnet, the instance can reach the gateway and the internet.

Expected results:

[i] The instance should get an IP address from dnsmasq.


Additional info:

[ii] I can see that the DHCP namespace has been created and dnsmasq is running without any issues, but the gateway cannot be reached from inside the namespace.

[root@junovm2 ~(keystone_admin)]# ip netns 
qdhcp-a4ba619b-b7b4-4a62-9fd6-ffda2ecc751f
[root@junovm2 ~(keystone_admin)]# ip netns exec qdhcp-a4ba619b-b7b4-4a62-9fd6-ffda2ecc751f bash
[root@junovm2 ~(keystone_admin)]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
11: tape819b230-5a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:e7:9c:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.192.25.151/16 brd 10.192.255.255 scope global tape819b230-5a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fee7:9cd5/64 scope link 
       valid_lft forever preferred_lft forever
[root@junovm2 ~(keystone_admin)]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.192.0.1      0.0.0.0         UG    0      0        0 tape819b230-5a
10.192.0.0      0.0.0.0         255.255.0.0     U     0      0        0 tape819b230-5a
[root@junovm2 ~(keystone_admin)]# ping 10.192.0.1
PING 10.192.0.1 (10.192.0.1) 56(84) bytes of data.
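The ping above gets no replies. One check that should help narrow this down is to watch for DHCP traffic at each hop while the instance boots; interface and namespace names below are the ones from the outputs in this report:

# compute node: do the DHCP discovers leave via the physical NIC?
tcpdump -ni eno2 port 67 or port 68

# network node: do they arrive on the uplink and on the dnsmasq tap?
tcpdump -ni ens224 port 67 or port 68
ip netns exec qdhcp-a4ba619b-b7b4-4a62-9fd6-ffda2ecc751f tcpdump -ni tape819b230-5a port 67 or port 68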

[iii] Found that the network and compute nodes are not showing the int-br-* and phy-br-* veth devices when I executed ifconfig -a:

Compute node:
[root@c1 network-scripts]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:ec:7a:10 brd ff:ff:ff:ff:ff:ff
    inet 10.192.25.193/16 brd 10.192.255.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:feec:7a10/64 scope link 
       valid_lft forever preferred_lft forever
3: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether a0:36:9f:1e:99:cc brd ff:ff:ff:ff:ff:ff
4: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 00:25:90:ec:7a:11 brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether a0:36:9f:1e:99:ce brd ff:ff:ff:ff:ff:ff
6: enp130s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether a0:36:9f:1e:99:dc brd ff:ff:ff:ff:ff:ff
7: enp130s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether a0:36:9f:1e:99:de brd ff:ff:ff:ff:ff:ff
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether aa:0d:e4:91:4c:47 brd ff:ff:ff:ff:ff:ff
9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether d2:46:a1:c8:b1:42 brd ff:ff:ff:ff:ff:ff

10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:55:53:4b:2f:69 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
32: qbrab5445a6-00: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a0c4:61ff:fe3b:3b5b/64 scope link 
       valid_lft forever preferred_lft forever
36: qbr2e67cada-93: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::54f4:45ff:fe80:c04/64 scope link 
       valid_lft forever preferred_lft forever
40: br-eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:25:90:ec:7a:11 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6097:9ff:fee5:6641/64 scope link 
       valid_lft forever preferred_lft forever
45: qbr7c5ab97c-45: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30ed:cbff:fec2:6a3b/64 scope link 
       valid_lft forever preferred_lft forever
51: qbr0662c702-ee: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 06:51:9b:64:60:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d417:71ff:fe32:7a1e/64 scope link 
       valid_lft forever preferred_lft forever
52: qvo0662c702-ee: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 2a:cf:84:ca:57:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::28cf:84ff:feca:5770/64 scope link 
       valid_lft forever preferred_lft forever
53: qvb0662c702-ee: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr0662c702-ee state UP qlen 1000
    link/ether 06:51:9b:64:60:40 brd ff:ff:ff:ff:ff:ff
54: tap0662c702-ee: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr0662c702-ee state UNKNOWN qlen 500
    link/ether fe:16:3e:34:75:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe34:75ec/64 scope link 
       valid_lft forever preferred_lft forever
[root@c1 network-scripts]# ovs-vsctl show
60e9579b-859f-4851-b89a-6f345fe7d138
    Bridge br-int
        fail_mode: secure
        Port "qvo0662c702-ee"
            tag: 1
            Interface "qvo0662c702-ee"
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eno2"
            Interface "int-br-eno2"
                type: patch
                options: {peer="phy-br-eno2"}
    Bridge "br-eno2"
        Port "eno2"
            Interface "eno2"
        Port "br-eno2"
            Interface "br-eno2"
                type: internal
        Port "phy-br-eno2"
            Interface "phy-br-eno2"
                type: patch
                options: {peer="int-br-eno2"}

[root@c1 network-scripts]# brctl show
bridge name     bridge id               STP enabled     interfaces
qbr0662c702-ee          8000.06519b646040       no              qvb0662c702-ee
                                                        tap0662c702-ee
qbr2e67cada-93          8000.000000000000       no
qbr7c5ab97c-45          8000.000000000000       no
qbrab5445a6-00          8000.000000000000       no
virbr0          8000.000000000000       yes


Network node:

[root@junovm2 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 00:50:56:9c:63:76 brd ff:ff:ff:ff:ff:ff
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 00:50:56:9c:57:30 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether e2:3f:df:a9:53:84 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:50:56:9c:63:76 brd ff:ff:ff:ff:ff:ff
    inet 10.192.25.187/16 brd 10.192.255.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9c:6376/64 scope link 
       valid_lft forever preferred_lft forever
6: br-ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:50:56:9c:57:30 brd ff:ff:ff:ff:ff:ff
    inet 10.192.25.188/16 brd 10.192.255.255 scope global br-ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9c:5730/64 scope link 
       valid_lft forever preferred_lft forever
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 1e:f3:47:df:13:4d brd ff:ff:ff:ff:ff:ff

[root@junovm2 ~]# ovs-vsctl show
0765b72e-fc7d-4fc2-9620-9dc4141dbac7
    Bridge br-ex
        Port "ens192"
            Interface "ens192"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-ens224"
        Port "ens224"
            Interface "ens224"
        Port "br-ens224"
            Interface "br-ens224"
                type: internal
        Port "phy-br-ens224"
            Interface "phy-br-ens224"
                type: patch
                options: {peer="int-br-ens224"}
    Bridge br-int
        fail_mode: secure
        Port "tape819b230-5a"
            tag: 1
            Interface "tape819b230-5a"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens224"
            Interface "int-br-ens224"
                type: patch
                options: {peer="phy-br-ens224"}
    Bridge ""
        Port ""
            Interface ""
                type: internal
    ovs_version: "2.1.3"
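A follow-up check worth running (not yet done here) would be to dump the OpenFlow tables on each bridge to see where the DHCP traffic is being dropped; bridge names are the ones from the outputs above:

# network node
ovs-ofctl dump-flows br-ens224
ovs-ofctl dump-flows br-int

# compute node
ovs-ofctl dump-flows br-eno2
ovs-ofctl dump-flows br-int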


Attaching the following logs to the bug.

1. mysqldb
2. answers.txt
3. /etc/ folder from all the machines 
4. log folders from all the machines.

Comment 1 swamybabu 2014-11-17 12:17:15 UTC
Applied the following workaround as well, on both the compute and controller nodes:

https://openstack.redhat.com/Workarounds

nova boot: failure creating veth devices 
https://bugzilla.redhat.com/show_bug.cgi?id=1149043


Attached all the logs including the DB

Comment 2 swamybabu 2014-11-17 12:41:34 UTC
Created attachment 958227 [details]
compute logs

Comment 3 swamybabu 2014-11-17 13:17:49 UTC
Created attachment 958235 [details]
controller logs

Comment 4 swamybabu 2014-11-17 13:48:13 UTC
Created attachment 958236 [details]
network node logs

Comment 5 Assaf Muller 2015-06-22 13:05:53 UTC
Is this still relevant?

Comment 6 Chandan Kumar 2016-05-19 15:40:22 UTC
This bug is against a Version which has reached End of Life.
If it's still present in a supported release (http://releases.openstack.org), please update the Version field and reopen.

Comment 7 Red Hat Bugzilla 2023-09-18 00:10:55 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

