Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1143057

Summary: HA: Unable to complete HA deployment because, during construction of a bridge, interfaces lose their IP addresses.
Product: Red Hat OpenStack
Reporter: Leonid Natapov <lnatapov>
Component: openstack-puppet-modules
Assignee: Gilles Dubreuil <gdubreui>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Leonid Natapov <lnatapov>
Severity: high
Docs Contact:
Priority: high
Version: 5.0 (RHEL 7)
CC: aberezin, gdubreui, lars, lbezdick, lnatapov, mburns, oblaut, sasha, sclewis, yeylon
Target Milestone: ---
Keywords: ZStream
Target Release: Installer
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-09-24 13:37:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1142873
Attachments:
- "ip a" output before puppet run

Description Leonid Natapov 2014-09-17 20:16:18 UTC
HA deployment fails because, during the construction of br-ex, the interface loses its IP address. This looks like a puppet-vswitch issue: that module is responsible for creating br-ex and moving the IP address from the slave interface onto the bridge.
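For context, the hand-off that puppet-vswitch is expected to perform can be sketched roughly as follows. This is a minimal illustration, not the module's actual implementation (the module drives this through Puppet resources), and the 10.0.0.2/24 address is a placeholder:

```shell
# Sketch of the bridge/IP hand-off puppet-vswitch should perform.
# Requires root and Open vSwitch; address below is illustrative only.
ovs-vsctl --may-exist add-br br-ex            # create the external bridge
ovs-vsctl --may-exist add-port br-ex enp3s0f0 # enslave the uplink NIC
ip addr flush dev enp3s0f0                    # remove the address from the slave...
ip addr add 10.0.0.2/24 dev br-ex             # ...and reassign it to the bridge
ip link set br-ex up
```

The failure reported here corresponds to the first three steps happening without the last two, leaving the host with no address on either the NIC or the bridge.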

I have 3 networks: external is associated with enp3s0f0, tenant with enp3s0f1, and provisioning (default) with eno1.

Here is the output of ip address show after the error:
--------------------------------------------------------
[root@mac848f69fbc4c3 yum.repos.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether a0:36:9f:22:e6:f4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a236:9fff:fe22:e6f4/64 scope link 
       valid_lft forever preferred_lft forever
3: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:8f:69:fb:c4:c3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.4/24 brd 192.168.0.255 scope global dynamic eno1
       valid_lft 447sec preferred_lft 447sec
    inet6 fe80::868f:69ff:fefb:c4c3/64 scope link 
       valid_lft forever preferred_lft forever
4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 84:8f:69:fb:c4:c4 brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether a0:36:9f:22:e6:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.2/24 brd 192.168.200.255 scope global enp3s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fe22:e6f6/64 scope link 
       valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether c6:1e:e4:1d:a9:fc brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 72:7e:46:b6:c1:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::707e:46ff:feb6:c140/64 scope link 
       valid_lft forever preferred_lft forever
9: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether a0:36:9f:22:e6:f4 brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 82:2c:37:f5:b4:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::802c:37ff:fef5:b440/64 scope link 
       valid_lft forever preferred_lft forever


Here is the output of puppet run:
---------------------------------
    Sep 17 19:42:25 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron_plugin_ovs[agent/tunnel_types]/ensure) created
    Sep 17 19:42:25 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron_plugin_ovs[agent/tunnel_types]) Scheduling refresh of Service[neutron-plugin-ovs-service]
    Sep 17 19:42:25 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron_plugin_ovs[OVS/bridge_mappings]/ensure) created
    Sep 17 19:42:25 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron_plugin_ovs[OVS/bridge_mappings]) Scheduling refresh of Service[neutron-plugin-ovs-service]
    Sep 17 19:42:25 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl add-br br-ex
    Sep 17 19:42:25 mac848f69fbc4c3 kernel: device br-ex entered promiscuous mode
    Sep 17 19:42:25 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron::Plugins::Ovs::Bridge[physnet-external:br-ex]/Vs_bridge[br-ex]/ensure) created
    Sep 17 19:42:25 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl add-port br-ex enp3s0f0
    Sep 17 19:42:25 mac848f69fbc4c3 kernel: device enp3s0f0 entered promiscuous mode
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0 enp3s0f0: detected SFP+: 3
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-ex
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: device enp3s0f0 left promiscuous mode
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: device br-ex left promiscuous mode
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0 enp3s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0: removed PHC on enp3s0f0
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex enp3s0f0
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: device br-ex entered promiscuous mode
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0: registered PHC device on enp3s0f0
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: IPv6: ADDRCONF(NETDEV_UP): enp3s0f0: link is not ready
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0 enp3s0f0: detected SFP+: 3
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-port br-ex enp3s0f0
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: device enp3s0f0 entered promiscuous mode
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex
    Sep 17 19:42:26 mac848f69fbc4c3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex
    Sep 17 19:42:26 mac848f69fbc4c3 puppet-agent[11130]: (/Stage[main]/Neutron::Agents::Ovs/Neutron::Plugins::Ovs::Port[br-ex:enp3s0f0]/Vs_port[enp3s0f0]/ensure) created
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: ixgbe 0000:03:00.0 enp3s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
    Sep 17 19:42:26 mac848f69fbc4c3 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp3s0f0: link becomes ready
    Sep 17 19:42:32 mac848f69fbc4c3 puppet-agent[11130]: Execution of '/usr/bin/yum -d 0 -e 0 -y install python-nova' returned 1: Error downloading packages:
    Sep 17 19:42:32 mac848f69fbc4c3 puppet-agent[11130]: python-paramiko-1.11.3-1.el7ost.noarch: [Errno 256] No more mirrors to try.
    Sep 17 19:42:32 mac848f69fbc4c3 puppet-agent[11130]: libjpeg-turbo-1.2.90-5.el7.x86_64: [Errno 256] No more mirrors to try.
    Sep 17 19:42:32 mac848f69fbc4c3 puppet-agent[11130]: python-cheetah-2.4.4-5.el7.x86_64: [Errno 256] No more mirrors to try.

--------------------------------------------
Installed package versions:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
openstack-puppet-modules-2014.1-21.8.el6ost.noarch
rhel-osp-installer-0.3.5-1.el6ost.noarch
openstack-foreman-installer-2.0.24-1.el6ost.noarch

Comment 1 Leonid Natapov 2014-09-17 20:24:32 UTC
Deployment was on bare metal, Neutron with VXLAN.

Comment 2 Leonid Natapov 2014-09-18 04:30:18 UTC
Created attachment 938741 [details]
"ip a" output before puppet run

Comment 3 Ivan Chavero 2014-09-18 15:32:45 UTC
Gilles, I think this is related to the puppet-vswitch module. Can you give us a hand here?

Comment 4 Lon Hohberger 2014-09-18 15:35:30 UTC
What version of openstack-puppet-modules?

Comment 5 Alexander Chuzhoy 2014-09-18 15:39:45 UTC
openstack-puppet-modules-2014.1-21.8.el6ost.noarch

Comment 6 Lukas Bezdicka 2014-09-18 15:53:05 UTC
I need to know whether this was a static IP or DHCP, and ideally the contents of the ifcfg files before and after.
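For reference, the kind of ifcfg pair the installer is expected to leave behind for an OVS bridge looks roughly like this (device names taken from this report; the static address is a placeholder, and the actual files on the affected host were never provided):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp3s0f0 -- slave side, no IP
DEVICE=enp3s0f0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-br-ex -- bridge carries the IP
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.0.0.2        # placeholder address
NETMASK=255.255.255.0
ONBOOT=yes
```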

Comment 7 Leonid Natapov 2014-09-18 19:52:32 UTC
The "ip a" output appears in the bug; see the bug body and the attached file.

Comment 8 Gilles Dubreuil 2014-09-19 03:33:27 UTC
I suspect some Quickstack parameters are missing.

Could you please:

1. Check Foreman Smart class variables for the corresponding staypuft deployment:
- quickstack::neutron::networker::ovs_bridge_mappings 
- quickstack::neutron::networker::ovs_bridge_uplinks

2. Provide output generated by running following on the foreman server:
$ cd /usr/share/openstack-foreman-installer/bin/
# Check user/passwd in the header of the quickstack_defaults.rb file
$ ./quickstack_defaults.rb list_parameters > quickstack_defaults_params
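Based on the resource names visible in the puppet log above (Bridge[physnet-external:br-ex] and Port[br-ex:enp3s0f0]), the two parameters would be expected to hold values roughly like the following; these exact values are an inference from the log, not confirmed from the deployment:

```shell
# Expected Foreman smart class variable values (inferred, illustrative)
quickstack::neutron::networker::ovs_bridge_mappings = ['physnet-external:br-ex']
quickstack::neutron::networker::ovs_bridge_uplinks  = ['br-ex:enp3s0f0']
```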

Comment 9 Gilles Dubreuil 2014-09-19 06:43:20 UTC
Could you please also provide ifcfg-enp3s0f0 files from before and after installation?

Comment 10 Mike Burns 2014-09-19 11:50:18 UTC
If my understanding is correct (far from a sure thing), this comes about because the external network and the tenant network are colocated. If they are separate, this should not happen.

Lars, can you please keep me honest? ^^

Comment 11 Lars Kellogg-Stedman 2014-09-19 17:33:06 UTC
The external and tenant networks are not colocated in this case, based on the initial comment:

> I have 3 networks: external is associated with  enp3s0f0 ,tenant 
> with enp3s0f1 and provisioning (default) on eno1.

I'm just starting to look into this in more detail.

Comment 12 Lars Kellogg-Stedman 2014-09-22 15:03:21 UTC
If possible, can you update this bz with the information Gilles requested in comments #8 and #9?

Comment 13 Leonid Natapov 2014-09-22 20:44:13 UTC
Unfortunately, I didn't keep that particular deployment. I have only one environment, so I moved on to installing a new puddle; therefore, I can't provide the required information.

Comment 14 Mike Burns 2014-09-23 13:15:47 UTC
Is the issue not reproducible with the new content?

Comment 15 Mike Burns 2014-09-24 13:37:13 UTC
Please reopen if this is reproduced.