Bug 1124484 - Puppet fails on networker after br-ex is configured
Summary: Puppet fails on networker after br-ex is configured
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: Foreman (RHEL 6)
Hardware: Unspecified
OS: Unspecified
Target Milestone: z2
Sub Component: Installer
Assignee: Jason Guiditta
QA Contact: Toni Freger
Depends On:
Reported: 2014-07-29 15:19 UTC by John Eckersberg
Modified: 2014-11-04 17:01 UTC (History)
9 users

Fixed In Version: openstack-foreman-installer-2.0.29-1.el6ost
Doc Type: Known Issue
Doc Text:
If you use the same interface as your tunnel endpoint for GRE or VXLAN tenant networks and as your external network interface, your initial deployment will succeed, but subsequent puppet runs will fail with the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Local ip for ovs agent must be set when tunneling is enabled

This error occurs because, when the interface is assigned to the bridge, its IP address configuration moves to the bridge, while the puppet configuration still tries to read the IP address from the interface.

Workaround: Do not use the same interface as both your tunnel endpoint and your external network interface.
Clone Of:
Last Closed: 2014-11-04 17:01:36 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1800 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory 2014-11-04 22:00:19 UTC

Description John Eckersberg 2014-07-29 15:19:41 UTC
Description of problem:
After a successful run, all future puppet runs on the networker fail with:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Local ip for ovs agent must be set when tunneling is enabled at /etc/puppet/environments/production/modules/neutron/manifests/agents/ovs.pp:32 on node mac52540074c85b.example.org

This is because puppet has (correctly) moved the IP address off of the physical interface and onto the bridge, so the ipaddress_eth1 fact becomes empty:

[root@mac52540074c85b ~]# ip a s eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:4c:53:68 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe4c:5368/64 scope link 
       valid_lft forever preferred_lft forever
[root@mac52540074c85b ~]# ip a s br-ex
7: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:4c:53:68 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic br-ex
       valid_lft 2532sec preferred_lft 2532sec
    inet6 fe80::5815:46ff:fe8b:aada/64 scope link 
       valid_lft forever preferred_lft forever
[root@mac52540074c85b ~]# facter ipaddress_eth1
[root@mac52540074c85b ~]# facter ipaddress_br_ex

Version-Release number of selected component (if applicable):

Comment 1 Lars Kellogg-Stedman 2014-07-29 15:25:13 UTC
So really what we want is logic like:

if (eth0 is a member of a bridge)
  local_ip = ipaddress_BRIDGENAME
else
  local_ip = ipaddress_eth0
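A minimal Ruby sketch of that fallback (the function name and facts-hash shape are illustrative, not the installer's actual code): once the NIC is enslaved to the bridge, its `ipaddress_<nic>` fact goes empty and the address shows up under the bridge's fact instead, so the bridge fact is tried first.

```ruby
# Hypothetical helper: pick the OVS agent's local_ip from facter-style
# facts, preferring the bridge's fact over the (now empty) NIC fact.
def local_ip_for_ovs(facts, nic, bridge)
  bridge_fact = "ipaddress_#{bridge.tr('-', '_')}" # facter names 'br-ex' as 'br_ex'
  nic_fact    = "ipaddress_#{nic}"
  # The address lives on exactly one of the two; return the first non-empty value.
  [facts[bridge_fact], facts[nic_fact]].find { |ip| ip && !ip.empty? }
end
```

Before the bridge exists, only the NIC fact is set; afterwards, only the bridge fact is, and the same call returns the right address in both states.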

Comment 3 Jason Guiditta 2014-09-10 13:40:01 UTC
I just took a look at the facts on a configured node and saw no references to ipaddress_<BRIDGE>, only the actual NICs and lo.  Marek, do any of the recent network enhancements on the foreman side add facts for bridges?  If not, we may need to create our own custom fact here.
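If a custom fact were needed, it might look something like the sketch below. This is a hedged illustration only: the fact name `ipaddress_br_ex` mirrors facter's naming for interface facts, and the parsing helper is an assumption, not actual installer code.

```ruby
# Extract the first IPv4 address from `ip addr show` output.
def first_ipv4(ip_output)
  m = ip_output.match(%r{inet (\d+\.\d+\.\d+\.\d+)/\d+})
  m && m[1]
end

# Register the fact only when running under facter, so the helper above
# stays usable (and testable) on its own.
if defined?(Facter)
  Facter.add(:ipaddress_br_ex) do
    setcode do
      first_ipv4(Facter::Core::Execution.execute('ip -4 addr show br-ex'))
    end
  end
end
```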

Comment 4 Marek Hulan 2014-09-12 07:26:32 UTC
We don't add any bridge facts; on the other hand, bridge facts are a standard part of facter output. Recent changes in Foreman create bridge devices in Foreman so they can be used for puppet parameters. But if the bridge is created after the initial run, I'm not sure we are able to modify these parameters afterwards, since the network interfaces are assigned to subnets before the deploy is triggered. Maybe we could do it during the deployment process; Scott Seago would probably know better.

Comment 6 Jason Guiditta 2014-10-06 17:39:14 UTC
Patch posted:

Comment 7 Jason Guiditta 2014-10-07 13:05:52 UTC

Comment 10 Toni Freger 2014-10-26 09:17:05 UTC

I am trying to verify this bug.
There have been several changes since this bug was opened. When I try to put the External and Tenant networks on the same subnet, I get an error message:
"Tenant: Subnet cannot be shared with other traffic types in this deployment."

I suggest closing this bug, since this configuration is blocked in the Staypuft GUI.

Comment 11 Jason Guiditta 2014-10-27 13:45:04 UTC
@summer - The patch actually fixes the behavior: once the IP gets moved to the bridge (which only happens when you are not using a provider network, which I believe is the default in Staypuft now), subsequent puppet runs _do_ now work.  The puppet code now checks for the IP on both the bridge and the NIC-to-be-bridged, and returns the IP from whichever one has it.

@toni - I am not sure if you still need info here, as I see you marked it verified after the request.  The way to test this in Staypuft is via advanced parameters: set external_network_bridge to 'br-ex' (or any other name you want to call the bridge).  Previously, this would work fine on the first pass, but subsequent runs would fail.  This should no longer be the case.  In my opinion, this was a valid bug which is now corrected.

Comment 13 errata-xmlrpc 2014-11-04 17:01:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

