Bug 1190185
| Summary: | OFI not reliably setting IP for tenant bridge when using tunnels | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Steve Reichard <sreichar> |
| Component: | openstack-foreman-installer | Assignee: | Jason Guiditta <jguiditt> |
| Status: | CLOSED ERRATA | QA Contact: | Asaf Hirshberg <ahirshbe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.0 (Juno) | CC: | aberezin, ahirshbe, ajeain, arkady_kanevsky, cdevine, christopher_dearborn, gdubreui, jjarvis, joherr, John_walsh, kambiz, kurt_hey, mburns, morazi, oblaut, randy_perryman, rhos-maint, sreichar, yeylon |
| Target Milestone: | z2 | Keywords: | ZStream |
| Target Release: | Installer | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-04-07 15:08:16 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1171850 | | |
Description
Steve Reichard
2015-02-06 15:13:32 UTC
I wanted to add to this bug that, in a VLAN tenant network based deployment, I have seen that not all of the controllers' interfaces for br-ex were set up correctly. In my deployment I have 5 controllers, and 2 of the 5 were missing:

OVSDHCPINTERFACES=<external network interface>
OVS_EXTRA="set bridge br-ex other-config:hwaddr=<above interface's mac>"

e.g.

OVSDHCPINTERFACES=eno1
OVS_EXTRA="set bridge br-ex other-config:hwaddr=b8:ca:3a:61:42:d0"

Furthermore, the same settings (OVSDHCPINTERFACES and OVS_EXTRA) were missing from the br-eno2 interface definition (eno2 is the physical interface associated with tenant traffic in my deployment). Without the OVS options in /etc/sysconfig/network-scripts/ifcfg-br-eno2, the br-eno2 interface does not come up with an address, and even with the VLAN tenant network type I was unable to ping guests, once they were launched, from a host on the tenant network VLAN (outside of OSP). Once the options were added and the controllers rebooted, I could use the tenant network properly (see the ifcfg sketch after these comments). Note that my deployment is OSP5.

This looks to me like something the puppet-vswitch provider should be handling. Gilles, you have worked on that area before; any thoughts, or am I off base here?

(In reply to Kambiz Aghaiepour from comment #5)
> Note that my deployment is OSP5.

The vswitch providers have changed between OSP5 and OSP6 (and depending on the OSP5 version as well), fixing earlier bugs.

(In reply to Jason Guiditta from comment #6)
> This looks to me like something the puppet-vswitch provider should be
> handling.

I'm not sure such a scenario, as described in comment #0, is covered by OFI. The vswitch providers vs_bridge/vs_port, when defined, will create an OVS bridge and attach a port (interface) to it, making it resilient by writing the ifcfg files accordingly. This normally happens by default on a neutron network L3 agent. The rest is beyond vswitch scope.

We are seeing it multiple times on HA controller nodes on node reboot. Need to bump its priority to be fixed in A2.

From what I understand of the initial problem description, there is no issue here, unless other behaviour is expected from either puppet-vswitch or OFI, in which case I'd suggest creating an RFE accordingly. The actual default behaviour of puppet-vswitch is: if the physical interface to be attached to the bridge exists but has no link (the interface is down), then the bridge is configured with DHCP, because there is no IP address to transfer over from the physical interface. Conversely, if the link is up, the existing physical interface's configuration is associated with the bridge configuration, whether it is static or dynamic. In all cases, at the end of the process no IP address (static or dynamic) remains on the physical interface. This behaviour might be confusing, especially when no IP address is desired on the physical interface.

(In reply to arkady kanevsky from comment #8)
> We are seeing it multiple times on HA controller nodes on node reboot.
> Need to bump its priority to be fixed in A2.

Is it that, after a reboot, a bridge interface ends up defined as DHCP but is expected to be static? If yes, then assign an IP to the physical interface beforehand. If no, could you please provide more information and describe in detail what you are seeing and what is expected.

We are seeing the following: after the install is completed, the interface's IP information is removed and not set to DHCP. We discovered this on reboot; the interface did not come up and connectivity was not there. We do not know when it happens.
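For reference, a minimal sketch of what the tenant bridge's ifcfg pair could look like once the missing options are in place, following the eno1/br-ex example above. The MAC address and the DHCP boot protocol below are assumptions for illustration, not values taken from this deployment:

```
# /etc/sysconfig/network-scripts/ifcfg-eno2 -- physical tenant interface, enslaved to the bridge
DEVICE=eno2
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-eno2
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br-eno2 -- OVS bridge that carries the tenant IP
DEVICE=br-eno2
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eno2
# The MAC below is hypothetical; use the MAC of eno2 on the host in question.
OVS_EXTRA="set bridge br-eno2 other-config:hwaddr=aa:bb:cc:dd:ee:ff"
```

Pinning other-config:hwaddr to the physical NIC's MAC keeps the DHCP client identity stable when the address moves from the interface onto the bridge, which is what OVSDHCPINTERFACES/OVS_EXTRA accomplish in the working ifcfg files described above.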
Addressing comments 7 & 9. I have been using OFI in support of the Dell solution for several releases.

#7: Can you explain "I'm not sure such a scenario, as described in comment #0, is covered by OFI", since I've been doing this for a while? If the issue is having a separate NIC for tenant and L3, I can show it to you.

#9: The br-ex is not starting. That is a problem. This is the discussion Jay and I had about it. As Jay said in comment 0:

> So, on the first run, it will resolve to ovs_tunnel_iface, and once
> the IP is moved, it should use external_network_bridge.

I question why tunnel traffic would be expected on the external bridge. As I said, my config keeps these on separate NICs, as we have been doing since OSP3.

Could you please provide:
- the configuration of the network interfaces before and after installation
- the OpenStack configuration used for installation

Verified on A2. I deployed HA neutron (3 controllers, 1 compute) with separate subnets for tenant/external/public API/admin/management. The tenant subnet was configured with ipam=none and boot mode=dhcp. The deployment did not have any problem related to puppet waiting for an IP for the tenant bridge. After the deployment finished, I launched some instances and rebooted the controllers; the systems booted up and the bond interface kept its IP.

rhel-osp-installer-client-0.5.7-1.el7ost.noarch
foreman-installer-1.6.0-0.3.RC1.el7ost.noarch
openstack-foreman-installer-3.0.17-1.el7ost.noarch
rhel-osp-installer-0.5.7-1.el7ost.noarch
puppet-3.6.2-2.el7.noarch
puppet-server-3.6.2-2.el7.noarch
openstack-puppet-modules-2014.2.8-2.el7ost.noarch

Can you also make sure you can ssh into a deployed instance on its public IP address, and ssh between 2 instances in the same project on their private IP addresses?

I ran the Rally boot-run-command scenario on it: Rally creates a VM, then sshes into it using paramiko and runs a script. The tests completed successfully.

Asaf, can you confirm whether this was validated on bare metal or on a virtualized setup?

The setup is built of bare-metal hosts connected with a bond of 2x10G using trunking (controllers and compute). The external/public/tenant/admin API networks run over these bonds using different VLANs. The tenant network (VXLAN) uses external DHCP over the bond's native VLAN. The host provisioning network uses a separate 1G interface.

Ofer

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0791.html
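As a quick spot-check of the verification described above, something like the following could be run on a controller after reboot. These are standard iproute2/ovs-vsctl/grep commands; br-ex, br-eno2 and eno2 come from this report, while the floating IP 192.0.2.10 and the cirros user are placeholders, not values from this deployment:

```
# Confirm the OVS options were actually written and applied
grep -E 'OVSDHCPINTERFACES|OVS_EXTRA' /etc/sysconfig/network-scripts/ifcfg-br-ex
ovs-vsctl get bridge br-ex other-config:hwaddr

# Confirm the bridge (not the enslaved NIC) holds the IP after a reboot
ip addr show br-eno2
ip addr show eno2

# Confirm reachability of a running instance (floating IP is hypothetical)
ping -c 3 192.0.2.10
ssh cirros@192.0.2.10 'ip addr'
```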