Description of problem:
When DVR is enabled, the compute node should have an interface on the external network (eth2 should be added to br-ex).
Currently there is no configuration that does this, so the compute node cannot communicate with the external network.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Install OpenStack with OSPD plus the DVR-enabling environment file (/usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dvr.yaml)
2. Create a router and external and internal networks; configure the router with a gateway and an internal network port. Create a FIP
3. Boot a VM and assign the FIP to it
No ping to 126.96.36.199 from the VM.
After investigation we saw that pings from the fg interface in the FIP namespace cannot reach the external network.
Adding the port manually on the compute node works around the problem:
ovs-vsctl add-port br-ex eth2
Ping should be successful.
We need a yaml file that configures compute node networking properly for DVR.
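For reference, a minimal sketch of what the missing piece might look like in the compute node's nic-config template (os-net-config schema); the interface name eth2 is taken from this report, and everything else here is illustrative:

```yaml
# Hypothetical fragment of the compute nic-config template (os-net-config
# schema). It adds the external interface (eth2, per this report) as a member
# of the br-ex OVS bridge so the DVR fip namespace can reach the external
# network.
network_config:
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: interface
        name: eth2            # interface wired to the external network
        primary: true
```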
Did you follow the instructions in the environment file for configuring OS::TripleO::Compute::Net::SoftwareConfig and OS::TripleO::Compute::Ports::ExternalPort?
(In reply to Brent Eagles from comment #2)
> Did you follow the instructions in the environment file for configuring
> OS::TripleO::Compute::Net::SoftwareConfig and
> OS::TripleO::Compute::Ports::ExternalPort ?
At the moment I am trying to deploy the overcloud with a proper compute.yaml (with an interface on the external network), and with the DVR yaml (/usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dvr.yaml) changed to use OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml instead of OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/noop.yaml.
If there is a more proper way to configure this, please advise which file should be edited and how.
The bug here is that we need "OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/noop.yaml" to be OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
You also need to set OS::TripleO::Compute::Net::SoftwareConfig to a template that configures external network access on the compute node. You can usually accomplish this by using, for the compute node, the same heat template that configures the network interfaces on the controller. For example, if you were using environments/net-multiple-nics.yaml for configuring the network interfaces, you would change OS::TripleO::Compute::Net::SoftwareConfig to ./network/config/multiple-nics/controller.yaml. You would change it in either net-multiple-nics.yaml or neutron-ovs-dvr.yaml, depending on which was last on the command line.
Unfortunately there really isn't a generic way to do this in an environment file -- yet.
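Concretely, the resource_registry overrides described above might look like the following sketch (the multiple-nics paths are illustrative, taken from the comment above; adjust them to whichever network config templates are actually in use):

```yaml
# Sketch of an environment file (or edits to neutron-ovs-dvr.yaml) applying
# the two changes described above. Paths are relative to the
# tripleo-heat-templates environments/ directory and are illustrative.
resource_registry:
  # Give the compute node a port on the external network (instead of noop):
  OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
  # Reuse the controller's network config so the compute node also gets the
  # external bridge:
  OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/multiple-nics/controller.yaml
```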
(In reply to Brent Eagles from comment #4)
> You also need to set OS::TripleO::Compute::Net::SoftwareConfig to a template
> that configures the external network access on the compute node. You can
> usually accomplish this by using the same heat template for configuring the
> network interfaces on the controller on the compute node. e.g. if you were
> using environment/net-multiple-nics.yaml for configuring the network
> interfaces, you would change OS::TripleO::Compute::Net::SoftwareConfig to
> be ./network/config/multiple-nics/controller.yaml. You would change it in
> either net-multiple-nics.yaml or neutron-ovs-dvr.yaml, depending on which
> was last in the command line.
> Unfortunately there really isn't a generic way to do this in an environment
> file -- yet.
I think I did it.
I have OS::TripleO::Compute::Net::SoftwareConfig: three-nics-vlans/compute.yaml
in my environment file, and I edited the compute yaml to configure the external network and the third NIC.
*** Bug 1388437 has been marked as a duplicate of this bug. ***
I tried a few different permutations of this, and every properly configured one worked for me. I have, however, accidentally stumbled into a few configurations that do not work, by forgetting to configure neutron or by messing up the yaml that generates data for os-net-config.
The problem with configuring DVR in an arbitrary environment is that it requires reconciling the TripleO overcloud configuration with the neutron configuration. The short version is that you need to make sure that a.) you create a network configuration on the heat-managed nodes that neutron can "use", and b.) you let neutron know about it. At the moment, there isn't an automatic way to do this.
The default configuration tends to rely on flat networking, using the control plane network for floating-ip/external network traffic. If you deviate from that, you need to let neutron know. What I think is happening in the environment described in this BZ is that an OVS bridge isn't being configured in the network deployment (the yaml referenced by OS::TripleO::Compute::Net::SoftwareConfig) and/or isn't being configured in NeutronExternalNetworkBridge as well as in the NeutronBridgeMappings parameter. The fact that eth2 is even available to add to br-ex supports this theory. There might be some other configs that may be necessary depending on how the systems are configured.
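To illustrate the neutron side of this, letting neutron know about the bridge might look like the following sketch; the physical network name datacentre and the bridge br-ex are the common TripleO defaults, used here as assumptions:

```yaml
# Sketch of the neutron-side parameters mentioned above. The physical network
# name (datacentre here) must match the provider network name used when
# creating the external network in neutron.
parameter_defaults:
  # Map the physical network name to the OVS bridge created by os-net-config:
  NeutronBridgeMappings: "datacentre:br-ex"
  # Bridge the L3 agent uses for external traffic (on newer releases an empty
  # string defers to the bridge mappings; shown here for completeness):
  NeutronExternalNetworkBridge: "br-ex"
```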
I did some brainstorming at summit, and there might be a way to simplify reconciling the os-net-config and neutron configuration, but it would be quite pervasive and I am not quite sure that it is even feasible yet.
I've updated the "DVR Docs" working document with the following text:
The neutron-ovs-dvr.yaml environment file configures the necessary DVR-specific parameters for enabling the feature. Configuring DVR for an arbitrary deployment configuration requires additional consideration. The requirements are:
a.) The interface connected to the physical network for external network traffic must be configured on both the compute and controller nodes.
b.) A bridge must be created on compute and controller nodes with the interface for external network traffic.
c.) Neutron must be configured to allow this bridge to be used.
The host networking configuration (a and b) is controlled by heat templates that pass configuration to the heat-managed nodes to be consumed by the os-net-config process. This is essentially automation of provisioning host networking. Neutron must also be configured (c) to match the provisioned networking environment. The defaults are not expected to work in production environments. In a proof-of-concept environment using the typical defaults, the steps are:
1. Verify that the value for OS::TripleO::Compute::Net::SoftwareConfig in environments/neutron-ovs-dvr.yaml is the same as the OS::TripleO::Controller::Net::SoftwareConfig value in use. This can normally be found in the network environment file in use when deploying the overcloud, e.g. environments/net-multiple-nics.yaml. This will create the appropriate external network bridge for the Compute node’s L3 agent. Note that if customizations to the network configuration for the compute node have been made, it may be necessary to add the appropriate configuration to those files instead.
2. Configure a neutron port for the compute node on the external network by modifying OS::TripleO::Compute::Ports::ExternalPort to an appropriate value, e.g. OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
3. Include environments/neutron-ovs-dvr.yaml as an environment file when deploying. e.g. openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dvr.yaml
4. Verify that L3 HA is disabled
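Put together, a small custom environment file implementing steps 1, 2 and 4 might look like this sketch, assuming the multiple-nics example and default parameter names (NeutronL3HA is the parameter that controls L3 HA):

```yaml
# dvr-overrides.yaml -- illustrative environment file combining the steps
# above; include it after neutron-ovs-dvr.yaml on the deploy command line.
resource_registry:
  # Step 1: same network config on the compute node as on the controller:
  OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/multiple-nics/controller.yaml
  # Step 2: neutron port for the compute node on the external network:
  OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml

parameter_defaults:
  # Step 4: DVR requires L3 HA to be disabled:
  NeutronL3HA: false
```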
For production environments, or test environments that require special customization (e.g. involving network isolation, dedicated NICs, etc.), the example environments can be used as a guide. Knowledge of how to configure Neutron for a particular networking environment is an asset.
Moving to documentation. We might be able to make this a little easier in the future, but it will likely always require some documentation for environments that are not homogeneous or that have special host requirements for roles that include neutron routing services. The above text is a start. A concrete example for an "off-menu" deployment might also be valuable.