Description of problem:

My answer file includes the following:

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000

My intent is to use GRE tunnels for tenant networks, but to use a VLAN on eth1 for my external network. The installation completed, and the configuration for the tenant GRE tunnels looks correct, but the configuration for the physical network was ignored.

Version-Release number of selected component (if applicable):
openstack-packstack.noarch 2013.2.1-0.9.dev756.el6 @openstack-havana

How reproducible:
Every time.

Steps to Reproduce:
1. Run packstack with the options above.
2. Examine the configuration for the server and agents.
3. Try to create a provider network using physnet1.

Actual results:
On the server node, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini does not contain network_vlan_ranges. On the nodes where neutron-openvswitch-agent runs, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini does not contain bridge_mappings, and br-eth1 has not been created. A provider external network cannot be created because physnet1 is not known.

Expected results:
On the server node, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini should contain "network_vlan_ranges = physnet1". On the nodes where neutron-openvswitch-agent runs, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini should contain "bridge_mappings=physnet1:br-eth1", and the br-eth1 OVS bridge should be created with eth1 added as a port.

Additional info:
N/A
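For reference, a sketch (not taken from an actual install) of roughly what the [OVS] section of /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini would be expected to contain on the agent nodes given the answer file above; option names follow the Havana openvswitch plugin, and local_ip is a placeholder:

```ini
[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
# local_ip = <GRE tunnel endpoint IP of this node>
# The two options missing in the reported behavior:
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1
```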
Erratum: this was due to guestfs.py being copied across from havana2, so I am closing it.
I don't think this was the bug Gilles intended to close.
Sorry, I did indeed close the wrong one; I had too many windows open.
Hi Bob,

I was trying to reproduce what you explained above. For the configuration above, are you using just a single server and a single client, or can I use multiple clients? I had another query, and it might be trivial: do I need to run the same script you gave above on both the server and the client?

Thanks,
Rushil
My setup used separate controller (i.e. server) and compute nodes, but that should not be necessary to reproduce the bug. I'd suggest starting with a single-node packstack install using an answer file with the settings above. The neutron-server and neutron-openvswitch-agent will both be configured to run on that node, and I suspect the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file will not contain the expected network_vlan_ranges value.
Bob,

My ovs_neutron_plugin.ini file does contain the following:

CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1

Is this what you were looking for? I have attached my ovs_neutron_plugin.ini file to this bug for reference.

Regards,
Rushil

(In reply to Bob Kukura from comment #5)
> My setup used separate controller (i.e. server) and compute nodes, but that
> should not be necessary to reproduce the bug. I'd suggest starting with a
> single-node packstack install using an answer file with the settings above.
> The neutron-server and neutron-openvswitch-agent will both be configured to
> run on that node, and I suspect the
> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file will not
> contain the expected network_vlan_ranges value.
Created attachment 817109 [details] Config file
Your attached ovs_neutron_plugin.ini file generated by packstack does not contain the following expected items:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1

These should be present when the packstack answer file contains:

CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

This holds regardless of whether the answer file sets CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE to gre, local, or vlan.
Fixing bug ownership.
This is still an issue. Snippet from my answer file:

CONFIG_NEUTRON_L3_HOSTS=10.16.137.106
CONFIG_NEUTRON_L3_EXT_BRIDGE=provider
CONFIG_NEUTRON_L2_PLUGIN=openvswitch
CONFIG_NEUTRON_DHCP_HOSTS=10.16.137.106
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_VLAN_RANGES=physext
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physext:br-em2
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-em2:em2
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000
CONFIG_NEUTRON_OVS_TUNNEL_IF=p3p1

I had to execute the following commands to get a functional external provider network on br-em2 alongside the GRE tenant traffic:

# Neutron server
ssh rhos1 "openstack-config --set /etc/neutron/plugin.ini OVS network_vlan_ranges physext"
ssh rhos1 "openstack-config --set /etc/neutron/plugin.ini OVS bridge_mappings physext:br-em2"

# L3 server, network node
ssh rhos6 "openstack-config --set /etc/neutron/plugin.ini OVS network_vlan_ranges physext"
ssh rhos6 "openstack-config --set /etc/neutron/plugin.ini OVS bridge_mappings physext:br-em2"
ssh rhos6 "ovs-vsctl add-br br-em2"
ssh rhos6 "ovs-vsctl add-port br-em2 em2"

# L2 servers, compute nodes
for i in rhos4 rhos5
do
    ssh $i "openstack-config --set /etc/neutron/plugin.ini OVS network_vlan_ranges physext"
    ssh $i "openstack-config --set /etc/neutron/plugin.ini OVS bridge_mappings physext:br-em2"
    ssh $i "ovs-vsctl add-br br-em2"
    ssh $i "ovs-vsctl add-port br-em2 em2"
done

# restart all openstack and neutron services
for i in 7 6 5 4 1
do
    ssh rhos$i "for svs in \$(chkconfig | awk '(/openstack/ || /neutron/) && /:on/ {print \$1}'); do service \$svs restart; done"
done

The important step seemed to be adding the physical interface to the bridge. My neutron.conf and plugin.ini had the appropriate entries, but "ovs-vsctl list-ports br-em2" did not show the em2 interface until I added it manually:

[root@rhos6 ~]# ovs-vsctl list-ports br-em2
em2
phy-br-em2
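The manual workaround above could be sketched as a small generator, hypothetical and not part of packstack, that produces the same per-node command lists for review before running; the physnet, bridge, and interface names default to the values used in this report:

```python
def remediation_commands(node, physnet="physext", bridge="br-em2",
                         iface="em2", add_bridge=True):
    """Build the ssh commands for one node, mirroring the workaround above."""
    cmds = [
        f"openstack-config --set /etc/neutron/plugin.ini OVS "
        f"network_vlan_ranges {physnet}",
        f"openstack-config --set /etc/neutron/plugin.ini OVS "
        f"bridge_mappings {physnet}:{bridge}",
    ]
    if add_bridge:
        # Only needed on nodes running neutron-openvswitch-agent;
        # creating the bridge and attaching the physical interface
        # was the critical missing step.
        cmds += [f"ovs-vsctl add-br {bridge}",
                 f"ovs-vsctl add-port {bridge} {iface}"]
    return [f'ssh {node} "{c}"' for c in cmds]
```

For the neutron-server-only node one would call it with add_bridge=False (two config commands); for the network and compute nodes with the default (four commands, including the bridge wiring).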