Bug 894924 - Modular layer 2 networking for Neutron
Summary: Modular layer 2 networking for Neutron
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 2.0 (Folsom)
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: Upstream M1
Target Release: 4.0
Assignee: Bob Kukura
QA Contact: Rami Vaknin
URL: https://blueprints.launchpad.net/neut...
Whiteboard:
Depends On: 988892 988893
Blocks: RHOS40RFE 988432 989651
 
Reported: 2013-01-14 02:41 UTC by Perry Myers
Modified: 2023-09-14 01:40 UTC (History)
CC List: 9 users

Fixed In Version: openstack-neutron-2013.2-0.3.b2.el6ost
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-19 23:57:20 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHEA-2013:1859
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory
Last Updated: 2013-12-21 00:01:48 UTC

Description Perry Myers 2013-01-14 02:41:38 UTC
Modular layer 2 networking for Quantum

Comment 1 Bob Kukura 2013-01-24 01:50:19 UTC
An upstream work-in-progress patch implementing the type management portion of the ML2 plugin is available for review at https://review.openstack.org/#/c/20105/.

Comment 2 Bob Kukura 2013-05-14 18:02:49 UTC
Initial version supporting linuxbridge, openvswitch, and hyperv L2 agents (simultaneously) should make the upstream H-1 milestone. Full MechanismDriver API supporting SDN controllers and top-of-rack switches is targeted for H-2 milestone. There are no current plans to back-port any of this to Grizzly (RHOS 3.0).

Comment 4 Bob Kukura 2013-07-23 00:55:45 UTC
This bug tracks the entire base set of upstream ML2 Neutron Plugin blueprints. Blueprints related to vendor-specific mechanism drivers (Arista, Cisco, Tail-F, OpenDaylight, ...) are not tracked here, and will instead be managed via the partner certification program.

In H-1 milestone:

https://blueprints.launchpad.net/neutron/+spec/modular-l2

In H-2 milestone:

https://blueprints.launchpad.net/neutron/+spec/ml2-mechanism-drivers
https://blueprints.launchpad.net/neutron/+spec/ml2-gre
https://blueprints.launchpad.net/neutron/+spec/ml2-vxlan

Targeted for H-3 milestone:

https://blueprints.launchpad.net/neutron/+spec/ml2-portbinding
https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api


Testing will need to be done with the local, vlan, gre, and vxlan tenant and provider network types, and with both the openvswitch and linuxbridge L2 agents, including interoperability between the L2 agents in the same deployment. Basic testing of the supported network types is similar to the existing openvswitch and linuxbridge plugins. The ml2-portbinding blueprint will require testing with nova's GenericVifDriver. The ml2-multi-segment-api blueprint will require testing with the multi-segment network configured via a switch (same network on VLAN on one switch port and untagged on other switch port) and created with the extended provider API. The extended provider API will likely be based on https://blueprints.launchpad.net/neutron/+spec/map-networks-to-multiple-provider-networks.
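For reference, the provider-network testing described above can be exercised with the standard neutron CLI provider extension attributes; a minimal sketch (the physical network name, segmentation IDs, and subnet range below are placeholder examples, not values from this bug):

```shell
# Provider networks of each supported type (names and IDs are examples only).
neutron net-create vlan-net --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 100
neutron net-create gre-net --provider:network_type gre \
    --provider:segmentation_id 1001
neutron net-create vxlan-net --provider:network_type vxlan \
    --provider:segmentation_id 2001
neutron net-create flat-net --provider:network_type flat \
    --provider:physical_network physnet1

# Tenant networks omit the provider attributes; the type is selected from
# the configured tenant network types.
neutron net-create tenant-net
neutron subnet-create tenant-net 10.0.0.0/24
```

These commands require a running deployment, so they are illustrative only.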

The current plan is for the upstream Neutron team to decide during the RC phase whether or not to officially declare the openvswitch and linuxbridge monolithic plugins deprecated in the havana release. They will be removed at the earliest in the icehouse release. The RHOS 4.0 documentation and packstack should continue to support the monolithic plugins, but emphasize using ml2 instead in havana, at least for new deployments.

Comment 5 Bob Kukura 2013-07-24 20:25:15 UTC
We've decided to use this BZ to track the base ml2 functionality (https://blueprints.launchpad.net/neutron/+spec/modular-l2) merged during H-1. This base functionality is equivalent to the existing openvswitch and linuxbridge plugins, and includes support for the local, flat, and vlan network types.

Changes in this base functionality that may be relevant to QE, users, documentation, etc. include:

* Different RPM
* Different plugin class
* Different config file
* Different config file sections
* tenant_network_types is a list, replacing tenant_network_type
* Can work with either the openvswitch-agent or linuxbridge-agent, or both simultaneously on different nodes. 
* Also can work with hyperv-agent, but I doubt we plan to test/support that.
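As an illustration of the config-file differences listed above, a minimal [ml2] section might look like the following (values are examples for illustration, not a recommended configuration):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# tenant_network_types is an ordered list; the monolithic openvswitch and
# linuxbridge plugins took a single tenant_network_type value instead.
type_drivers = local,flat,vlan
tenant_network_types = vlan
# Agents that may serve this deployment, possibly simultaneously on
# different nodes.
mechanism_drivers = openvswitch,linuxbridge
```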

Testing should be similar to testing done for the openvswitch and linuxbridge plugins, including testing of tenant networks, provider networks, DHCP, L3 agent, etc., except with ml2 configured as the core plugin. The only additional testing to consider is use of different types of agents on different nodes in the same deployment.

The tunnel network blueprints will be covered under separate BZs, allowing the base functionality to be tested prior to OVS tunnel support being available.

The ml2-portbinding blueprint and ml2-multi-segment-api blueprints will also be tracked via separate BZs. The ml2-mechanism-drivers blueprint is internal, and cannot be tested separately from the various network mechanisms that use it, so it will not be tracked with its own BZ.

Comment 11 Rami Vaknin 2013-11-18 13:16:05 UTC
I've enabled ml2 using the steps below (there is no packstack support for ml2 yet) on rhos 4.0 on rhel 6.5 with the 2013-11-08.1 puddle. I see the ml2 tables in the database, and I managed to create networks, subnets, and a router, but I get a 404 on security-group list and failures while listing instances:

# neutron security-group-list
404 Not Found

The resource could not be found.

# nova list --all-tenant
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-bae51dfe-2e35-4d87-b4de-4988a68e2054)

Steps
=====
sudo yum install -y openstack-neutron-ml2

In /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vlan,gre,vxlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = range1:100:101

[DATABASE]
sql_connection = mysql://neutron:${db_password}@${db_ip}/ovs_neutron
sql_max_retries = 10
reconnect_interval = 2
sql_idle_timeout = 3600

In /etc/neutron/neutron.conf, [DEFAULT]:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins += neutron.services.l3_router.l3_router_plugin.L3RouterPlugin   (appended to the existing service_plugins value)

Point plugin.ini at the ml2 config:

/bin/rm -f /etc/neutron/plugin.ini
/bin/ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
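After changing core_plugin and the plugin.ini symlink, neutron-server has to be restarted to load ML2; a sketch assuming the RHEL 6 SysV service name used in this release:

```shell
service neutron-server restart
```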

Comment 12 Bob Kukura 2013-12-06 17:52:56 UTC
The "neutron security-group-list" command in comment 11 is failing because the securitygroups extension is disabled in the server unless firewall_driver (which should only apply in the agent) is set. Work around this by adding the following to /etc/neutron/plugins/ml2/ml2_conf.ini:

[securitygroup]
firewall_driver=dummy_value_to_enable_security_groups_in_server

Comment 13 Bob Kukura 2013-12-06 23:33:54 UTC
Additional information on configuring ML2 is at http://openstack.redhat.com/Modular_Layer_2_%28ML2%29_Plugin.

Comment 14 Rami Vaknin 2013-12-08 11:59:32 UTC
Thanks, this workaround solved the failures.

Verified on rhos 4.0 running on rhel 6.5 with 2013-12-06.3 puddle, openstack-neutron-2013.2-13.el6ost, openstack-neutron-ml2-2013.2-13.el6ost.

Comment 17 errata-xmlrpc 2013-12-19 23:57:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2013-1859.html

Comment 18 Red Hat Bugzilla 2023-09-14 01:40:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

