Bug 1017281 - OpenStack puppet modules do not provide installation/configuration of the 'ml2' core plugin
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: z4
Target Release: 4.0
Assigned To: John Eckersberg
QA Contact: Nir Magnezi
Keywords: Rebase, TestBlocker, ZStream
Depends On: 1017280
Blocks:
Reported: 2013-10-09 10:49 EDT by Perry Myers
Modified: 2016-04-26 09:49 EDT (History)
18 users

See Also:
Fixed In Version: openstack-foreman-installer-1.0.4-1.el6ost
Doc Type: Known Issue
Doc Text:
Foreman does not support deployment of the ML2 Networking plug-in. The ML2 plug-in can be implemented in manual deployments, or by initially deploying the Open vSwitch plug-in using Foreman and then converting the installation to use ML2. Refer to the RDO documentation for further information on the conversion process: http://openstack.redhat.com/Modular_Layer_2_%28ML2%29_Plugin
Story Points: ---
Clone Of: 1017280
Environment:
Last Closed: 2014-04-08 14:21:13 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
OpenStack gerrit 60283 None None None Never

Description Perry Myers 2013-10-09 10:49:49 EDT
+++ This bug was initially created as a clone of Bug #1017280 +++

+++ This bug was initially created as a clone of Bug #1017144 +++

Version
=======
RHOS 4.0 on RHEL 6.5

Description
===========
Currently, it's impossible to auto-configure neutron to use the "ml2" plugin.

Packstack should allow this, which means it should provide an option to choose the core plugin and set core_plugin to "neutron.plugins.ml2.plugin.Ml2Plugin" after installing openstack-neutron-ml2.noarch.
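For illustration, the end state such an option would need to produce is roughly the following neutron.conf fragment (a sketch; the exact file layout and option placement may differ by release):

```ini
# Illustrative only: what the installer would write to
# /etc/neutron/neutron.conf after installing openstack-neutron-ml2.
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
```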

--- Additional comment from Perry Myers on 2013-10-09 10:44:08 EDT ---

I think this needs to be done by a combination of rkukura (since he did the majority of the work on ml2 plugin and knows it best) along with the Packstack and Puppet folks.

This is actually not Packstack-specific.  We'll need to clone this to the puppet modules component and to Foreman as well.
Comment 1 Jason Guiditta 2013-10-21 11:34:33 EDT
It looks like foreman gets this for free once the puppet modules are updated, but I'm claiming this so it is clear someone looked at it from the foreman side.
Comment 3 Jason Guiditta 2013-12-05 12:34:42 EST
This still looks to be waiting on the packstack-modules-puppet RPM to be updated, as the ml2 support was _just_ merged to packstack upstream today.
Comment 4 Jason Guiditta 2013-12-13 12:02:07 EST
Bruce, this text makes sense to me until the issue is addressed.  Let me know if anything else is needed here.
Comment 5 Bruce Reeler 2013-12-15 20:55:16 EST
(In reply to Jason Guiditta from comment #4)
> Bruce, this text makes sense to me until the issue is addressed.  Let me
> know if anything else is needed here.

Thanks Jason, should be fine as-is.
Comment 6 Bob Kukura 2013-12-16 14:05:40 EST
The ML2 plugin is present and works in RHOS 4. What is missing is support for deploying with ML2 in puppet, packstack and foreman. It is possible to deploy with these tools using the openvswitch plugin and then manually convert to ML2, as is documented for RDO at http://openstack.redhat.com/Modular_Layer_2_%28ML2%29_Plugin.

Instead of stating "OpenStack Networking does not have ML2 plugin support", why not state that packstack and foreman don't yet support deploying with ML2, and provide a reference to a RHOS KB article with instructions for manual conversion? Of course, we also need to decide whether we officially support customers using ML2.
Comment 10 Mike Orazi 2014-01-10 15:52:22 EST
The puppet module exists, but packstack and foreman are experiencing issues pulling this functionality in presently.  Pushing to A2 so they can remain aligned.
Comment 12 Jason Guiditta 2014-02-04 10:28:50 EST
Pull request is up:

https://github.com/redhat-openstack/astapor/pull/107
Comment 14 Nir Magnezi 2014-02-26 06:57:37 EST
Tested with: openstack-foreman-installer-1.0.4-1.el6ost.noarch

Configured a Controller (Neutron) to work with OVS+GRE ML2 as follows:

ml2_mechanism_drivers: ["openvswitch"]
ml2_network_vlan_range: ["10:50"]
ml2_tenant_network_type: ["gre"]
ml2_tunnel_id_ranges: ["20:100"]
neutron_core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin

I also changed all relevant IP addresses to match my server address.

Result:
# puppet agent -t -v
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/netns_support.rb
Info: Loading facts in /var/lib/puppet/lib/facter/network.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/hamysql_active_node.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables_persistent_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/ip6tables_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/ipa_client_configured.rb
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Cannot alias Package[neutron-plugin-ml2] to ["openstack-neutron-ml2"] at /usr/share/packstack/modules/neutron/manifests/plugins/ml2.pp:118; resource ["Package", "openstack-neutron-ml2"] already declared at /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/neutron/controller.pp:178 at /usr/share/packstack/modules/neutron/manifests/plugins/ml2.pp:118 on node <FQDN>
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
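The "already declared" failure above is a Puppet duplicate-resource collision: both the packstack neutron module and the quickstack controller manifest declare the openstack-neutron-ml2 package. A common way to avoid such collisions (a sketch of the general idiom, not the actual fix applied in this bug) is to guard the declaration with ensure_packages from puppetlabs-stdlib:

```puppet
# Sketch only: ensure_packages() declares the package only if no other
# manifest has already declared it, avoiding the duplicate-declaration error.
ensure_packages(['openstack-neutron-ml2'])
```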
Comment 15 Jason Guiditta 2014-02-26 10:56:18 EST
Looks like the neutron puppet module was updated under us with the upstream fix to install those packages.  I'll make a patch to remove the duplication.  Can you tell me which version of packstack-modules-puppet this was using, so I can verify my fix?
Comment 16 Jason Guiditta 2014-02-26 11:27:06 EST
Nir, actually, I was just starting to make the change, and remembered that I already accounted for this change getting merged in.  Would you please retest setting the parameter $ml2_install_deps to false instead of true?  We can still remove the call now that we have the needed dep, but if that param does what it was designed for, this should not be a blocker for the A2 release, just a doc note, at least in my opinion. Going to flip this back to you, let me know how you make out.
Comment 17 Nir Magnezi 2014-03-03 09:52:19 EST
(In reply to Jason Guiditta from comment #16)
> Nir, actually, I was just starting to make the change, and remembered that I
> already accounted for this change getting merged in.  Would you please
> retest setting the parameter $ml2_install_deps to false instead of true?  We
> can still remove the call now that we have the needed dep, but if that param
> does what it was designed for, this should not be a blocker for the A2
> release, just a doc note, at least in my opinion. Going to flip this back to
> you, let me know how you make out.

Re-tested (with the latest version) but it still doesn't work as expected.

Reopening.
Tested NVR: openstack-foreman-installer-1.0.4-1.el6ost.noarch

Tried to install OpenStack with Neutron ML2 (OVS, GRE, and VLANs) as follows:

Controller (Neutron):
=====================
ml2_install_deps=false
ml2_network_vlan_ranges=["185:185"]
ml2_tenant_network_types=["vlan", "gre"]
ml2_tunnel_id_ranges=["1:1000"]
ml2_type_drivers=["gre", "vlan"]
ovs_vlan_ranges=ext_net:185:185
neutron_core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin

Neutron Networker:
==================
ovs_bridge_mappings=["ext_net:br-ex"]
ovs_bridge_uplinks=["br-ex:eth3.185"]
ovs_tunnel_iface=eth3
ovs_tunnel_types=["gre"]
ovs_vlan_ranges=ext_net:185:185
tenant_network_type=gre,vlan

Compute (Neutron):
==================
neutron_core_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
ovs_tunnel_iface=eth3
ovs_tunnel_types=["gre"]


All services are up and running, but this is what I get when I try to use Neutron:

# neutron router-list
404 Not Found

The resource could not be found.
   
snipped from neutron server.log:

2014-03-03 16:37:50.016 9700 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.160.25
2014-03-03 16:37:54.139 9700 ERROR neutron.openstack.common.rpc.amqp [-] Exception during message handling
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 438, in _process_data
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     **args)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/common/rpc.py", line 45, in dispatch
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     neutron_ctxt, version, method, namespace, **kwargs)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 63, in get_active_networks_info
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     networks = self._get_active_networks(context, **kwargs)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 43, in _get_active_networks
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     context, host)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/neutron/db/agentschedulers_db.py", line 185, in list_active_networks_on_active_dhcp_agent
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     net_ids = [item[0] for item in query]
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2227, in __iter__
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     return self._execute_and_instances(context)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2242, in _execute_and_instances
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     result = conn.execute(querycontext.statement, self._params)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1449, in execute
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     params)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     compiled_sql, distilled_params
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     context)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     context)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 331, in do_execute
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     cursor.execute(statement, parameters)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     self.errorhandler(self, exc, value)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp     raise errorclass, errorvalue
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp ProgrammingError: (ProgrammingError) (1146, "Table 'neutron.networkdhcpagentbindings' doesn't exist") 'SELECT networkdhcpagentbindings.network_id AS networkdhcpagentbindings_network_id \nFROM networkdhcpagentbindings \nWHERE networkdhcpagentbindings.dhcp_agent_id = %s' ('9dab8d15-e6b3-4f1a-8084-de3cd07a563d',)
2014-03-03 16:37:54.139 9700 TRACE neutron.openstack.common.rpc.amqp 
2014-03-03 16:37:54.142 9700 ERROR neutron.openstack.common.rpc.common [-] Returning exception (ProgrammingError) (1146, "Table 'neutron.networkdhcpagentbindings' doesn't exist") 'SELECT networkdhcpagentbindings.network_id AS networkdhcpagentbindings_network_id \nFROM networkdhcpagentbindings \nWHERE networkdhcpagentbindings.dhcp_agent_id = %s' ('9dab8d15-e6b3-4f1a-8084-de3cd07a563d',) to caller
2014-03-03 16:37:54.142 9700 ERROR neutron.openstack.common.rpc.common [-] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 438, in _process_data\n    **args)\n', '  File "/usr/lib/python2.6/site-packages/neutron/common/rpc.py", line 45, in dispatch\n    neutron_ctxt, version, method, namespace, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n    result = getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 63, in get_active_networks_info\n    networks = self._get_active_networks(context, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 43, in _get_active_networks\n    context, host)\n', '  File "/usr/lib/python2.6/site-packages/neutron/db/agentschedulers_db.py", line 185, in list_active_networks_on_active_dhcp_agent\n    net_ids = [item[0] for item in query]\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2227, in __iter__\n    return self._execute_and_instances(context)\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2242, in _execute_and_instances\n    result = conn.execute(querycontext.statement, self._params)\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1449, in execute\n    params)\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement\n    compiled_sql, distilled_params\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context\n    context)\n', '  File 
"/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context\n    context)\n', '  File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 331, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute\n    self.errorhandler(self, exc, value)\n', '  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler\n    raise errorclass, errorvalue\n', 'ProgrammingError: (ProgrammingError) (1146, "Table \'neutron.networkdhcpagentbindings\' doesn\'t exist") \'SELECT networkdhcpagentbindings.network_id AS networkdhcpagentbindings_network_id \\nFROM networkdhcpagentbindings \\nWHERE networkdhcpagentbindings.dhcp_agent_id = %s\' (\'9dab8d15-e6b3-4f1a-8084-de3cd07a563d\',)\n']


It seems some tables are absent from the Neutron database (in our case, the networkdhcpagentbindings table).

The table list in the Neutron database after the Foreman ML2 installation:

+------------------------------+
| Tables_in_neutron            |
+------------------------------+
| agents                       |
| alembic_version              |
| allowedaddresspairs          |
| arista_provisioned_nets      |
| arista_provisioned_tenants   |
| arista_provisioned_vms       |
| cisco_ml2_credentials        |
| cisco_ml2_nexusport_bindings |
| dnsnameservers               |
| externalnetworks             |
| floatingips                  |
| ipallocationpools            |
| ipallocations                |
| ipavailabilityranges         |
| ml2_flat_allocations         |
| ml2_gre_allocations          |
| ml2_gre_endpoints            |
| ml2_network_segments         |
| ml2_port_bindings            |
| ml2_vlan_allocations         |
| ml2_vxlan_allocations        |
| ml2_vxlan_endpoints          |
| networks                     |
| ports                        |
| quotas                       |
| routers                      |
| routes                       |
| securitygroups               |
| servicedefinitions           |
| servicetypes                 |
| subnets                      |
+------------------------------+
Comment 18 Jason Guiditta 2014-03-03 11:14:03 EST
Nir, I have actually seen this same behavior on someone else's deployment, and I believe it is something wrong with the neutron setup itself (vs. our code, at least for this part), but I have been unable to track it down yet.  I have 2 questions that would help me determine where the issue is coming from:

1. Have you had any better luck having packstack install neutron?  
2. Can you tell me what versions you have of packstack-modules-puppet, neutron, and neutron-ml2?
Comment 21 Nir Magnezi 2014-03-04 02:49:56 EST
(In reply to Jason Guiditta from comment #18)
> Nir, I have actually seen this same behavior on someone else's deployment,
> and I believe it is something wrong with neutron setup itself (vs our code,
> at least for this part), but I have been unable to track it down yet.  I
> have 2 questions that woudl help me determine where the issue is coming from:
> 
> 1. Have you had any better luck having packstack install neutron?

No, but for different reasons. See:
https://bugzilla.redhat.com/show_bug.cgi?id=1068962#c7
https://bugzilla.redhat.com/show_bug.cgi?id=1066519#c5

Nevertheless, when I configured my setup with ML2 manually[1], I did not encounter such issues.

> 2. Can you tell me what versions you have of packstack-modules-puppet,
> neutron, and neutron-ml2?

Unfortunately, the setup is no longer live. Yet since I used the 2014-02-28.3 Havana puddle, I would presume it was packstack-modules-puppet-2013.2.1-0.25.dev987.el6ost.noarch



[1] http://openstack.redhat.com/ML2_plugin
Comment 22 Hugh Brock 2014-03-05 13:19:16 EST
This will not make A3, pushing to A4.
Comment 23 John Eckersberg 2014-03-13 14:01:23 EDT
Here's why the networkdhcpagentbindings table is missing:

https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/versions/4692d074d587_agent_scheduler.py?id=00281819f3707372e14005b0be8363fca17bd4e1#n32

The ml2 plugin isn't in the array, so it gets skipped in the upgrade method, and thus the table doesn't exist.  If you run the migrations once with one of the supported plugins enabled (e.g. ovs_neutron), the table will get created, and changing plugins later will leave it intact.  Since the manual setup instructions state:

"Start with a working packstack installation with neutron and openvswitch." 

I'm pretty sure that's why it works with a manual install but not with foreman.

How to fix the problem, I don't exactly know.  Add the ml2 plugin to the migration plugin list?  I'm not familiar enough with neutron and its migrations to know for sure.
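The skip behavior described above can be sketched as follows. The names and plugin strings here are illustrative of the Havana-era migration guard pattern, not Neutron's exact code:

```python
# Each Havana-era Neutron alembic migration lists the plugins it applies to;
# the upgrade step is skipped when no active core plugin is in that list.
migration_for_plugins = [
    'neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2',
    # ML2 is absent from the list, so an ML2-only deployment skips this
    # migration and networkdhcpagentbindings is never created.
]

def should_run(active_plugins, migrate_plugins):
    # Run the migration only if some active plugin appears in its list.
    return bool(set(active_plugins) & set(migrate_plugins))

print(should_run(['neutron.plugins.ml2.plugin.Ml2Plugin'],
                 migration_for_plugins))  # → False
```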
Comment 24 Jason Guiditta 2014-03-14 10:53:39 EDT
I wonder, then, why it also seems to work for packstack?  I do remember Bob Kukura mentioning this bug, and we tried to use this fix (and the other one) at the time; however, I think we were hung up because of other issues.  After that, I must have forgotten about it, as it was at least several weeks ago.  I searched through my IRC logs, and the proposed fix (at least for the same kind of problem) was out a while ago, but is marked abandoned:

https://review.openstack.org/#/c/61677/

Not sure if that is any help, but at least some context.
Comment 25 John Eckersberg 2014-03-14 11:02:25 EDT
(In reply to Jason Guiditta from comment #24)
> I wonder then why it also seems to work for packstack?  I do remember Bob
> Kukura mentioning this bug, and we tried to use this fix (and the one other_
> at the time, however I think we were hung up because of other issues.  After
> that, I must have forgotten about it, as it was at least several weeks ago. 
> I searched through my irc logs, and the proposed fix (at least for the same
> kind of problem) was out a while ago, but is marked abandoned:
> 
> https://review.openstack.org/#/c/61677/
> 
> Not sure if that is any help, but at least some context.

Aye, I saw that abandoned one.  The one I was most recently looking at
is here:

https://bugs.launchpad.net/neutron/+bug/1260224
https://review.openstack.org/#/c/61663/

I got to that bug via this bug which was marked as a dupe:

https://bugs.launchpad.net/neutron/+bug/1264464

It all sounds related.
Comment 26 John Eckersberg 2014-03-17 11:49:50 EDT
I've followed the instructions from comment #17, on:

openstack-foreman-installer-1.0.5-1.el6ost.noarch

And on the controller:

openstack-neutron-2013.2.2-5.el6ost.noarch

The only deviation from the config in comment #17 was changing
ml2_network_vlan_ranges=["185:185"]
to
ml2_network_vlan_ranges=["ext_net:185:185"]

The former gives:

2014-03-15 01:10:59.841 1300 TRACE neutron.plugins.ml2.drivers.type_vlan NetworkVlanRangeError: Invalid network VLAN range: '185:185' - 'need more than 2 values to unpack'
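The unpack error comes from the expected entry format physical_network:vlan_min:vlan_max; a bare min:max entry has only two fields. A minimal illustration of the parsing (not Neutron's actual parser):

```python
def parse_vlan_range(entry):
    # Expected format: physical_network:vlan_min:vlan_max.  A bare
    # "min:max" entry yields only two fields and fails to unpack,
    # producing the "need more than 2 values to unpack" error above.
    physical_network, vlan_min, vlan_max = entry.split(':')
    return physical_network, int(vlan_min), int(vlan_max)

print(parse_vlan_range('ext_net:185:185'))  # → ('ext_net', 185, 185)
```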

In this setup, I get the following errors.

Instead of the 404 on router-list, I now get:

# neutron router-list
Request Failed: internal server error while processing your request.

And the traceback from the neutron server.log:

2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource Traceback (most recent call last):
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 273, in index
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     return self._items(request, True, parent_id)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 227, in _items
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     obj_list = obj_getter(request.context, **kwargs)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/extraroute_db.py", line 165, in get_routers
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     marker=marker, page_reverse=page_reverse)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 266, in get_routers
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     page_reverse=page_reverse)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/db/db_base_plugin_v2.py", line 196, in _get_collection
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     items = [dict_func(c, fields) for c in query]
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2227, in __iter__
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     return self._execute_and_instances(context)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2242, in _execute_and_instances
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     result = conn.execute(querycontext.statement, self._params)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1449, in execute
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     params)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     compiled_sql, distilled_params
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     context)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     context)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 331, in do_execute
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     cursor.execute(statement, parameters)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     self.errorhandler(self, exc, value)
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource   File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource     raise errorclass, errorvalue
2014-03-15 01:34:20.413 3403 TRACE neutron.api.v2.resource ProgrammingError: (ProgrammingError) (1146, "Table 'neutron.routerroutes' doesn't exist") 'SELECT routers.tenant_id AS routers_tenant_id, routers.id AS routers_id, routers.name AS routers_name, routers.status AS routers_status, routers.admin_state_up AS routers_admin_state_up, routers.gw_port_id AS routers_gw_port_id, routers.enable_snat AS routers_enable_snat, routerroutes_1.destination AS routerroutes_1_destination, routerroutes_1.nexthop AS routerroutes_1_nexthop, routerroutes_1.router_id AS routerroutes_1_router_id \nFROM routers LEFT OUTER JOIN routerroutes AS routerroutes_1 ON routers.id = routerroutes_1.router_id' ()

Confirmed there is no neutron.routerroutes table:

mysql> select table_name from information_schema.tables where table_schema = 'neutron' and table_name like '%route%';
+------------+
| table_name |
+------------+
| routers    |
| routes     |
+------------+
2 rows in set (0.00 sec)
Comment 27 Jason Guiditta 2014-03-17 12:39:48 EDT
What version of packstack-modules-puppet do you have?  The latest version was supposed to include a fix for the vlan range thing.  They were previously a bit overaggressive with validation, meaning you could not use provider networks.
Comment 28 John Eckersberg 2014-03-17 12:45:34 EDT
(In reply to Jason Guiditta from comment #27)
> What version of packstack-modules-puppet do you have?  The latest version
> was supposed to include a fix for the clan range thing.  They were
> previously a bit over aggressive with validation, meaning you could not use
> provider networks.

packstack-modules-puppet-2013.2.1-0.28.dev989.el6ost.noarch
Comment 29 John Eckersberg 2014-03-17 15:42:55 EDT
OK, I think I've dug to the root cause here:

https://bugs.launchpad.net/neutron/+bug/1288358/comments/3

The end result is neutron tables created with the MyISAM storage engine, and then table creation failing due to MyISAM's lack of foreign key support and/or foreign keys between tables on different storage engines.

I posted a workaround[1] to force InnoDB.  Repeated here:

cat <<EOF > /etc/mysql/conf.d/innodb.cnf 
[mysqld]
default-storage-engine = innodb
EOF

service mysqld restart

mysql -e 'drop database neutron; create database neutron;'

Then either run puppet agent -tv or neutron-db-manage to rerun the migrations and default all tables to InnoDB.

After that, I get an empty router list:

[root@control ~]# neutron router-list

[root@control ~]# 


[1] https://www.redhat.com/archives/rdo-list/2014-March/msg00068.html
Comment 30 John Eckersberg 2014-03-17 15:47:34 EDT
There's one more issue I almost forgot about.  I had to apply this patch to make the agents table get created:

https://review.openstack.org/#/c/79660/
Comment 31 John Eckersberg 2014-03-18 14:44:47 EDT
I've submitted a review upstream in neutron to correct the migrations so the tables get created using InnoDB:

https://review.openstack.org/81334
Comment 32 John Eckersberg 2014-04-08 09:04:03 EDT
I've split out the Neutron issue here:

https://bugzilla.redhat.com/show_bug.cgi?id=1085360

As far as I can tell, everything in foreman is working as expected.  It correctly installs and configures the ml2 plugin after the changes Jay made earlier in the bug.
Comment 33 John Eckersberg 2014-04-08 14:21:13 EDT
Closing this as CLOSED ERRATA, since Jay's fix to enable install/configuration within openstack-foreman-installer was part of RHBA-2014:0213-03.  The remaining issue is being tracked against Neutron.
