Bug 1031167 - Rerunning packstack with answer file fails with "Error: The ovs_redhat provider can not handle attribute external_ids" during puppet apply neutron.pp
Summary: Rerunning packstack with answer file fails with "Error: The ovs_redhat provider can not handle attribute external_ids" during puppet apply neutron.pp
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 4.0
Assignee: Ivan Chavero
QA Contact: Roey Dekel
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-15 18:45 UTC by jliberma@redhat.com
Modified: 2019-09-10 14:08 UTC
CC List: 15 users

Fixed In Version: openstack-packstack-2013.2.1-0.21.dev948.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-01-23 14:21:05 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Stack trace and error log associated with this packstack run. (8.96 KB, application/x-compressed-tar)
2013-11-15 18:45 UTC, jliberma@redhat.com
Answer file for packstack run (14.57 KB, text/plain)
2014-01-19 08:11 UTC, Roey Dekel


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 62488 0 None None None Never
OpenStack gerrit 62720 0 None None None Never
Red Hat Product Errata RHBA-2014:0046 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-01-23 00:51:59 UTC

Description jliberma@redhat.com 2013-11-15 18:45:48 UTC
Created attachment 824656 [details]
Stack trace and error log associated with this packstack run.

Description of problem: Rerunning packstack with answer file fails with "Error: The ovs_redhat provider can not handle attribute external_ids" during puppet apply neutron.pp

Version-Release number of selected component (if applicable):
openstack-packstack-2013.2.1-0.9.dev840.el6ost.noarch

How reproducible:
every time

Steps to Reproduce:
1. Deploy OSP4 via packstack with neutron (11.13 puddle)
2. Make a minor change to answer file such as adding another NTP server or compute node
3. Rerun packstack with same answer file

Actual results: Fails with error

Expected results: Reinstalls successfully

Additional info: The error appears to be related to OVS bridge/interface creation.

Comment 2 Ivan Chavero 2013-11-16 23:14:52 UTC
The current development branch does not have this problem. Have you tried openstack-packstack-2013.2.1-0.11.dev847.el6ost.noarch.rpm?

Comment 4 Scott Lewis 2013-11-19 16:54:26 UTC
Auto adding >= MODIFIED bugs to beta

Comment 5 Attila Darazs 2013-11-25 23:19:37 UTC
Additional information: I almost opened a new bug when I found this, so I am including the details in bug-report form, hopefully helping the fix.

Description of problem:
After upgrading from Grizzly to Havana and rerunning packstack to install some new parts, the neutron installation failed.

Version-Release number of selected component (if applicable):
openstack-packstack-2013.2.1-0.11.dev847.el6ost.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have a bridge mapping defined in CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS
2. Make sure the bridge has no external IDs: "ovs-vsctl br-get-external-id br-eth3" returns nothing
3. Run packstack with an answer file containing:

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=inter-vlan:100:120
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=inter-vlan:br-eth3
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth3:eth3

Actual results:
Error: /Stage[main]/Neutron::Agents::Ovs/Neutron::Plugins::Ovs::Bridge[inter-vlan:br-eth3]/Vs_bridge[br-eth3]/external_ids: change from  to bridge-id=br-eth3 failed: The ovs_redhat provider can not handle attribute external_ids

Expected results:
Packstack runs and sets the IDs.

Workaround:
You can make it pass by setting the ID manually on all nodes where the bridge is created:
$ ovs-vsctl br-set-external-id br-eth3 bridge-id br-eth3
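To apply the workaround on several nodes at once, a dry-run loop can generate the exact commands to run. This is a sketch under assumptions: the node names and the bridge name are placeholders for your environment, and the script only prints the commands rather than executing them.

```shell
#!/bin/sh
# Print the per-node workaround commands; NODES and BRIDGE are
# placeholders -- substitute your own hosts and bridge name.
NODES="node1 node2"
BRIDGE="br-eth3"
for node in $NODES; do
  echo "ssh root@$node ovs-vsctl br-set-external-id $BRIDGE bridge-id $BRIDGE"
done
```

Review the printed commands, then run them (or drop the `echo`) once they match your topology.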

Additional info:
I don't know Ruby and the Puppet magic that well, but the error is probably around the _split function, which uses the empty variable as a dict:
/usr/lib/python2.6/site-packages/packstack/puppet/modules/vswitch/lib/puppet/provider/vs_bridge/ovs_redhat.rb, lines 37-51.
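The suspected failure mode can be sketched in plain Ruby. This is a hypothetical illustration, not the provider's actual code: splitting an empty external-IDs string yields an empty collection, and code that assumes a populated hash then trips over it.

```ruby
# Hypothetical sketch of the suspected failure mode, NOT the real
# ovs_redhat.rb: an empty `ovs-vsctl br-get-external-id` result is
# split into key=value pairs, producing an empty hash.
raw = ""                                  # bridge has no external IDs yet
pairs = raw.split("\n").map { |line| line.split("=", 2) }.to_h
puts pairs.inspect                        # {}
begin
  pairs.fetch("bridge-id")                # code expecting a populated hash
rescue KeyError
  puts "no bridge-id key: KeyError"
end
```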

Comment 7 Ofer Blaut 2013-11-27 19:40:10 UTC
Tested on openstack-packstack-2013.2.1-0.11.dev847.el6ost.noarch

I changed the GRE ranges on the OVS setup I was using; no error is seen.

Comment 9 Rami Vaknin 2013-12-16 14:57:54 UTC
I still encounter this bug (hit it twice) on RHOS 4.0, 2013-12-12.1 puddle, openstack-packstack-2013.2.1-0.19.dev935.el6ost:

Applying 10.35.160.25_neutron.pp
                                                                                           [ ERROR ]

ERROR : Error appeared during Puppet run: 10.35.160.23_neutron.pp
Error: The ovs_redhat provider can not handle attribute external_ids
You will find full trace in log /var/tmp/packstack/20131216-164315-mLHq6e/manifests/10.35.160.23_neutron.pp.log
Please check log file /var/tmp/packstack/20131216-164315-mLHq6e/openstack-setup.log for more information



Could you please point out what the fix was here, or paste a link to the patch? Could you please also add a Fixed In Version?

Comment 10 Ivan Chavero 2013-12-16 18:56:57 UTC
I've tested this using openstack-packstack-2013.2.1-0.18.dev934 on RHEL 6.5 and I don't get the error. Can you describe how you ran packstack and send the answer file?

Comment 12 Terry Wilson 2013-12-16 22:52:32 UTC
This is pretty easy to reproduce. Just do:
1) packstack --allinone --dry-run
2) Edit the generated answer file to contain (with eth1 being any unused second interface):
   CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
   CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:1:1000
   CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
   CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
3) packstack --answer-file $answer_file
4) After the run completes successfully, do something like:
   puppet apply --modulepath=/usr/lib/python2.6/site-packages/packstack/puppet/modules/ /var/tmp/packstack/20131216-152554-zj9o7Q/manifests/192.168.122.212_neutron.pp

   using the appropriate paths for your install.

It should fail with the reported error. The problem is a misplaced 'private' in the ovs_redhat vs_bridge provider. See the linked gerrit review for the fix.
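The effect of a misplaced `private` can be shown with a minimal, self-contained Ruby sketch (a hypothetical class, not the actual provider code): every instance method defined after `private` becomes private, so a property setter that Puppet must call from outside silently drops out of the provider's public interface.

```ruby
# Hypothetical illustration of the bug class, NOT the real ovs_redhat code:
# in Ruby, `private` applies to every instance method defined after it.
class BridgeProvider
  def name
    "br-eth1"
  end

  private # misplaced: everything below is hidden from external callers

  def external_ids=(value) # meant to be a public property setter
    @external_ids = value
  end
end

provider = BridgeProvider.new
puts provider.name # public methods still work
begin
  provider.external_ids = "bridge-id=br-eth1"
rescue NoMethodError
  # Puppet surfaces this kind of failure as the provider being unable
  # to handle the attribute.
  puts "external_ids= is private: NoMethodError"
end
```

Moving the setter above the `private` keyword (which is what the gerrit fix effectively does) restores the public accessor.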

Comment 13 Ivan Chavero 2013-12-16 23:45:59 UTC
reviewing patch

Comment 14 Ivan Chavero 2013-12-17 00:14:20 UTC
A doc text was added to explain the problem while the patch is being accepted upstream.

Comment 15 Rami Vaknin 2013-12-17 09:34:04 UTC
(In reply to Ivan Chavero from comment #10)
> I've tested this using openstack-packstack-2013.2.1-0.18.dev934 on RHEL 6.5
> and i don't get the error. Can you specify how did you run packstack and
> send the answer file?

I assume you don't need that info anymore due to comments >= #11, but anyway, I think I've changed only:
CONFIG_HORIZON_SSL=n
to
CONFIG_HORIZON_SSL=y

Comment 23 Eric Harney 2013-12-17 16:47:12 UTC
I'm not sure the workaround in the Doc Text works.

# grep OVS_TENANT_NETWORK_TYPE packans.txt 
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=

# packstack --answer-file=packans.txt
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Parameter CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE failed validation: Given value is not allowed: 

ERROR : Failed handling answer file: Given value is not allowed: 
Please check log file /var/tmp/packstack/20131217-114719-43Cbnw/openstack-setup.log for more information

# rpm -q openstack-packstack
openstack-packstack-2013.2.1-0.20.dev936.el6ost.noarch

Comment 24 Terry Wilson 2013-12-17 17:16:26 UTC
Yes, it is only the BRIDGE_MAPPINGS/BRIDGE_IFACES that should affect this error.

Comment 25 Terry Wilson 2013-12-17 17:53:58 UTC
Linking a fix for packstack to switch to the upstream repository for puppet-vswitch. Since our fork was created off the github.com/packstack branch in the first place, upstream should have everything we need plus additional fixes. The puppet-vswitch module does not change much, so I'm not sure there is really much need to maintain our own fork at this point.

Comment 27 Terry Wilson 2013-12-17 18:56:47 UTC
Alan: all of my test instances are currently running fixed code, so I can't easily verify that the workaround works. Whoever wrote the original doc text for the workaround should test it and update it. Or, we could merge the linked fix, do a build, and not worry about workarounds at all. :p

Comment 31 navyfish 2013-12-22 02:14:16 UTC
I also encountered this problem. You can use the "ovs-vsctl" tool to delete all the ports and bridges; in a multi-node setup, this must be done on each node.
It took me at least three attempts to find this solution because of the multiple nodes.
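That cleanup can be sketched as a dry-run loop. The bridge names below are placeholders (in a real run you would feed the loop from `ovs-vsctl list-br` and repeat it on every node); the script only prints the commands, and `del-br` also removes the bridge's ports.

```shell
#!/bin/sh
# Dry-run sketch: print the cleanup command for each bridge.
# Placeholder names -- substitute the output of `ovs-vsctl list-br`.
BRIDGES="br-ex br-int br-eth1"
for br in $BRIDGES; do
  echo "ovs-vsctl --if-exists del-br $br"   # del-br also drops the bridge's ports
done
```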

Comment 33 Ivan Chavero 2014-01-09 19:18:25 UTC
contacted mmagr to assist us with the package building

Comment 34 Roey Dekel 2014-01-16 10:25:48 UTC
I reproduced Terry Wilson's steps and got a different error.
Does this make the bug verified? Is this another bug?

The command and its output:
[root@rose11 ~]# puppet apply --modulepath=/usr/lib/python2.6/site-packages/packstack/puppet/modules/ /var/tmp/packstack/20140116-114634-YabX3Z/manifests/10.35.99.5_neutron.pp
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Warning: Scope(Class[Neutron::Server]): sql_connection deprecated for connection
Warning: Scope(Class[Neutron::Server]): sql_max_retries deprecated for max_retries
Warning: Scope(Class[Neutron::Server]): sql_idle_timeout deprecated for idle_timeout
Warning: Scope(Class[Neutron::Server]): reconnect_interval deprecated for retry_interval
Notice: /Stage[main]/Neutron::Plugins::Ovs/Neutron_plugin_ovs[OVS/network_vlan_ranges]/ensure: removed
Notice: /Stage[main]/Neutron::Plugins::Ovs/Neutron_plugin_ovs[OVS/tenant_network_type]/value: value changed 'vlan' to 'local'
Notice: /Stage[main]/Neutron::Server/Neutron_config[keystone_authtoken/admin_password]/value: value changed 'acb8c50fc43d4f53' to 'f434f6d910eb4e89'
Notice: /Stage[main]/Neutron::Server/Neutron_api_config[filter:authtoken/admin_password]/value: value changed 'acb8c50fc43d4f53' to 'f434f6d910eb4e89'
Notice: /Stage[main]/Neutron::Server/Neutron_config[database/connection]/value: value changed 'mysql://neutron:f7484422635b445b.99.5/ovs_neutron' to 'mysql://neutron:151a46f6b394438e.99.5/ovs_neutron'
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns: No handlers could be found for logger "neutron.common.legacy"
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns: Traceback (most recent call last):
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/bin/neutron-db-manage", line 10, in <module>
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     sys.exit(main())
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 143, in main
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     CONF.command.func(config, CONF.command.name)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 80, in do_upgrade_downgrade
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 59, in do_alembic_command
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     getattr(alembic_command, cmd)(config, *args, **kwargs)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/alembic/command.py", line 124, in upgrade
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     script.run_env()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/alembic/script.py", line 191, in run_env
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     util.load_python_file(self.dir, 'env.py')
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/alembic/util.py", line 186, in load_python_file
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     module = imp.load_source(module_id, path, open(path, 'rb'))
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py", line 105, in <module>
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     run_migrations_online()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py", line 80, in run_migrations_online
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     connection = engine.connect()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2472, in connect
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return self._connection_cls(self, **kwargs)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 878, in __init__
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     self.__connection = connection or engine.raw_connection()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2558, in raw_connection
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return self.pool.unique_connection()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 183, in unique_connection
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return _ConnectionFairy(self).checkout()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 387, in __init__
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     rec = self._connection_record = pool._do_get()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 802, in _do_get
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return self._create_connection()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 188, in _create_connection
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return _ConnectionRecord(self)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 270, in __init__
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     self.connection = self.__connect()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/pool.py", line 330, in __connect
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     connection = self.__pool._creator()
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/strategies.py", line 80, in connect
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return dialect.connect(*cargs, **cparams)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 281, in connect
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return self.dbapi.connect(*cargs, **cparams)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/MySQLdb/__init__.py", line 81, in Connect
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     return Connection(*args, **kwargs)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:   File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 187, in __init__
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns:     super(Connection, self).__init__(*args, **kwargs2)
Notice: /Stage[main]//Exec[neutron-db-manage upgrade]/returns: sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for user 'neutron'@'rose11.qa.lab.tlv.redhat.com' (using password: YES)") None None
Error: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head returned 1 instead of one of [0]
Error: /Stage[main]//Exec[neutron-db-manage upgrade]/returns: change from notrun to 0 failed: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head returned 1 instead of one of [0]
Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/value: value changed '25ab3309b94c4898' to '8db3cb1c4da249a4'
Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/admin_password]/value: value changed 'acb8c50fc43d4f53' to 'f434f6d910eb4e89'
Notice: /Stage[main]/Neutron::Agents::Ovs/Service[neutron-plugin-ovs-service]: Triggered 'refresh' from 4 events
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Dependency Exec[neutron-db-manage upgrade] has failures: true
Warning: /Stage[main]/Neutron::Server/Service[neutron-server]: Skipping because of failed dependencies
Error: /Stage[main]/Neutron::Server/Service[neutron-server]: Failed to call refresh: Could not restart Service[neutron-server]: Execution of '/sbin/service neutron-server restart' returned 6: 
Error: /Stage[main]/Neutron::Server/Service[neutron-server]: Could not restart Service[neutron-server]: Execution of '/sbin/service neutron-server restart' returned 6: 
Notice: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Triggered 'refresh' from 4 events
Notice: Finished catalog run in 3.15 seconds

Comment 35 Jason Guiditta 2014-01-16 21:15:32 UTC
Not sure what happened there; I tried to add myself to the CC list and a bunch of flags/settings changed. I will try to change them back now.

Comment 36 Ivan Chavero 2014-01-17 23:45:05 UTC
The patches have been merged and packaged, so the doc text workaround is no longer required; removing it.

Comment 37 Roey Dekel 2014-01-19 08:11:17 UTC
Created attachment 852269 [details]
Answer file for packstack run

This answer file encountered an error when used to run packstack.

Comment 38 Roey Dekel 2014-01-19 08:13:39 UTC
Tried to verify on Havana with:

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
puddle: 2014-01-16.1
openstack-neutron-openvswitch-2013.2.1-4.el6ost.noarch
python-neutronclient-2.3.1-2.el6ost.noarch
python-neutron-2013.2.1-4.el6ost.noarch
openstack-neutron-2013.2.1-4.el6ost.noarch

Scenario:
---------
Tried to set up an environment with 1 public VLAN and 2 private tenant VLANs.

Steps:
------
1. packstack --allinone --dry-run
2. reboot
3. vim /etc/sysconfig/network-scripts/ifcfg-eth1.233
4. service network restart
5. packstack --allinone --dry-run
6. reboot
7. packstack --answer-file=FILE

Error:
------
Applying 10.35.99.5_neutron.pp
                                                                                         [ ERROR ]

ERROR : Error appeared during Puppet run: 10.35.99.5_neutron.pp
Error: /Stage[main]//Vs_port[eth1]: Could not evaluate: Execution of '/usr/bin/ovs-vsctl list-ports br-eth1' returned 1: ovs-vsctl: no bridge named br-eth1
You will find full trace in log /var/tmp/packstack/20140119-093028-t6A8Rj/manifests/10.35.99.5_neutron.pp.log

Comments:
---------
1. After each packstack run I rebooted to validate the kernel update.
2. Step 5 might be unnecessary, but it validates step 1.
3. The answer file used in Step 7 is attachment 852269 [details]

Comment 39 Roey Dekel 2014-01-19 09:14:44 UTC
Had a problem with the answer file (attachment 852269 [details])

Verified on Havana with:

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
puddle: 2014-01-16.1
openstack-neutron-openvswitch-2013.2.1-4.el6ost.noarch
python-neutronclient-2.3.1-2.el6ost.noarch
python-neutron-2013.2.1-4.el6ost.noarch
openstack-neutron-2013.2.1-4.el6ost.noarch

Scenario:
---------
Set up an environment with 1 public VLAN and 2 private tenant VLANs. Booted an instance on a private VLAN and associated it with a floating IP. Verified ingress and egress connectivity (via ping).

Steps:
------
1. packstack --allinone --dry-run
2. reboot
3. vim /etc/sysconfig/network-scripts/ifcfg-eth1.233
4. service network restart
5. packstack --allinone --dry-run
6. reboot
7. packstack --answer-file=FILE
8. Setup instance in private VLAN
9. Associate floating-IP to instance.
10. Verify ingress and egress connection.

comments:
---------
1. After each packstack run I rebooted to validate the kernel update.
2. Step 5 might be unnecessary, but it validates step 1.
3. I assume that a working connection shows proper installation of the interfaces; hence, a working ping shows that the external IDs were set by packstack.

Comment 42 Lon Hohberger 2014-02-04 17:19:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html

