Bug 1123393 - RFE: Rubygem-Staypuft: Deployment of neutron network node fails due to missing ip address
Summary: RFE: Rubygem-Staypuft: Deployment of neutron network node fails due to missing ip address
Keywords:
Status: CLOSED DUPLICATE of bug 1122726
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Severity: high
Priority: high
Target Milestone: ga
Sub Component: Installer
Assignee: Scott Seago
QA Contact: Omri Hochman
URL:
Whiteboard:
Duplicates: 1122302 1125217
Depends On:
Blocks:
 
Reported: 2014-07-25 13:56 UTC by Alexander Chuzhoy
Modified: 2014-08-05 18:16 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-05 18:16:17 UTC


Attachments
* /var/log/messages file from the staypuft machine (1.21 MB, text/plain), 2014-07-25 13:58 UTC, Alexander Chuzhoy
* production.log from the staypuft host (229.52 KB, text/x-log), 2014-07-25 13:58 UTC, Alexander Chuzhoy
* /var/log/messages file from the compute node (169.77 KB, text/plain), 2014-07-25 14:01 UTC, Alexander Chuzhoy

Description Alexander Chuzhoy 2014-07-25 13:56:32 UTC
Rubygem-Staypuft: Deployment of HA Neutron with VXLAN on bare metal gets stuck at 60% while installing the controllers.

Environment: 
openstack-puppet-modules-2014.1-19.3.el6ost.noarch
openstack-foreman-installer-2.0.16-1.el6ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
rhel-osp-installer-0.1.4-1.el6ost.noarch


Steps to reproduce:
1. Install rhel-osp-installer.
2. Configure/run an HA deployment of neutron network with Vxlan tenant network type, em1 for Network Nodes and em2 for compute nodes.

Result:
The deployment pauses with errors at 60% while installing the controllers; I saw no Puppet errors in the UI. After resuming the deployment several hours later (in the morning), it continued and then paused with errors at 60% while installing the compute node. The Puppet error I see on the compute node is:

Could not retrieve catalog from remote server: Error 400 on SERVER: Local ip for ovs agent must be set when tunneling is enabled at /etc/puppet/environments/production/modules/neutron/manifests/agents/ovs.pp:32 on node <nodename>


Expected result:
Deployment successfully completed 100%.
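
The ovs.pp check behind the error above fires when tunneling (here, VXLAN) is enabled but no local tunnel endpoint IP could be resolved for the node. On a node that configured successfully, the OVS agent settings end up looking roughly like the fragment below (the section layout, file path, and IP shown are illustrative assumptions, not taken from this deployment; exact paths differ between OSP releases):

```ini
# Illustrative OVS agent plugin config, e.g. /etc/neutron/plugin.ini
[ovs]
enable_tunneling = True
tunnel_type = vxlan
# local_ip must be an address actually assigned to the node's tunnel NIC.
# If that NIC never received an IP (the failure in this bug), Puppet has
# nothing to put here and aborts catalog compilation with Error 400.
local_ip = 192.0.2.11
```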

Comment 1 Alexander Chuzhoy 2014-07-25 13:58:35 UTC
Created attachment 920991 [details]
/var/log/messages file from the staypuft machine.

Comment 2 Alexander Chuzhoy 2014-07-25 13:58:57 UTC
Created attachment 920992 [details]
production.log from the staypuft host.

Comment 3 Alexander Chuzhoy 2014-07-25 14:01:28 UTC
Created attachment 920995 [details]
/var/log/messages file from the compute node.

Comment 6 Omri Hochman 2014-07-27 08:00:41 UTC
It looks like the same error as in https://bugzilla.redhat.com/show_bug.cgi?id=1122302:

"ip for ovs agent must be set when tunneling is enabled at /etc/puppet/environments/production/modules/neutron/manifests/agents/ovs.pp:32 on "

Comment 7 Mike Burns 2014-07-31 12:10:01 UTC
*** Bug 1122302 has been marked as a duplicate of this bug. ***

Comment 8 Mike Burns 2014-07-31 12:12:08 UTC
*** Bug 1125217 has been marked as a duplicate of this bug. ***

Comment 9 Mike Burns 2014-07-31 12:15:06 UTC
The current workaround is to do one of the following:

* enable dhcp on the subnet that is not getting an ip address
* configure the nic with a static ip in the kickstart

The real solution is network provisioning, which is coming in a future version.
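
For the second workaround, the static address can be assigned via a kickstart `network` directive. A minimal sketch follows; the device name and all addresses are placeholders to adjust for your environment:

```
# Illustrative kickstart line -- device and addresses are examples only
network --device=em2 --bootproto=static --ip=192.0.2.11 \
        --netmask=255.255.255.0 --gateway=192.0.2.1 --onboot=yes
```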

Comment 10 Ofer Blaut 2014-07-31 13:36:57 UTC
(In reply to Mike Burns from comment #9)
> Current workaround is to do either:
> 
> * enable dhcp on the subnet that is not getting an ip address
Who is the DHCP server in this case?
> * configure the nic with a static ip in the kickstart

Can you please add information on how to do it?

Thanks 
> 
> Real solution is network provisioning which is coming in a future version.

Comment 11 Jaroslav Henner 2014-07-31 13:50:07 UTC
I am not sure this helps, but I have this problem on a deployment where I did NOT check "configure external network on the networking node" (or something like that).

Comment 12 Jaroslav Henner 2014-07-31 13:51:39 UTC
All of the NICs on my networking node have IPv4 addresses.

Comment 13 Jaroslav Henner 2014-07-31 17:34:59 UTC
This is probably caused by the discovery OS using only eth* interface names, while the real OS installed after clicking Deploy uses predictable names (ens*) for the Realtek interfaces. Setting all the interfaces to virtio before starting the VMs works around it.

Comment 14 Alexander Chuzhoy 2014-07-31 17:55:19 UTC
These "weird" names will pop up on RHEL 7, particularly on bare metal (unless you use the kernel-argument workaround).

Either way, you have to identify each interface and know which network it is connected to, even if it's called ethX.
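
The kernel-argument workaround referred to here is presumably the standard RHEL 7 switch for reverting to legacy ethX naming; it must be on the kernel command line of the provisioned OS (e.g. appended in the PXE or grub entry) before the interfaces are first named:

```
# Disables predictable (ens*/enp*) and biosdevname (em*/p*p*) naming,
# falling back to kernel-assigned ethX names:
net.ifnames=0 biosdevname=0
```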

Comment 16 Mike Burns 2014-08-05 18:16:17 UTC
There are two issues mentioned in this bug:

1. NIC naming differs between the discovery image and RHEL 7 -- this is documented now and will be solved by bug 1122726.

2. Interfaces without IPs -- this will be handled by future network provisioning (multiple bugs are open on this).

Closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 1122726 ***

