Bug 970283 - [RFE] Add a hostgroup to Foreman for deploying an OpenStack Controller node with Neutron
Summary: [RFE] Add a hostgroup to Foreman for deploying an OpenStack Controller node...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 4.0
Assignee: Jason Guiditta
QA Contact: Omri Hochman
URL:
Whiteboard:
Depends On:
Blocks: RHOS40RFE 988451
 
Reported: 2013-06-03 21:28 UTC by Charles Crouch
Modified: 2016-04-26 14:26 UTC
11 users

Fixed In Version: openstack-foreman-installer-1.0.2-1.el6ost
Doc Type: Enhancement
Doc Text:
Networking services, such as DHCP, depend on the 'Neutron Networker' host group. A dedicated host is recommended for the 'Neutron Networker' host group.
Clone Of:
Clones: 988451
Environment:
Last Closed: 2014-01-23 14:21:37 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0046 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-01-23 00:51:59 UTC

Description Charles Crouch 2013-06-03 21:28:01 UTC
The version of Foreman that will ship in RHOS 3.0 will have built-in support for two "hostgroups" (i.e. templates with which machines can be associated):

1) All-in-one controller node: Includes all of the basic OpenStack services: nova, keystone, glance, horizon etc. Uses nova-networking.
2) Compute node: Includes services sufficient to allow the machine to be used as a host for compute instances

So there is no out-of-the-box support for Quantum, though customers are free to use the base puppet classes to create their own hostgroup that deploys Quantum-related services. Note: such hostgroups have not been tested as part of RHOS 3.0.
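As a rough sketch of what such a custom hostgroup might look like from the CLI (note: the hammer CLI only appeared around Foreman 1.3, so this may not apply to the RHOS 3.0 build, and the hostgroup name and environment below are illustrative, not shipped defaults):

```shell
# Hypothetical sketch: create an empty custom hostgroup from the CLI.
# The base Quantum puppet classes would then be attached to it through
# the Foreman UI. Name and environment are examples only.
hammer hostgroup create \
    --name "Quantum Networker" \
    --environment production
```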

When discussing how to support Quantum, we came up with the following options:

> Hostgroups Option 1)
> a) all-in-one controller node with nova networking (but with the ability,
> through Foreman variables, to disable nova networking)
> b) quantum server node
> c) compute node
> 
> Hostgroups Option 2)
> a) all-in-one controller node with nova networking (just what we have today)
> b) all-in-one controller node with quantum (nova networking is not
> present/permanently disabled)
> c) compute node
> 
> Some potential customer deployment scenarios could be:
> 
> One machine with 1a) and lots with 1c)
> or
> One machine with 1a) (disable nova networking) and one machine with 1b) and
> lots with 1c)
> 
> alternatively:
> 
> One machine 2a) and lots with 2c)
> One machine 2b) and lots with 2c)
>

Comment 5 Charles Crouch 2013-06-12 04:05:14 UTC
Re: my comment 1: given this is not a GA target, I think we should go with option 1) here, since it matches the expected Quantum deployment model:

http://post-office.corp.redhat.com/archives/rh-openstack-dev/2013-May/msg00475.html

Comment 6 Charles Crouch 2013-08-19 15:17:09 UTC
Setting priority to high for RHOS4 and dropping 3.0.z flag

Comment 7 Charles Crouch 2013-08-20 14:45:00 UTC
From https://home.corp.redhat.com/wiki/rdo-usability-sprint, the general scope of the work is specified here:

"Use Foreman to deploy RDO with a L2/L3 Host Group ("Networking Node") and Neutron"

i.e. a single host group for Neutron, along with another general controller hostgroup

Comment 10 Jason Guiditta 2013-09-12 18:20:26 UTC
This functionality has been merged upstream (havana), but not yet released as an RPM.

Comment 14 Jason Guiditta 2013-10-23 17:42:22 UTC
This will get another rev soon, making it more automated, but the functionality is in place in this release.

Comment 16 Omri Hochman 2013-11-20 19:00:27 UTC
Adding a comment from oblaut, after he checked the environment from comment #15:

Hi

Looking at Omri's setup, it seems that Foreman did deploy the following (I think Foreman should provide status about OpenStack services):

1. The default tenant network type is OVS + GRE (which is not the default in packstack, i.e. vlan)
2. No interfaces are attached for GRE tunnels (as in packstack), so nothing is added to OVS. Who should create these interfaces? Foreman?
3. Main Neutron services, like the DHCP/L3/metadata/OVS agents, are inactive on the controller
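The observations above can be checked with a few commands on a deployed node (a sketch; it assumes an RHOS 4 node with openvswitch and the Neutron agent packages installed, run as root):

```shell
# Diagnostic sketch for the three observations above.
check_networker() {
    ovs-vsctl list-br          # expect br-int, plus br-tun once GRE tunnels exist
    ovs-vsctl show             # GRE ports appear here when tunnels are wired up
    for svc in neutron-dhcp-agent neutron-l3-agent \
               neutron-metadata-agent neutron-openvswitch-agent; do
        service "$svc" status  # these agents run on the networker node, not the controller
    done
}
# check_networker   # uncomment on the node under test
```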

There should be basic testing after the Foreman deployment:

1. Each VM must get an IP address and be able to ping its DHCP/router IPs
2. SSH (after enabling it in the security group) must work with a key pair (validates that the metadata server works)
3. Ping from a VM to the Internet must work (a default external network must be added)

Ofer
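The basic checks above can be scripted as a small smoke test; a sketch follows, where VM_IP, ROUTER_IP, the key path, and the cirros user are placeholder assumptions to substitute for a real deployment:

```shell
#!/bin/sh
# Sketch of the post-deployment smoke test suggested above.
# VM_IP, ROUTER_IP and KEY are placeholders -- substitute real values.
VM_IP=${VM_IP:-10.0.0.5}
ROUTER_IP=${ROUTER_IP:-10.0.0.1}
KEY=${KEY:-$HOME/.ssh/tenant_key}

report() {
    # report NAME STATUS -> prints "NAME: OK" when STATUS is 0, else "NAME: FAIL"
    if [ "$2" -eq 0 ]; then echo "$1: OK"; else echo "$1: FAIL"; fi
}

run_checks() {
    ping -c 3 -W 2 "$ROUTER_IP" >/dev/null 2>&1; report "router ping" $?
    ping -c 3 -W 2 "$VM_IP"     >/dev/null 2>&1; report "vm ping" $?
    ping -c 3 -W 2 8.8.8.8      >/dev/null 2>&1; report "external ping" $?
    # Key-based SSH also confirms the metadata server delivered the key pair.
    ssh -i "$KEY" -o StrictHostKeyChecking=no "cirros@$VM_IP" true \
        >/dev/null 2>&1; report "ssh" $?
}

# run_checks   # uncomment to run against a live deployment
```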

Comment 17 Jason Guiditta 2013-12-02 14:29:03 UTC
Can you specify what is requested here? None of the above look like actual errors, just things that were unclear to the tester. I address the two blocks of questions 1-3 in order below; let me know whether that answers the questions. Our implementation of this feature is, from the original request detail, 'Hostgroups Option 1)', with the exception that there is currently no flag to toggle nova networking vs. Neutron.

List 1:
1. VLAN was not requested specifically when this was being built, so the focus was on GRE. We can look to add VLAN support, but I do not think that belongs in this BZ; it should be tracked separately.
2. The user is expected to have the OpenStack public and private networks set up, and the hosts being deployed should already have IPs on those networks. This can certainly be a candidate for further automation later, but NIC configuration is out of scope for what was requested.
3. This is expected; those agents do not live on the controller - they are on the separate 'networker' host group/node.

List 2:
These look like reasonable validation steps for the full setup. Note that the standard setup of actual tenant networks is still manual, as it was with packstack - we cannot support unlimited/unknown network needs for the user. I am not sure this part will ever quite fit into Foreman, so for the time being at least, I think it makes sense to leave it as is. Note there are a number of howtos on setting up tenant networks (and security rules) on the RDO wiki, such as:
http://openstack.redhat.com/Using_GRE_Tenant_Networks
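For completeness, the manual tenant-network setup referenced here typically looks something like the following with the Havana-era neutron client (a sketch; the network/router names and CIDRs are examples, not shipped defaults - see the RDO howto linked above for the full GRE walkthrough):

```shell
# Example manual tenant-network setup (names and CIDRs illustrative).
neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/24
neutron router-create router1
neutron router-interface-add router1 private-subnet

# External network and gateway for outbound/floating-IP traffic.
neutron net-create public --router:external=True
neutron subnet-create --name public-subnet --disable-dhcp \
    --allocation-pool start=172.24.4.10,end=172.24.4.100 \
    public 172.24.4.0/24
neutron router-gateway-set router1 public

# Allow ping and SSH in the default security group (cf. the test criteria).
neutron security-group-rule-create --protocol icmp default
neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 default
```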

I am happy to help work through any issues/questions on IRC if that is easier; my nick is jayg.

Please let me know what, if anything, is needed to flip this back to ON_QA.

Comment 18 Mike Orazi 2013-12-05 19:31:59 UTC
Capturing discussion from the triage call (12/5) with Perry & Steve that clarified a number of things:

Overarching comment:  No requirement for this to be in sync with packstack

List 1:

1.  GRE is explicitly preferable to VLAN until VXLAN becomes available.
2.  This should be external to Foreman, as per Jason's comment 17.
3.  Expected, as per Jason's comment 17 -- Jason, can you write up Doc Text about the setup to make sure it is clear which host group needs to run the Neutron control-type services?

List 2:

1.  Agree with the test criteria, again with the caveat that Jason G raised in comment 17.

The setup done via packstack is essentially in 'demo mode'.

Once the doc text is there, let's flip back to ON_QA, but be prepared for dev, QE, and docs to collaborate to make sure the required setup is clear and everyone is comfortable that Neutron networking is set up as expected.

Comment 22 Omri Hochman 2014-01-13 09:49:49 UTC
I'm setting this RFE bug to VERIFIED.

Neutron host groups are already included in the RHOS 4.0 RC puddle 2013-12-23.1 (with openstack-foreman-installer-1.0.1-1.el6ost.noarch.rpm).

Other specific issues regarding Neutron deployment using Foreman will be reported separately in different bugs.

Comment 25 Lon Hohberger 2014-02-04 17:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html

