Red Hat Bugzilla – Bug 970283
[RFE] Add a hostgroup to Foreman for deploying an OpenStack Controller node with Neutron
Last modified: 2016-04-26 10:26:39 EDT
The version of Foreman that will ship in RHOS 3.0 will have built-in support for two "hostgroups" (i.e. templates with which machines can be associated):
1) All-in-one controller node: includes all of the basic OpenStack services (nova, keystone, glance, horizon, etc.). Uses nova-networking.
2) Compute node: includes services sufficient to allow the machine to be used as a host for compute instances.
So there is no out-of-the-box support for Quantum, though customers are free to use the base puppet classes to create their own hostgroup which deploys Quantum-related services. Note: such hostgroups have not been tested as part of RHOS 3.0.
When discussing how to support Quantum, we came up with the following options:
> Hostgroups Option 1)
> a) all-in-one controller node with nova networking (but with the ability,
> through Foreman variables, to disable nova networking)
> b) quantum server node
> c) compute node
> Hostgroups Option 2)
> a) all-in-one controller node with nova networking (just what we have today)
> b) all-in-one controller node with quantum (nova networking is not
> present/permanently disabled)
> c) compute node
> Some potential customer deployment scenarios could be:
> One machine with 1a) and lots with 1c)
> One machine with 1a) (disable nova networking) and one machine with 1b) and
> lots with 1c)
> One machine 2a) and lots with 2c)
> One machine 2b) and lots with 2c)
RE: my comment 1, given this is not a GA target, I think we should go with option 1) here, since it matches the expected Quantum deployment model:
Setting priority to high for RHOS 4 and dropping the 3.0.z flag
From https://home.corp.redhat.com/wiki/rdo-usability-sprint, the general scope of the work is specified here:
"Use Foreman to deploy RDO with a L2/L3 Host Group ("Networking Node") and Neutron"
i.e. a single host group for Neutron, along with another general controller hostgroup
This functionality has been merged upstream (Havana) but has not yet been released as an RPM.
It will get another rev soon to make it more automated, but the functionality is in place in this release.
Adding a comment from firstname.lastname@example.org, after he checked the environment from comment #15:
Looking at Omri's setup, it seems that Foreman deployed the following (I think Foreman should provide status about the OpenStack services):
1. The default tenant network type is OVS + GRE (which is not the default in packstack, i.e. vlan)
2. No interfaces are attached for GRE tunnels (as in packstack), so nothing is added to OVS. Who should create these interfaces? Foreman?
3. The main Neutron services (DHCP/L3/metadata/OVS agents) are inactive on the controller
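The three observations above can be reproduced with a few commands on the controller. This is a hedged sketch for a RHOS/RDO-era install; the plugin config path and service names are assumptions and may differ on your system.

```shell
# 1. Check the tenant network type configured for the OVS plugin
#    (/etc/neutron/plugin.ini is the usual RHOS symlink to the plugin config)
grep -E 'tenant_network_type|enable_tunneling' /etc/neutron/plugin.ini

# 2. Check whether any GRE tunnel ports were actually added to OVS
ovs-vsctl show

# 3. Check which Neutron agents are registered and alive
neutron agent-list
service neutron-dhcp-agent status
service neutron-l3-agent status
service neutron-metadata-agent status
```

On the setup described above, step 3 is expected to show the agents inactive on the controller, since they live on the separate networker node.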
There should be basic testing after the Foreman deployment:
1. Each VM must get an IP address and be able to ping its DHCP/router IPs
2. SSH (after enabling it in the security group) must work with a key pair (validates that the metadata server works)
3. Ping from a VM to the Internet must work (a default external network must be added)
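The three checks above could be scripted roughly as follows. This is only a sketch using the CLI clients of that era; IMAGE, KEY, NET_ID, and FLOATING_IP are placeholders for your environment, and the cirros login user is an assumption about the test image.

```shell
# Precondition for test 2: allow ICMP and SSH in the default security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# Test 1: boot a VM; it should get an IP on the tenant network via DHCP
nova boot --flavor m1.tiny --image "$IMAGE" --key-name "$KEY" \
    --nic net-id="$NET_ID" smoke-test-vm

# Test 2: SSH with the key pair (also exercises the metadata server,
# since the public key is injected via the metadata service)
ssh -i "$KEY".pem cirros@"$FLOATING_IP" true

# Test 3: outbound connectivity via the external network
ssh -i "$KEY".pem cirros@"$FLOATING_IP" ping -c 3 8.8.8.8
```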
Can you specify what is requested here? None of the above look like actual errors, just things that were unclear to the tester. I will address the two blocks of questions 1-3 in order; then you can let me know whether that answers the questions. Our implementation of this feature is, from the original request detail, 'Hostgroups Option 1)', with the exception that there is currently no flag to toggle nova networking vs. Neutron.
1. VLAN was not specifically requested when this was being built, so the focus was on GRE. We can look at adding VLAN support, but I do not think that belongs in this BZ; it should be tracked separately.
2. The user is expected to have the OpenStack public and private networks set up, and the hosts being deployed should already have IPs on those networks. This could certainly be a candidate for further automation later, but NIC configuration is out of scope for what was requested.
3. This is expected: those agents do not live on the controller; they are on the separate 'networker' host group/node.
These look like reasonable validation steps for the full setup. Note that the standard setup of actual tenant networks is still manual, as it was with packstack; we cannot support unlimited/unknown network needs for the user. I am not sure this part will ever quite fit into Foreman, so for the time being at least, I think it makes sense to leave it as is. Note there are a number of howtos on setting up the tenant networks (and security rules) on the RDO wiki, such as:
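The manual tenant-network setup mentioned above typically looks something like the following, in the spirit of the RDO wiki howtos. This is a sketch only; the network names, CIDRs, and allocation pool are illustrative, not values this deployment requires.

```shell
# Tenant (private) network and subnet
neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/24

# Router connecting the tenant subnet to the outside world
neutron router-create router1
neutron router-interface-add router1 private-subnet

# External network (the "default external network" needed for test 3)
neutron net-create public --router:external=True
neutron subnet-create --name public-subnet --disable-dhcp \
    --allocation-pool start=192.168.100.10,end=192.168.100.50 \
    public 192.168.100.0/24
neutron router-gateway-set router1 public
```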
I am happy to help try to work through any issues/questions on irc if that is easier, my nick is jayg
Please let me know what, if anything, is needed to flip this back to ON_QA.
Capturing discussion that happened during triage call (12/5) w/ Perry & Steve that clarified a number of things:
Overarching comment: No requirement for this to be in sync with packstack
1. GRE is explicitly preferable to VLAN until VXLAN becomes available.
2. This should be external to foreman as per Jason's comment in 17.
3. Expected as per Jason's comment in 17 -- Jason, can you write up Doc Text about the setup to make it clear which host group is needed to host the Neutron control-type services.
1. Agree with the test criteria. Again with the caveat that Jason G raised in comment 17.
The setup done via packstack is essentially in 'demo mode'.
Once the Doc Text is there, let's flip back to ON_QA, but be prepared for dev, QE, and docs to collaborate to make sure the required setup is clear and everyone is comfortable that Neutron networking is set up as expected.
I'm setting this RFE bug to VERIFIED.
Neutron host groups are already included in the RHOS 4.0 RC puddle 2013-12-23.1 (with openstack-foreman-installer-1.0.1-1.el6ost.noarch.rpm).
Other specific issues regarding Neutron deployment using Foreman will be reported separately in different bugs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.