Bug 1299609 - [Docs] [Director] Some node type hardware requirements are incorrect
Summary: [Docs] [Director] Some node type hardware requirements are incorrect
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ga
Target Release: 8.0 (Liberty)
Assignee: Dan Macpherson
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-18 18:33 UTC by Ben Nemec
Modified: 2016-04-13 04:46 UTC
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-13 04:46:41 UTC
Target Upstream Version:
Embargoed:



Description Ben Nemec 2016-01-18 18:33:36 UTC
Description of problem: The current documentation at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Overcloud_Requirements.html#sect-Compute_Node_Requirements says Compute and Ceph nodes must have two NICs. However, the single-nic-vlans network configuration that is referenced later only requires a single NIC on both of those node types. We should change the requirement to one NIC for both of these node types.



Additional info: See the compute template in question here: https://github.com/openstack/tripleo-heat-templates/blob/master/network/config/single-nic-vlans/compute.yaml
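For context, the single-nic-vlans templates attach all overcloud networks as tagged VLANs on one physical interface. The sketch below is illustrative only, not the authoritative template (that is the compute.yaml linked above); the VLAN IDs come from parameters, and only two of the overcloud networks are shown here for brevity:

```yaml
# Illustrative sketch of a single-NIC-with-VLANs network config.
# See the tripleo-heat-templates link above for the real template.
network_config:
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: interface
        name: nic1            # the single physical NIC
        primary: true
      - type: vlan            # each overcloud network rides a tagged VLAN
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
```

Because everything, including provisioning traffic, shares `nic1`, this layout deploys with a single NIC, which is why the two-NIC wording in the requirements section conflicts with the templates the guide itself references.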

Comment 2 Dan Sneddon 2016-01-18 18:45:41 UTC
(In reply to Ben Nemec from comment #0)


To the best of my knowledge we have never actually recommended that the single-nic templates be used in production. That's a pretty bare-bones configuration. I don't see any reason not to support it and change the Minimum Requirements section of the docs, though, if anyone wants to. It probably makes sense, given that some people will use the minimum requirements as a basis for their dev/test or POC deployments.

Comment 3 Ben Nemec 2016-01-18 18:48:07 UTC
Okay, maybe we need clarification on what is actually supported then. We're explicitly referencing single-nic-vlans in the documentation, so it seems unreasonable to say we won't support configurations that use it. See 6.2.6.1: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Scenario_2_Using_the_CLI_to_Create_a_Basic_Overcloud.html#sect-Isolating_the_External_Network

Comment 4 Andrew Dahms 2016-02-08 03:18:54 UTC
Assigning to Dan for review.

Comment 5 Dan Macpherson 2016-02-09 01:55:49 UTC
Hi Ben and Dan,

I was under the impression that you needed two NICs: a dedicated NIC for the provisioning network, and a separate NIC (or NICs) for the Overcloud networks. Is this not the case?

Comment 6 Ben Nemec 2016-02-09 22:37:11 UTC
No, it's not strictly required to have two NICs. Our virtual test environments actually run overcloud VMs with just one NIC. I guess there's a question of what we would support in a production environment, but that's a bigger question we should maybe raise with PM. For the moment, we document single-nic-vlans for network isolation, and those templates only require a single NIC (surprisingly :-), so there's some inconsistency in what we have right now.

Comment 7 Dan Macpherson 2016-02-10 02:22:22 UTC
Right, I get it now. I previously thought the single-NIC templates were for an interface that wasn't the provisioning NIC, but now I understand what's going on. Thanks, Ben.

Comment 8 Dan Macpherson 2016-02-10 02:36:34 UTC
I'll use the following text:

"A minimum of one 1 Gbps Network Interface Card, although at least two NICs are recommended in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic."

That way we have a minimum and a recommended value.

Comment 10 Ben Nemec 2016-02-10 16:08:11 UTC
Sounds good to me, thanks.

Comment 12 Andrew Dahms 2016-04-13 04:46:41 UTC
This content is now live on the Customer Portal.

Closing.

