
Bug 1419190

Summary: [DOCS][RFE][director] Document SW deployment on already existing infrastructure (split stack - part2)
Product: Red Hat OpenStack
Reporter: James Slagle <jslagle>
Component: documentation
Assignee: Dan Macpherson <dmacpher>
Status: CLOSED CURRENTRELEASE
QA Contact: Don Domingo <ddomingo>
Severity: unspecified
Priority: unspecified
Version: 11.0 (Ocata)
CC: dcadzow, dmacpher, jslagle, lbopf, mburns, srevivo
Target Milestone: ga
Keywords: FutureFeature
Target Release: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2017-05-18 08:05:00 UTC
Bug Depends On: 1337784    

Description James Slagle 2017-02-03 20:38:33 UTC
For OSP 11 product documentation, we need to document the "split stack phase 2" feature. This feature allows deploying OpenStack to already provisioned servers. In this scenario, Nova/Ironic are not used as part of Director.

The RFE bz: https://bugzilla.redhat.com/show_bug.cgi?id=1337784
Upstream docs: http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/deployed_server.html

Please let me know what other information I can provide. There will likely be additional updates to the upstream docs as we continue testing the feature.
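
For reference, the upstream docs above essentially boil down to passing the deployed-server environment file and disabling the Nova/Ironic-based validations at deploy time. A minimal sketch, assuming the usual template locations (exact file names may differ between releases, so treat this as illustrative rather than final command syntax):

  # Minimal sketch only -- paths and file names are assumptions based on
  # the upstream docs, not a verified OSP 11 command.
  openstack overcloud deploy \
    --templates \
    --disable-validations \
    -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml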

Comment 1 James Slagle 2017-03-09 01:02:50 UTC
I have an additional patch to the upstream docs that adds more needed info:
https://review.openstack.org/#/c/442222/

I want to point that out just in case it hasn't merged by the time you review the upstream docs content.

Comment 3 Dan Macpherson 2017-03-22 18:30:21 UTC
Hi James,

I tested this out and it seems to be working. However, I ran into some difficulty with the ctlplane IPs for the nodes.

The patch you added mentions the following:

> The polling process requires that the Undercloud services are bound to an
> IP address that is on a L3 routed network that is accessible to the Overcloud
> nodes. This is the IP address that is configured via ``local_ip`` in the
> ``undercloud.conf`` file used during Undercloud installation. Alternatively, it
> is the IP address or hostname configured with ``undercloud_public_host`` if
> using SSL with the Undercloud.
>
> If the deployed servers for the Overcloud are configured with IP addresses
> from the network CIDR that is also used by the ctlplane subnet in the
> Undercloud, then be sure to adjust ``dhcp_start``, ``dhcp_end``, and
> ``inspection_iprange`` in ``undercloud.conf`` appropriately so that the ctlplane
> subnet range does not overlap with IP addresses that may have already been
> configured on the deployed servers.
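
For example, I read that as something like the following in ``undercloud.conf`` (the addresses are purely illustrative, based on my lab's 192.168.201.0/24 provisioning network, and not taken from the patch):

  # undercloud.conf (excerpt) -- illustrative values only
  local_ip = 192.168.201.1/24
  # Keep the DHCP and introspection ranges clear of IPs that are already
  # configured on the pre-provisioned servers (e.g. 192.168.201.130):
  dhcp_start = 192.168.201.20
  dhcp_end = 192.168.201.80
  inspection_iprange = 192.168.201.100,192.168.201.120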

So I tested with 1 Controller and 1 Compute using a fairly basic setup -- no network isolation, no storage, just the nodes. I configured the nodes' NICs to use addresses on the provisioning/ctlplane network but outside the standard deployment pool (192.168.201.20 to 192.168.201.80), as per the patch. For example, I assigned the Controller 192.168.201.130.

The problem is that during deployment, the director created ports for the nodes using addresses from within the provisioning network range (for example, 192.168.201.27 for the Controller). The deployment later failed because the validation resource tried to ping 192.168.201.27 while the node was actually using 192.168.201.130.

I worked around this issue by changing the IPs on the nodes to the ones that the director assigned in neutron. However, I know this isn't the right way to do it, so I suspect I'm doing something wrong.

Are there any guidelines on setting the IP addresses to match the ports on the ctlplane? Does this require some predictable IP strategy?
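
If a predictable IP strategy is the answer here, I'm guessing it involves mapping each node's ctlplane port to the IP already configured on the server, along the lines of the deployed-server port map in the upstream templates. A rough sketch of what I have in mind -- the parameter name, hostname keys, and template path are my assumptions from the upstream docs, so please correct me if this is the wrong mechanism:

  # ctlplane-ports.yaml -- rough sketch, not verified
  resource_registry:
    OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

  parameter_defaults:
    DeployedServerPortMap:
      overcloud-controller-0-ctlplane:
        fixed_ips:
          - ip_address: 192.168.201.130
        subnets:
          - cidr: 24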

Comment 4 Dan Macpherson 2017-03-23 04:40:34 UTC
Canceling NEEDINFO. I'd missed the second part of the patch text quoted above.