Bug 1725359 - When trying to deploy second overcloud, roles_data.yaml isn't updated with new subnet names
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: beta
Target Release: 15.0 (Stein)
Assignee: Emilien Macchi
QA Contact: Sasha Smolyak
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-30 06:37 UTC by Sasha Smolyak
Modified: 2019-09-26 10:53 UTC
CC: 2 users

Fixed In Version: openstack-tripleo-heat-templates-10.5.1-0.20190701110422.889d4d4.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:23:46 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1832759 0 None None None 2019-06-30 07:18:10 UTC
OpenStack gerrit 66550 0 None None None 2019-07-09 13:28:48 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:24:14 UTC

Description Sasha Smolyak 2019-06-30 06:37:14 UTC
Description of problem:
When deploying multiple overclouds from a single undercloud, the second overcloud uses different subnets, but roles_data.yaml does not take this into account when it is built, so the deployment uses the wrong subnets.

Version-Release number of selected component (if applicable):
RHOS_TRUNK-15.0-RHEL-8-20190619.n.1

How reproducible:
100%

Steps to Reproduce:
1. Deploy undercloud
2. Deploy 1st overcloud as usual
3. Try to deploy 2nd overcloud as described in docs:

Prepare the second overcloud files:
Copy ~/virt/network_data_1.yaml to ~/virt/network_data_2.yaml. In network_data_2.yaml, change all the networks. For example, for InternalApi:
InternalApi:
name_lower: internal_api_cloud_1 -> internal_api_cloud_2
vlan: 20 -> 21
ip_subnet: '172.16.2.0/24' ->  '172.16.21.0/24'
allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}] -> [{'start': '172.16.21.4', 'end': '172.16.21.250'}]
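Put together, the changed InternalApi entry in network_data_2.yaml would look roughly like this (a sketch in the tripleo-heat-templates network_data format; fields other than the four changed above, such as `name` and `vip`, are assumed to carry over unchanged from network_data_1.yaml):

```yaml
# Sketch of the updated InternalApi entry in network_data_2.yaml.
# Only name_lower, vlan, ip_subnet and allocation_pools differ from
# network_data_1.yaml; the remaining fields are assumed unchanged.
- name: InternalApi
  name_lower: internal_api_cloud_2
  vip: true
  vlan: 21
  ip_subnet: '172.16.21.0/24'
  allocation_pools: [{'start': '172.16.21.4', 'end': '172.16.21.250'}]
```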
Copy nodes_data.yaml to nodes_data_2.yaml. Fix it to include 1 controller and 1 compute.
(workaround) Change roles_data.yaml:

  - name: Controller
    networks:
      External:
        subnet: external_cloud_2_subnet
      InternalApi:
        subnet: internal_api_cloud_2_subnet
      Storage:
        subnet: storage_cloud_2_subnet
      StorageMgmt:
        subnet: storage_mgmt_cloud_2_subnet
      Tenant:
        subnet: tenant_cloud_2_subnet

  - name: Compute
    networks:
      InternalApi:
        subnet: internal_api_cloud_2_subnet
      Tenant:
        subnet: tenant_cloud_2_subnet
      Storage:
        subnet: storage_cloud_2_subnet

  - name: BlockStorage
    networks:
      InternalApi:
        subnet: internal_api_cloud_2_subnet
      Storage:
        subnet: storage_cloud_2_subnet
      StorageMgmt:
        subnet: storage_mgmt_cloud_2_subnet

  - name: ObjectStorage
    networks:
      InternalApi:
        subnet: internal_api_cloud_2_subnet
      Storage:
        subnet: storage_cloud_2_subnet
      StorageMgmt:
        subnet: storage_mgmt_cloud_2_subnet

  - name: CephStorage
    networks:
      Storage:
        subnet: storage_cloud_2_subnet
      StorageMgmt:
        subnet: storage_mgmt_cloud_2_subnet
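The manual workaround above amounts to renaming every per-role subnet from the first cloud's naming to the second's. A minimal Python sketch of that rewrite (the `retarget_subnets` helper is hypothetical, not part of tripleo, and it assumes subnets follow the `<network>_cloud_<n>_subnet` naming shown above):

```python
# Hypothetical helper (not part of tripleo): rewrite the per-role subnet
# names in a rendered roles_data.yaml so they point at the second
# overcloud's subnets. Assumes the <network>_cloud_<n>_subnet naming
# used in the workaround above.
def retarget_subnets(roles_data_text: str, old_cloud: str, new_cloud: str) -> str:
    return roles_data_text.replace(f"_{old_cloud}_subnet", f"_{new_cloud}_subnet")

# Example: one networks entry from the Controller role above.
snippet = "InternalApi:\n  subnet: internal_api_cloud_1_subnet\n"
print(retarget_subnets(snippet, "cloud_1", "cloud_2"))
# -> InternalApi:
#      subnet: internal_api_cloud_2_subnet
```

Running this over the whole roles_data.yaml before the second deployment reproduces the manual edits above in one step.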

Deploy second overcloud:
openstack overcloud deploy \
--timeout 100 \
--templates ~/overcloud-two \
--stack overcloud2 \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-n /home/stack/virt/network_data_2.yaml \
-e /home/stack/virt/config_lvm.yaml \
-e ~/overcloud-two/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/public_vip.yaml \
-e /home/stack/virt/nodes_data_2.yaml \
-e ~/containers-prepare-parameter.yaml


Actual results:
Deployment fails because roles_data.yaml contains the subnets from the 1st overcloud.

Expected results:
Deployment passes

Additional info:

Comment 4 Sasha Smolyak 2019-07-18 12:29:13 UTC
Deployment passes now, verified

Comment 7 errata-xmlrpc 2019-09-21 11:23:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

