Bug 1725359

Summary: When trying to deploy second overcloud, roles_data.yaml isn't updated with new subnet names
Product: Red Hat OpenStack
Component: openstack-tripleo-heat-templates
Version: 15.0 (Stein)
Target Milestone: beta
Target Release: 15.0 (Stein)
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Sasha Smolyak <ssmolyak>
Assignee: Emilien Macchi <emacchi>
QA Contact: Sasha Smolyak <ssmolyak>
CC: emacchi, mburns
Keywords: Triaged
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-tripleo-heat-templates-10.5.1-0.20190701110422.889d4d4.el8ost
Doc Type: No Doc Update
Type: Bug
Last Closed: 2019-09-21 11:23:46 UTC

Description Sasha Smolyak 2019-06-30 06:37:14 UTC
Description of problem:
When deploying multiple overclouds from a single undercloud, the second overcloud uses different subnets, but roles_data.yaml does not take the new subnet names into account when it is built, so the deployment uses the wrong subnets.

Version-Release number of selected component (if applicable):
RHOS_TRUNK-15.0-RHEL-8-20190619.n.1

How reproducible:
100%

Steps to Reproduce:
1. Deploy undercloud
2. Deploy 1st overcloud as usual
3. Try to deploy 2nd overcloud as described in docs:

Prepare the second overcloud files:
Copy ~/virt/network_data_1.yaml to ~/virt/network_data_2.yaml. In network_data_2.yaml, change all the networks. For example:
InternalApi:
  name_lower: internal_api_cloud_1 -> internal_api_cloud_2
  vlan: 20 -> 21
  ip_subnet: '172.16.2.0/24' -> '172.16.21.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}] -> [{'start': '172.16.21.4', 'end': '172.16.21.250'}]
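
For reference, a minimal sketch of the resulting InternalApi entry in network_data_2.yaml, assuming the standard network_data format (the name and vip keys are assumptions, not copied from this report):

- name: InternalApi
  name_lower: internal_api_cloud_2
  vip: true                      # assumed; keep whatever the original entry uses
  vlan: 21
  ip_subnet: '172.16.21.0/24'
  allocation_pools: [{'start': '172.16.21.4', 'end': '172.16.21.250'}]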
Copy nodes_data.yaml to nodes_data_2.yaml and adjust it to include 1 controller and 1 compute.
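
A minimal sketch of what nodes_data_2.yaml could look like, assuming it only sets node counts through the standard ControllerCount/ComputeCount parameters (the actual contents of nodes_data.yaml are not shown in this report):

parameter_defaults:
  ControllerCount: 1    # second overcloud: 1 controller
  ComputeCount: 1       # second overcloud: 1 compute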
(Workaround) Change roles_data.yaml so that each role's networks reference the cloud_2 subnets:
- name: Controller
  networks:
    External:
      subnet: external_cloud_2_subnet
    InternalApi:
      subnet: internal_api_cloud_2_subnet
    Storage:
      subnet: storage_cloud_2_subnet
    StorageMgmt:
      subnet: storage_mgmt_cloud_2_subnet
    Tenant:
      subnet: tenant_cloud_2_subnet

- name: Compute
  networks:
    InternalApi:
      subnet: internal_api_cloud_2_subnet
    Tenant:
      subnet: tenant_cloud_2_subnet
    Storage:
      subnet: storage_cloud_2_subnet

- name: BlockStorage
  networks:
    InternalApi:
      subnet: internal_api_cloud_2_subnet
    Storage:
      subnet: storage_cloud_2_subnet
    StorageMgmt:
      subnet: storage_mgmt_cloud_2_subnet

- name: ObjectStorage
  networks:
    InternalApi:
      subnet: internal_api_cloud_2_subnet
    Storage:
      subnet: storage_cloud_2_subnet
    StorageMgmt:
      subnet: storage_mgmt_cloud_2_subnet

- name: CephStorage
  networks:
    Storage:
      subnet: storage_cloud_2_subnet
    StorageMgmt:
      subnet: storage_mgmt_cloud_2_subnet
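
The subnet names in this workaround appear to follow the usual TripleO convention of deriving the base subnet name from the network's name_lower value as <name_lower>_subnet. Assuming that convention, the InternalApi definition from network_data_2.yaml maps to its roles_data.yaml subnet like this (illustration only):

- name: InternalApi
  name_lower: internal_api_cloud_2    # base subnet name becomes internal_api_cloud_2_subnet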

Deploy the second overcloud:
openstack overcloud deploy \
--timeout 100 \
--templates ~/overcloud-two \
--stack overcloud2 \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-n /home/stack/virt/network_data_2.yaml \
-e /home/stack/virt/config_lvm.yaml \
-e ~/overcloud-two/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/public_vip.yaml \
-e /home/stack/virt/nodes_data_2.yaml \
-e ~/containers-prepare-parameter.yaml


Actual results:
Deployment fails because roles_data.yaml contains the subnets from the 1st overcloud.

Expected results:
Deployment passes

Additional info:

Comment 4 Sasha Smolyak 2019-07-18 12:29:13 UTC
Deployment passes now, verified

Comment 7 errata-xmlrpc 2019-09-21 11:23:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811