Deploying OCP with a root volume type that does not exist (think typo) leads to an obscure terraform error:

level=error msg=Error: Error creating openstack_blockstorage_volume_v3: Resource not found
level=error
level=error msg=  on ../tmp/openshift-install-401122317/bootstrap/main.tf line 34, in resource "openstack_blockstorage_volume_v3" "bootstrap_volume":
level=error msg=  34: resource "openstack_blockstorage_volume_v3" "bootstrap_volume" {

This is a user error that is easy to make, as seen in https://bugzilla.redhat.com/show_bug.cgi?id=1812847. The installer should validate that the provided root volume type exists and fail with a clear error otherwise.
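The shape of the fix is simple to reason about: collect the volume type names visible to the install-config credentials and reject the config when rootVolume.type is empty or not in that list. Below is a minimal sketch of such a check; the helper name and structure are hypothetical and this is not the installer's actual validation code, only the error wording mirrors the messages seen in the verification below.

// Minimal sketch, not the installer's actual code.
package main

import (
	"fmt"
	"strings"
)

// validateRootVolumeType rejects an empty root volume type and any type that
// is not among the types visible in the target cloud.
func validateRootVolumeType(fieldPath, volumeType string, availableTypes []string) error {
	if volumeType == "" {
		return fmt.Errorf("%s: Invalid value: %q: Volume type must be specified to use root volumes", fieldPath, volumeType)
	}
	for _, t := range availableTypes {
		if t == volumeType {
			return nil
		}
	}
	return fmt.Errorf("%s: Invalid value: %q: Volume Type either does not exist in this cloud, or is not available (have: %s)",
		fieldPath, volumeType, strings.Join(availableTypes, ", "))
}

func main() {
	// Types as returned by `openstack volume type list` in the verification below.
	available := []string{"tripleo", "__DEFAULT__"}
	if err := validateRootVolumeType("controlPlane.platform.openstack.rootVolume.type", "fake", available); err != nil {
		fmt.Println(err)
	}
}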
Verified on 4.8.0-0.nightly-2021-05-15-141455.

# Test 1: root volume type that does not exist in the cloud

install-config.yaml:

compute:
- name: worker
  platform:
    openstack:
      zones: ['AZ-0','AZ-1','AZ-2']
      additionalNetworkIDs: []
      rootVolume:
        size: 25
        type: fake
  replicas: 1
controlPlane:
  name: master
  platform:
    openstack:
      additionalNetworkIDs: []
      rootVolume:
        size: 25
        type: fake
        zones: ['cinderAZ0']
  replicas: 3

where:

$ openstack volume type list
+--------------------------------------+-------------+-----------+
| ID                                   | Name        | Is Public |
+--------------------------------------+-------------+-----------+
| 41f0418d-d919-4a17-b735-3faa0f0154f5 | tripleo     | True      |
| 6cbf6838-15e0-4bc3-bcba-8fa8a3d944e0 | __DEFAULT__ | True      |
+--------------------------------------+-------------+-----------+

The installer generates a clear error message:

FATAL failed to fetch Master Machines: failed to load asset "Install Config": [controlPlane.platform.openstack.rootVolume.type: Invalid value: "fake": Volume Type either does not exist in this cloud, or is not available, compute[0].platform.openstack.rootVolume.type: Invalid value: "fake": Volume Type either does not exist in this cloud, or is not available]

# Test 2: rootVolume is used but type is not defined

compute:
- name: worker
  platform:
    openstack:
      zones: ['AZ-0','AZ-1','AZ-2']
      additionalNetworkIDs: []
      rootVolume:
        size: 25
  replicas: 1
controlPlane:
  name: master
  platform:
    openstack:
      additionalNetworkIDs: []
      rootVolume:
        size: 25
        zones: ['cinderAZ0']
  replicas: 3

FATAL failed to fetch Master Machines: failed to load asset "Install Config": [controlPlane.platform.openstack.rootVolume.type: Invalid value: "": Volume type must be specified to use root volumes, compute[0].platform.openstack.rootVolume.type: Invalid value: "": Volume type must be specified to use root volumes]

# Test 3: a volume type different from the default one (tripleo) works

(overcloud) [stack@undercloud-0 ~]$ openstack volume type create --private --project shiftstack test
(shiftstack) [stack@undercloud-0 ~]$ openstack volume type list
+--------------------------------------+-------------+-----------+
| ID                                   | Name        | Is Public |
+--------------------------------------+-------------+-----------+
| b9c7acc6-e406-4b3b-99fa-cab702320fc6 | test        | False     |
| 41f0418d-d919-4a17-b735-3faa0f0154f5 | tripleo     | True      |
| 6cbf6838-15e0-4bc3-bcba-8fa8a3d944e0 | __DEFAULT__ | True      |
+--------------------------------------+-------------+-----------+

install-config.yaml:

compute:
- name: worker
  platform:
    openstack:
      zones: ['AZ-0','AZ-1','AZ-2']
      additionalNetworkIDs: []
      rootVolume:
        size: 25
        type: test
  replicas: 1
controlPlane:
  name: master
  platform:
    openstack:
      additionalNetworkIDs: []
      rootVolume:
        size: 25
        type: test
        zones: ['cinderAZ0']
  replicas: 3

The resulting manifests have the expected attributes:

ostest/openshift/99_openshift-cluster-api_master-machines-0.yaml: availabilityZone: cinderAZ0
ostest/openshift/99_openshift-cluster-api_master-machines-0.yaml: volumeType: test
ostest/openshift/99_openshift-cluster-api_master-machines-1.yaml: availabilityZone: cinderAZ0
ostest/openshift/99_openshift-cluster-api_master-machines-1.yaml: volumeType: test
ostest/openshift/99_openshift-cluster-api_master-machines-2.yaml: availabilityZone: cinderAZ0
ostest/openshift/99_openshift-cluster-api_master-machines-2.yaml: volumeType: test
ostest/openshift/99_openshift-cluster-api_worker-machineset-0.yaml: availabilityZone: AZ-0
ostest/openshift/99_openshift-cluster-api_worker-machineset-0.yaml: volumeType: test
ostest/openshift/99_openshift-cluster-api_worker-machineset-1.yaml: availabilityZone: AZ-1
ostest/openshift/99_openshift-cluster-api_worker-machineset-1.yaml: volumeType: test
ostest/openshift/99_openshift-cluster-api_worker-machineset-2.yaml: availabilityZone: AZ-2
ostest/openshift/99_openshift-cluster-api_worker-machineset-2.yaml: volumeType: test
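As a side note, the list of available types used in a check like the sketch above can be fetched with gophercloud. The short program below is an illustrative pre-flight helper, not installer code: it assumes gophercloud v1-style imports and authentication from OS_* environment variables, whereas the installer does its own OpenStack client setup.

// Illustrative pre-flight helper (assumptions: gophercloud v1 API, OS_* env auth).
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumetypes"
)

func main() {
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		log.Fatal(err)
	}
	// Cinder (block storage) v3 endpoint in the selected region.
	client, err := openstack.NewBlockStorageV3(provider, gophercloud.EndpointOpts{Region: os.Getenv("OS_REGION_NAME")})
	if err != nil {
		log.Fatal(err)
	}
	allPages, err := volumetypes.List(client, volumetypes.ListOpts{}).AllPages()
	if err != nil {
		log.Fatal(err)
	}
	types, err := volumetypes.ExtractVolumeTypes(allPages)
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range types {
		fmt.Println(t.Name) // e.g. tripleo, __DEFAULT__, test
	}
}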
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438