Description of problem: For GCP, the installer currently does not allow users to set the disk type for machines. This is especially problematic for the control plane, where users might want to choose a different disk type for performance.
Verified with 4.5.0-0.nightly-2020-05-19-031245.

1. Install a GCP cluster with a valid diskType and diskSizeGB specified:

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osDisk:
        diskType: pd-standard
        diskSizeGB: 512
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 128
  replicas: 3

# openshift-install create cluster --dir bz1836339/

The cluster installs successfully, and the disk type and disk size match the values specified in install-config.yaml.

2. Install a GCP cluster with pd-standard specified for the control plane:

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osDisk:
        diskType: pd-standard
        diskSizeGB: 512
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
  replicas: 3

# openshift-install create cluster --dir bz1836339/
FATAL failed to fetch Metadata: failed to load asset "Install Config": invalid "install-config.yaml" file: controlPlane.platform.gcp.diskType: Invalid value: "pd-standard": pd-standard not compatible with control planes.

3. Install a cluster with an invalid disk type specified:

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osDisk:
        diskType: pd-nvme
        diskSizeGB: 512
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 128
  replicas: 3

# openshift-install create cluster --dir invalid
FATAL failed to fetch Metadata: failed to load asset "Install Config": invalid "install-config.yaml" file: compute[0].platform.gcp.diskType: Unsupported value: "pd-nvme": supported values: "pd-ssd", "pd-standard"

4. Install a cluster with an invalid disk size:

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 512
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      osDisk:
        diskType: pd-ssd
        diskSizeGB: -128
  replicas: 3

# openshift-install create cluster --dir invalid1
FATAL failed to fetch Metadata: failed to load asset "Install Config": invalid "install-config.yaml" file: controlPlane.platform.gcp.diskSizeGB: Invalid value: -128: must be a positive value
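The validation behavior verified in steps 2 and 3 can be sketched as a small Go function. This is a hypothetical illustration of the rules the installer enforces (the function name and structure are assumptions, not the installer's actual code): diskType must be one of the supported values, and pd-standard is additionally rejected for control-plane machines.

```go
package main

import "fmt"

// validateDiskType sketches the two disk-type rules seen in the
// verification above: only "pd-ssd" and "pd-standard" are supported,
// and "pd-standard" is not compatible with control planes.
func validateDiskType(diskType string, isControlPlane bool) error {
	supported := map[string]bool{"pd-ssd": true, "pd-standard": true}
	if !supported[diskType] {
		return fmt.Errorf("unsupported value: %q: supported values: \"pd-ssd\", \"pd-standard\"", diskType)
	}
	if isControlPlane && diskType == "pd-standard" {
		return fmt.Errorf("invalid value: %q: pd-standard not compatible with control planes", diskType)
	}
	return nil
}

func main() {
	// Mirrors the three configurations tried above.
	fmt.Println(validateDiskType("pd-ssd", true))      // accepted
	fmt.Println(validateDiskType("pd-standard", true)) // rejected for control plane
	fmt.Println(validateDiskType("pd-nvme", false))    // unsupported value
}
```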
Two follow-up issues:

1. The image created from rhcos-44-81-202004250133-0-gcp-x86-64.tar.gz has a disk size of 16 GB, so the disk size cannot be smaller than 16 GB. What do you think about raising the current minimum disk size from 0 to 16 GB? Setting the disk size to 1 GB and installing a cluster:

# openshift-install create cluster --dir invalid1
INFO Credentials loaded from file "/root/.gcp/osServiceAccount.json"
INFO Consuming Install Config from target directory
INFO Creating infrastructure resources...
ERROR
ERROR Error: Error creating instance: googleapi: Error 400: Invalid value for field 'resource.disks[0].initializeParams.diskSizeGb': '1'. Requested disk size cannot be smaller than the image size (16 GB), invalid

2. Both pd-standard and pd-ssd have a maximum disk size of 65536 GB. Adding a maximum disk size validation would be better.
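The two proposed bounds checks could look something like the sketch below. This is an assumption based on the discussion in this comment, not the installer's actual code: the lower bound comes from the 16 GB RHCOS GCP image, and the upper bound from the 65536 GB maximum for pd-standard and pd-ssd disks.

```go
package main

import "fmt"

// validateDiskSize sketches the proposed min/max disk-size validation.
// Limits are taken from this comment: the RHCOS image is 16 GB, and both
// GCP disk types cap out at 65536 GB.
func validateDiskSize(sizeGB int64) error {
	const minGB, maxGB = 16, 65536
	if sizeGB < minGB {
		return fmt.Errorf("invalid value: %d: must be at least %d GB (the RHCOS image size)", sizeGB, minGB)
	}
	if sizeGB > maxGB {
		return fmt.Errorf("invalid value: %d: must be at most %d GB", sizeGB, maxGB)
	}
	return nil
}

func main() {
	fmt.Println(validateDiskSize(1))     // too small: fails at install time today
	fmt.Println(validateDiskSize(128))   // accepted
	fmt.Println(validateDiskSize(70000)) // larger than GCP allows
}
```

With checks like these, the 1 GB case would fail fast at install-config validation instead of partway through "Creating infrastructure resources".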
Thanks for that. I will create a new bug to put these restrictions in.
The issue described in comment #4 is already tracked in Bug 1838631 (GCP: Set validations for disk size), so moving this bug to the VERIFIED state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409