Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1833464

Summary: The installer breaks up the default instance CIDR block into too many subnets, limiting the number of IPs
Product: OpenShift Container Platform
Reporter: Matt Woodson <mwoodson>
Component: Installer
Assignee: Abhinav Dahiya <adahiya>
Installer sub component: openshift-installer
QA Contact: Johnny Liu <jialiu>
Status: CLOSED NOTABUG
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: cblecker, jeder, mgahagan, nmalik, wking
Version: 4.3.0
Keywords: ServiceDeliveryBlocker
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-05-08 18:44:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Matt Woodson 2020-05-08 17:37:05 UTC
Description of problem:

This report comes from OpenShift Dedicated.  We are concerned that the installer breaks the default subnet into too many subnets, limiting the number of usable IPs on each subnet.

From a real-world example: we have a customer bringing a network range of /24 for their instances/machines in AWS.  The install was a multi-AZ cluster, and the installer created 6 subnets of /28.

I believe 3 of these are public subnets and 3 are private subnets, each one in its own AZ.

The problem is that after this is created, there are only 5 IPs still available on the network, which prevents us from creating private ELBs in that network (via the openshift-ingress-operator).
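For context, the arithmetic behind the shortfall can be sketched with Python's ipaddress module (a sketch only; the 10.0.0.0/28 range is illustrative, standing in for one of the generated subnets; AWS reserves 5 addresses in every subnet: network, VPC router, DNS, one reserved for future use, and broadcast):

```python
import ipaddress

# Illustrative /28, standing in for one of the generated subnets.
subnet = ipaddress.ip_network("10.0.0.0/28")

total = subnet.num_addresses   # 16 addresses in a /28
usable = total - 5             # AWS reserves 5 per subnet (network,
                               # VPC router, DNS, reserved, broadcast)
print(total, usable)           # 16 11
```

So each /28 starts with only 11 usable addresses before any instances or load balancers are placed in it.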

It's very difficult to justify to the customer that we need a /23 network in order to install a cluster that has 15 instances (3 master, 3 infra, 9 worker).

It appears that the installer first takes the machine CIDR and splits it in two: one half for private subnets and one for public.

https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc.tf

=====================================================================================
  new_private_cidr_range = cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block, 1, 1)
  new_public_cidr_range  = cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block, 1, 0)
=====================================================================================

It then takes each of these halves and breaks it into 8 more subnets (shifting the CIDR prefix by 3 bits):

=====================================================================================
https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc-private.tf#L32
https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc-public.tf#L50
======================================================================================
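The two-stage split can be reproduced with Python's ipaddress module (a sketch only: 10.0.0.0/24 is an illustrative machine CIDR matching the customer's /24, and the cidrsubnet helper below is a stand-in for the Terraform function of the same name):

```python
import ipaddress

def cidrsubnet(cidr, newbits, netnum):
    """Stand-in for Terraform's cidrsubnet(): extend the prefix by
    `newbits` bits and pick the `netnum`-th child subnet."""
    return list(ipaddress.ip_network(cidr).subnets(prefixlen_diff=newbits))[netnum]

machine_cidr = "10.0.0.0/24"  # illustrative /24 machine CIDR

# Stage 1: split the machine CIDR in half, as in vpc.tf.
private_half = cidrsubnet(machine_cidr, 1, 1)  # 10.0.0.128/25
public_half  = cidrsubnet(machine_cidr, 1, 0)  # 10.0.0.0/25

# Stage 2: each half is carved into eight /28s (prefix shifted by 3 bits).
private_subnets = list(private_half.subnets(prefixlen_diff=3))

print(len(private_subnets))              # 8
print(private_subnets[0])                # 10.0.0.128/28
print(private_subnets[0].num_addresses)  # 16
```

With a /24 machine CIDR, the result is exactly the reported behavior: sixteen potential /28 slots of 16 addresses each, of which only six (two stages of three AZs) are actually created.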


One possible solution would be to allow the user to specify how to break this subnet up.  In this example we don't, and probably won't, ever need 8 private subnets and 8 public subnets; having 4 of each would be perfectly acceptable.


Version-Release number of the following components:

openshift installer v4

How reproducible:

Very

Comment 1 Abhinav Dahiya 2020-05-08 18:44:44 UTC
> This report comes from OpenShift Dedicated.  We are concerned that the installer breaks the default subnet into too many subnets, limiting the number of usable IPs on each subnet.
>
> From a real-world example: we have a customer bringing a network range of /24 for their instances/machines in AWS.  The install was a multi-AZ cluster, and the installer created 6 subnets of /28.
>
> I believe 3 of these are public subnets and 3 are private subnets, each one in its own AZ.

The installer will expand the VPC to all the available AZs.  If you don't want that, bring your own networking, or ask the installer to install in only specific AZs using defaultMachinePlatform or https://github.com/openshift/installer/blob/master/docs/user/aws/customization.md#custom-machine-pools

```
$ ./bin/openshift-install explain installconfig.platform.aws.defaultMachinePlatform
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  DefaultMachinePlatform is the default configuration used when installing on AWS for machine pools which do not define their own platform configuration.

FIELDS:
    amiID <string>
      AMIID is the AMI that should be used to boot the ec2 instance. If set, the AMI should belong to the same region as the cluster.

    rootVolume <object>
      EC2RootVolume defines the root volume for EC2 instances in the machine pool.

    type <string>
      InstanceType defines the ec2 instance type. eg. m4-large

    zones <[]string>
      Zones is list of availability zones that can be used.

$ ./bin/openshift-install explain installconfig.platform.aws.defaultMachinePlatform.zones
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <[]string>
  Zones is list of availability zones that can be used.


```

> One possible solution would be allowing the user to specify how to break this subnet up.  In this example, we don't, and probably won't, ever need 8 private subnets and 8 public subnets.  Having 4 of each would be perfectly acceptable.

That can be achieved by reducing HA, i.e. picking a smaller set of AZs.

Lastly, this is an RFE, so please open and track it in JIRA: https://issues.redhat.com/projects/RFE/issues/

Comment 2 Matt Woodson 2020-05-11 14:25:14 UTC
Added RFE

https://issues.redhat.com/browse/RFE-872