Bug 1833464 - The installer breaks up default instance CIDR block into too many subnets, limiting the number of IP's
Summary: The installer breaks up default instance CIDR block into too many subnets, l...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
: ---
Assignee: Abhinav Dahiya
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-08 17:37 UTC by Matt Woodson
Modified: 2020-05-11 14:25 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-08 18:44:44 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)

Description Matt Woodson 2020-05-08 17:37:05 UTC
Description of problem:

This is being reported from OpenShift Dedicated.  We have concerns that the installer breaks the default subnet into too many subnets, limiting the number of IPs that are usable on each subnet.

As a real-world example, we have a customer bringing a network range of /24 for their instances/machines in AWS.  The install was a multi-AZ cluster.  The installer creates 6 subnets of /28.

I believe 3 of these are public subnets and 3 are private subnets, each one in its own AZ.

The problem is that after this is created, only 5 IPs remain available on each subnet, which prevents us from creating private ELBs in that network (via the openshift-ingress-operator).

It's very difficult to justify to the customer that we need a /23 network in order to install a cluster that has 15 instances (3 master, 3 infra, 9 worker).

It appears that the installer first splits the machine CIDR in two: one half for private subnets, one for public:

https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc.tf

=====================================================================================
  new_private_cidr_range = cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block, 1, 1)
  new_public_cidr_range  = cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block, 1, 0)
=====================================================================================

It then carves each of these halves into 8 more subnets (shifting the CIDR prefix by 3 bits):

=====================================================================================
https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc-private.tf#L32
https://github.com/openshift/installer/blob/master/data/data/aws/vpc/vpc-public.tf#L50
======================================================================================
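The two-step split can be checked with Python's `ipaddress` module. This is a sketch of the arithmetic only, not the installer's code (which is the Terraform above), and the 10.0.0.0/24 block stands in for the customer's range:

```python
import ipaddress

# Hypothetical /24 machine CIDR, standing in for the customer's range.
machine_cidr = ipaddress.ip_network("10.0.0.0/24")

# Step 1: cidrsubnet(cidr, 1, 0) and cidrsubnet(cidr, 1, 1) split the block
# in half: index 0 becomes the public range, index 1 the private range.
public_range, private_range = machine_cidr.subnets(prefixlen_diff=1)

# Step 2: each half is carved into 8 subnets (prefix shifted by 3 more bits).
private_subnets = list(private_range.subnets(prefixlen_diff=3))

print(private_range)                     # 10.0.0.128/25
print(len(private_subnets))              # 8
print(private_subnets[0])                # 10.0.0.128/28
print(private_subnets[0].num_addresses)  # 16 addresses; AWS reserves 5 per
                                         # subnet, leaving 11 usable before
                                         # any instances land
```

So each AZ's subnet ends up as a /28 with only 11 usable addresses, which matches the exhaustion described above.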


One possible solution would be allowing the user to specify how to break this subnet up.  In this example, we don't, and probably won't, ever need 8 private subnets and 8 public subnets.  Having 4 of each would be perfectly acceptable.


Version-Release number of the following components:

openshift installer v4

How reproducible:

Very

Comment 1 Abhinav Dahiya 2020-05-08 18:44:44 UTC
> This is being reported from OpenShift Dedicated.  We have concerns that the installer breaks the default subnet into too many subnets, limiting the number of IPs that are usable on each subnet.
>
> As a real-world example, we have a customer bringing a network range of /24 for their instances/machines in AWS.  The install was a multi-AZ cluster.  The installer creates 6 subnets of /28.
>
> I believe 3 of these are public subnets and 3 are private subnets, each one in its own AZ.

The installer will expand the VPC to all the available AZs. If you don't want that, bring your own networking, or ask the installer to install in only specific AZs using defaultMachinePlatform or https://github.com/openshift/installer/blob/master/docs/user/aws/customization.md#custom-machine-pools

```
$ ./bin/openshift-install explain installconfig.platform.aws.defaultMachinePlatform
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  DefaultMachinePlatform is the default configuration used when installing on AWS for machine pools which do not define their own platform configuration.

FIELDS:
    amiID <string>
      AMIID is the AMI that should be used to boot the ec2 instance. If set, the AMI should belong to the same region as the cluster.

    rootVolume <object>
      EC2RootVolume defines the root volume for EC2 instances in the machine pool.

    type <string>
      InstanceType defines the ec2 instance type. eg. m4-large

    zones <[]string>
      Zones is list of availability zones that can be used.

$ ./bin/openshift-install explain installconfig.platform.aws.defaultMachinePlatform.zones
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <[]string>
  Zones is list of availability zones that can be used.


```
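For reference, restricting the installer to specific AZs via `defaultMachinePlatform.zones` looks roughly like this in install-config.yaml (cluster name, region, and zone names here are placeholders, not from this bug):

```
apiVersion: v1
metadata:
  name: example-cluster        # placeholder name
platform:
  aws:
    region: us-east-1          # placeholder region
    defaultMachinePlatform:
      zones:                   # limit the install to two AZs,
      - us-east-1a             # so only 2 public + 2 private
      - us-east-1b             # subnets are created
```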

> One possible solution would be allowing the user to specify how to break this subnet up.  In this example, we don't, and probably won't, ever need 8 private subnets and 8 public subnets.  Having 4 of each would be perfectly acceptable.

That can be achieved by reducing HA, i.e. picking a smaller set of AZs.

Lastly, this is an RFE, so please open and track it in Jira: https://issues.redhat.com/projects/RFE/issues/

Comment 2 Matt Woodson 2020-05-11 14:25:14 UTC
Added RFE

https://issues.redhat.com/browse/RFE-872

