Bug 1551427 - Support the setting of Multi-cluster-network-CIDR during installation
Summary: Support the setting of Multi-cluster-network-CIDR during installation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.1.0
Assignee: Casey Callendrello
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-05 06:40 UTC by wangzhida
Modified: 2019-06-04 10:40 UTC (History)
12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:40:18 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:40:28 UTC

Description wangzhida 2018-03-05 06:40:09 UTC
Description of problem:

The customer has a very sensitive network, and the new OCP deployment needs to use several small network ranges instead of one huge range. According to our docs, more ranges can be added via clusterNetworks entries in master-config.yaml. The requirement is to configure this before deploying OCP. However, the customer cannot find a way to set it at install time; only "osm_cluster_network_cidr" can be set, and it defines a single network for ClusterNetwork.
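For reference, the master-config.yaml format from the linked 3.7 doc accepts a list of ranges like the following (a sketch; the CIDR values are illustrative):

```yaml
# master-config.yaml excerpt (illustrative CIDRs)
networkConfig:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  - cidr: 10.132.0.0/14
    hostSubnetLength: 9
```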

Actual results:
Only one network CIDR can be set during installation.

Expected results:
Multiple network ranges can be set, or a workaround is available.

Related Feature doc:
https://docs.openshift.com/container-platform/3.7/install_config/configuring_sdn.html#configuring-the-pod-network-on-masters



Thanks a lot.

Comment 1 wangzhida 2018-03-06 09:33:44 UTC
@Ben

Thank you for your update.

The customer is worried about the situation below. Could you help check whether it can happen in the current version?

Step 1: Add more networks by editing master-config.yaml, then restart the master services to make the change take effect.

Step 2: Someone runs the ansible playbook again to update the cluster, with only one osm_cluster_network_cidr definition in the inventory.

Will Step 2 overwrite master-config.yaml and erase the original network definitions? If so, what will happen to the nodes that were assigned pod networks from those ranges?
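To make the concern concrete, the inventory in Step 2 would carry only the single legacy variable (an illustrative excerpt; values are examples, not the customer's actual config):

```ini
# /etc/ansible/hosts excerpt (illustrative values)
[OSEv3:vars]
# Only a single range can be expressed this way at install time:
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9
```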

Thank you very much.

Comment 2 Ben Bennett 2018-04-10 19:51:33 UTC
@ichavero - Can you work with @sdodson to work out the answer to this please?

Comment 3 Ivan Chavero 2018-04-24 17:54:48 UTC
I'm closing this bug because the situation in the support case appears to be under control.

Comment 5 Ben Bennett 2018-09-28 15:30:04 UTC
Scott: If we can get someone on the networking team to make the ansible change, can you nominate someone to advise them and review the work?

Comment 6 Scott Dodson 2018-09-28 16:04:44 UTC
I'd be concerned about making this change. Many places in openshift-ansible assume that this is just a single item. I think the best thing we can do right now is make sure that if there's a list of CIDRs in the config file that we don't stomp on that during upgrade.

We were previously asked to migrate from networkConfig.clusterNetworkCIDR to a list of items in networkConfig.clusterNetworks, and I believe the code we have for that only performs the migration when networkConfig.clusterNetworks is empty.
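The guard described above can be sketched as follows (a hypothetical helper for illustration, not the actual openshift-ansible code): migrate the legacy single-CIDR field into the list only when no list exists yet, so a hand-edited multi-CIDR list is not stomped during upgrade.

```python
# Sketch of the migration guard: only populate clusterNetworks from the
# legacy clusterNetworkCIDR field when the list is empty or absent.
def migrate_network_config(config: dict) -> dict:
    network = config.setdefault("networkConfig", {})
    if not network.get("clusterNetworks"):
        legacy = network.get("clusterNetworkCIDR")
        if legacy:
            network["clusterNetworks"] = [{"cidr": legacy}]
    return config

# A legacy single-CIDR config gets migrated into a one-item list ...
old = {"networkConfig": {"clusterNetworkCIDR": "10.128.0.0/14"}}
print(migrate_network_config(old)["networkConfig"]["clusterNetworks"])

# ... but an existing multi-CIDR list is left untouched.
multi = {"networkConfig": {"clusterNetworks": [
    {"cidr": "10.128.0.0/14"}, {"cidr": "10.200.0.0/16"}]}}
print(len(migrate_network_config(multi)["networkConfig"]["clusterNetworks"]))
```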

Vadim Rutkovs or Russ Teague can review, the bot should auto assign reviewers too.

Comment 13 Alex Crawford 2018-12-11 18:05:44 UTC
I have no idea if the operators that actually manage the CIDR blocks support multiple blocks. That's probably a question for the Master team and the Networking team (I have no idea how to tag people directly with Bugzilla). Personally, I'd like to avoid supporting this in 4.0. We already have enough work to get done in such a short amount of time. This is the sort of stuff I'd like to target in 4.1.

Comment 23 Meng Bo 2019-02-22 06:29:21 UTC
Tested in v4.0.0-0.177.0

After adding multiple CIDRs to install-config.yaml, the cluster is set up with multiple CIDRs.

networking:
  clusterNetworks:
  - cidr: 10.200.0.0/16
    hostSubnetLength: 15
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  machineCIDR: 10.0.0.0/16
  serviceCIDR: 172.30.0.0/16
  type: OpenShiftSDN


# oc get hostsubnet
NAME                                         HOST                                         HOST IP        SUBNET
ip-10-0-140-160.us-east-2.compute.internal   ip-10-0-140-160.us-east-2.compute.internal   10.0.140.160   10.130.0.0/23
ip-10-0-140-189.us-east-2.compute.internal   ip-10-0-140-189.us-east-2.compute.internal   10.0.140.189   10.200.0.0/17
ip-10-0-146-145.us-east-2.compute.internal   ip-10-0-146-145.us-east-2.compute.internal   10.0.146.145   10.200.128.0/17
ip-10-0-146-193.us-east-2.compute.internal   ip-10-0-146-193.us-east-2.compute.internal   10.0.146.193   10.129.0.0/23
ip-10-0-161-174.us-east-2.compute.internal   ip-10-0-161-174.us-east-2.compute.internal   10.0.161.174   10.128.0.0/23
ip-10-0-165-143.us-east-2.compute.internal   ip-10-0-165-143.us-east-2.compute.internal   10.0.165.143   10.131.0.0/23
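The per-node subnet sizes in the listing above follow from hostSubnetLength: each node receives a pod subnet with prefix 32 - hostSubnetLength, so hostSubnetLength 9 yields /23 subnets and hostSubnetLength 15 yields /17 subnets (which is why 10.200.0.0/16 can hold only two nodes). A quick sketch of that arithmetic:

```python
# Relation between hostSubnetLength and the subnets in `oc get hostsubnet`.
import ipaddress

def node_subnet_prefix(host_subnet_length: int) -> int:
    """Prefix length of each node's pod subnet."""
    return 32 - host_subnet_length

def subnets_available(cidr: str, host_subnet_length: int) -> int:
    """How many node subnets fit in one cluster network CIDR."""
    net = ipaddress.ip_network(cidr)
    return 2 ** (node_subnet_prefix(host_subnet_length) - net.prefixlen)

# 10.128.0.0/14 with hostSubnetLength 9 -> /23 per node, 512 nodes max
print(node_subnet_prefix(9), subnets_available("10.128.0.0/14", 9))
# 10.200.0.0/16 with hostSubnetLength 15 -> /17 per node, 2 nodes max
print(node_subnet_prefix(15), subnets_available("10.200.0.0/16", 15))
```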

Comment 26 errata-xmlrpc 2019-06-04 10:40:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

