Description of problem:
The customer has a very sensitive network, and the new OCP deployment needs to use several smaller network ranges instead of one large one. Per our documentation, additional ranges can be added via clusterNetworks in master-config.yaml. The requirement is to configure this before deploying OCP; however, the customer cannot find a way to set it at install time, since osm_cluster_network_cidr accepts only a single network for ClusterNetwork.

Actual results:
Only one network CIDR can be set during installation.

Expected results:
Multiple network ranges can be set, or a workaround is available.

Related feature doc:
https://docs.openshift.com/container-platform/3.7/install_config/configuring_sdn.html#configuring-the-pod-network-on-masters

Thanks a lot.
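For reference, the post-install configuration described in the linked 3.7 doc looks roughly like this (a sketch; the second CIDR and the serviceNetworkCIDR are example values, not the customer's actual ranges):

networkConfig:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  - cidr: 10.132.0.0/14
    hostSubnetLength: 9
  serviceNetworkCIDR: 172.30.0.0/16

The ask is to be able to express this same list through the installer, rather than only a single CIDR.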
@Ben Thank you for your update. The customer is worried about the following situation. Could you check whether it can happen in the current version?

Step 1: Add more networks by editing master-config.yaml, then restart the master services to make the change effective.
Step 2: Someone runs the ansible playbook again to update the cluster, using an inventory that contains only a single osm_cluster_network_cidr definition.

Will Step 2 overwrite master-config.yaml and erase the original network definitions? If so, what will happen to the nodes that were assigned subnets from those pod networks? Thank you very much.
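To make Step 2 concrete, the single-CIDR inventory in question would look something like this (illustrative values):

[OSEv3:vars]
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9

The concern is what happens to the hand-added clusterNetworks entries when the playbook reapplies this inventory.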
@ichavero - Can you work with @sdodson to work out the answer to this please?
I'm closing this bug because the situation in the support case appears to be under control.
Scott: If we can get someone on the networking team to make the ansible change, can you nominate someone to advise them and review the work?
I'd be concerned about making this change. Many places in openshift-ansible assume that this is a single item. I think the best thing we can do right now is make sure that if there's a list of CIDRs in the config file, we don't stomp on it during upgrade. We were previously asked to migrate from networkConfig.clusterNetworkCIDR to a list of items in networkConfig.clusterNetworks, and I believe that migration code only runs when networkConfig.clusterNetworks is empty. Vadim Rutkovs or Russ Teague can review; the bot should auto-assign reviewers too.
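A minimal sketch of the guard I'm describing (this is illustrative, not the actual openshift-ansible task; the existing_cluster_networks fact is hypothetical and would in practice be read from the current master-config.yaml):

- name: Migrate legacy clusterNetworkCIDR only when clusterNetworks is empty (sketch)
  set_fact:
    migrated_cluster_networks:
      - cidr: "{{ osm_cluster_network_cidr }}"
        hostSubnetLength: "{{ osm_host_subnet_length }}"
  # Skipping the migration when clusterNetworks is already populated avoids
  # stomping on a hand-edited multi-CIDR list during upgrade.
  when: (existing_cluster_networks | default([])) | length == 0

If the condition holds in our code, a re-run of the playbook with a single osm_cluster_network_cidr should leave a hand-edited clusterNetworks list alone.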
I have no idea if the operators that actually manage the CIDR blocks support multiple blocks. That's probably a question for the Master team and the Networking team (I have no idea how to tag people directly with Bugzilla). Personally, I'd like to avoid supporting this in 4.0. We already have enough work to get done in such a short amount of time. This is the sort of stuff I'd like to target in 4.1.
Tested in v4.0.0-0.177.0.

After adding multiple CIDRs to install-config.yaml, the cluster is set up with multiple CIDRs:

networking:
  clusterNetworks:
  - cidr: 10.200.0.0/16
    hostSubnetLength: 15
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  machineCIDR: 10.0.0.0/16
  serviceCIDR: 172.30.0.0/16
  type: OpenShiftSDN

# oc get hostsubnet
NAME                                         HOST                                         HOST IP        SUBNET
ip-10-0-140-160.us-east-2.compute.internal   ip-10-0-140-160.us-east-2.compute.internal   10.0.140.160   10.130.0.0/23
ip-10-0-140-189.us-east-2.compute.internal   ip-10-0-140-189.us-east-2.compute.internal   10.0.140.189   10.200.0.0/17
ip-10-0-146-145.us-east-2.compute.internal   ip-10-0-146-145.us-east-2.compute.internal   10.0.146.145   10.200.128.0/17
ip-10-0-146-193.us-east-2.compute.internal   ip-10-0-146-193.us-east-2.compute.internal   10.0.146.193   10.129.0.0/23
ip-10-0-161-174.us-east-2.compute.internal   ip-10-0-161-174.us-east-2.compute.internal   10.0.161.174   10.128.0.0/23
ip-10-0-165-143.us-east-2.compute.internal   ip-10-0-165-143.us-east-2.compute.internal   10.0.165.143   10.131.0.0/23
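As a sanity check on the math: hostSubnetLength is the number of host bits in each node subnet, so hostSubnetLength: 15 carves /17 node subnets (32 - 15 = 17) out of 10.200.0.0/16, and hostSubnetLength: 9 carves /23 subnets (32 - 9 = 23) out of 10.128.0.0/14, which matches the SUBNET column in the hostsubnet output above.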
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758