Bug 1551427
Summary: | Support the setting of Multi-cluster-network-CIDR during installation | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | wangzhida <zhiwang>
Component: | Networking | Assignee: | Casey Callendrello <cdc>
Status: | CLOSED ERRATA | QA Contact: | Meng Bo <bmeng>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | unspecified | CC: | aos-bugs, bbennett, cdc, crawford, ichavero, jokerman, mfojtik, mgugino, mmccomas, sdodson, smossber, sttts
Target Milestone: | --- | Keywords: | Reopened
Target Release: | 4.1.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-06-04 10:40:18 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
wangzhida
2018-03-05 06:40:09 UTC
@Ben Thank you for your update. The customer is worried about the situation below. Could you help check whether it can happen in the current version?

Step 1: Add more networks by editing master-config.yaml, then restart the master service to make the change take effect.
Step 2: Someone runs the ansible playbook again to update the cluster; the playbook contains only one definition of osm_cluster_network_cidr.

Will Step 2 overwrite the master-config and erase the original definition of the networks? If so, what will happen to the nodes that were assigned such a pod network? Thank you very much.

@ichavero - Can you work with @sdodson to work out the answer to this, please?

I'm closing this bug because the support case appears to be under control.

Scott: If we can get someone on the networking team to make the ansible change, can you nominate someone to advise them and review the work?

I'd be concerned about making this change. Many places in openshift-ansible assume that this is just a single item. I think the best thing we can do right now is make sure that, if there is a list of CIDRs in the config file, we don't stomp on it during an upgrade. We were previously asked to migrate from networkConfig.clusterNetworkCIDR to a list of items in networkConfig.clusterNetworks, and I believe the code we have for that should only do the migration when networkConfig.clusterNetworks is empty. Vadim Rutkovs or Russ Teague can review; the bot should auto-assign reviewers too.

I have no idea whether the operators that actually manage the CIDR blocks support multiple blocks. That's probably a question for the Master team and the Networking team (I have no idea how to tag people directly in Bugzilla).

Personally, I'd like to avoid supporting this in 4.0. We already have enough work to get done in such a short amount of time. This is the sort of thing I'd like to target in 4.1.

Tested in v4.0.0-0.177.0. After adding multiple CIDRs to install-config.yaml, the cluster is set up with multiple CIDRs.
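The upgrade-safety rule discussed above (migrate the legacy single networkConfig.clusterNetworkCIDR into the networkConfig.clusterNetworks list only when that list is empty, so an upgrade never stomps on a manually maintained list) can be sketched roughly as follows. This is an illustrative Python sketch of the rule, not the actual openshift-ansible code; the function name `migrate_network_config` is hypothetical.

```python
def migrate_network_config(cfg: dict) -> dict:
    """Hypothetical sketch: convert the legacy single-CIDR field into
    the clusterNetworks list form, but only when that list is empty,
    so an upgrade never overwrites a manually configured list."""
    net = cfg.setdefault("networkConfig", {})
    if not net.get("clusterNetworks"):
        legacy = net.get("clusterNetworkCIDR")
        if legacy:
            # Migrate the legacy single CIDR into the list form.
            net["clusterNetworks"] = [{"cidr": legacy}]
    # An already-populated clusterNetworks list is left untouched.
    return cfg
```

Under a rule like this, re-running the playbook with a single osm_cluster_network_cidr would not erase a multi-entry clusterNetworks list that an admin had added by hand.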
```yaml
networking:
  clusterNetworks:
  - cidr: 10.200.0.0/16
    hostSubnetLength: 15
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  machineCIDR: 10.0.0.0/16
  serviceCIDR: 172.30.0.0/16
  type: OpenShiftSDN
```

```
# oc get hostsubnet
NAME                                         HOST                                         HOST IP        SUBNET
ip-10-0-140-160.us-east-2.compute.internal   ip-10-0-140-160.us-east-2.compute.internal   10.0.140.160   10.130.0.0/23
ip-10-0-140-189.us-east-2.compute.internal   ip-10-0-140-189.us-east-2.compute.internal   10.0.140.189   10.200.0.0/17
ip-10-0-146-145.us-east-2.compute.internal   ip-10-0-146-145.us-east-2.compute.internal   10.0.146.145   10.200.128.0/17
ip-10-0-146-193.us-east-2.compute.internal   ip-10-0-146-193.us-east-2.compute.internal   10.0.146.193   10.129.0.0/23
ip-10-0-161-174.us-east-2.compute.internal   ip-10-0-161-174.us-east-2.compute.internal   10.0.161.174   10.128.0.0/23
ip-10-0-165-143.us-east-2.compute.internal   ip-10-0-165-143.us-east-2.compute.internal   10.0.165.143   10.131.0.0/23
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
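As a sanity check on the verification output above: hostSubnetLength is the number of host bits each node subnet keeps, so a clusterNetwork entry with hostSubnetLength n is carved into /(32 - n) host subnets. This can be confirmed with a short Python sketch using the standard-library ipaddress module (the helper name `host_subnets` is ours, not OpenShift code):

```python
import ipaddress

def host_subnets(cluster_cidr, host_subnet_length):
    """Split a clusterNetwork CIDR into per-node host subnets.
    hostSubnetLength is the number of host bits kept per node,
    so each node subnet has prefix length 32 - hostSubnetLength."""
    net = ipaddress.ip_network(cluster_cidr)
    return list(net.subnets(new_prefix=32 - host_subnet_length))

# The two clusterNetworks entries from the verified config:
first = host_subnets("10.200.0.0/16", 15)   # yields two /17 subnets
second = host_subnets("10.128.0.0/14", 9)   # yields 512 /23 subnets
```

This matches the `oc get hostsubnet` output: 10.200.0.0/16 yields exactly the two /17 subnets seen (10.200.0.0/17 and 10.200.128.0/17), while nodes on 10.128.0.0/14 receive /23 subnets such as 10.130.0.0/23.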