Description of problem:

While doing hundreds of scale up/scale down operations, we "randomly" get pods that never start. They never pass the readiness check and get stuck in CrashLoopBackOff. The pods that get stuck like this always have .255 IP addresses, which is the broadcast address of the node's pod subnet (with hostSubnetLength: 8, each node is assigned a /24 from the cluster network).

networkConfig:
  clusterNetworkCIDR: 172.20.0.0/14
  hostSubnetLength: 8
  networkPluginName: redhat/openshift-ovs-multitenant
  # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
  serviceNetworkCIDR: 172.24.0.0/14
  externalIPNetworkCIDRs:
  - 0.0.0.0/0

Version-Release number of selected component (if applicable):
3.4.0.22

How reproducible:
Always, when running scale up/down long enough

Steps to Reproduce:
1. Install a 1-master, 2-node OCP 3.4 cluster with the network configuration above
2. oc new-app cakephp-mysql-example
3. oc edit dc/cakephp-mysql-example and remove the 512M memory limit
4. oc scale --replicas=200 dc/cakephp-mysql-example
5. oc scale --replicas=0 dc/cakephp-mysql-example
6. Verify all pods are running and none are in CrashLoopBackOff
7. Repeat steps 4 through 6 until one of the pods is assigned a .255 address, gets stuck, and cannot initialize (a rough automation of this loop is sketched below)

Actual results:
Eventually a pod gets a .255 address and gets stuck. Network debug script location will be added shortly.

Expected results:
All pods can start. No pod is handed an invalid address for its subnet.
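
For convenience, here is a rough shell sketch automating steps 4 through 7. The iteration count and sleep times are arbitrary guesses, and it assumes the cakephp-mysql-example DeploymentConfig from steps 2-3 already exists and that oc is logged in to the cluster; this is only a sketch of the repro loop, not the exact procedure that was run.

#!/bin/bash
# Rough repro loop for steps 4-7: scale the dc up and down until a pod gets stuck.
# Assumes dc/cakephp-mysql-example already exists (steps 1-3) and oc is logged in.

for i in $(seq 1 200); do
    echo "=== iteration $i ==="

    oc scale --replicas=200 dc/cakephp-mysql-example
    sleep 180    # crude wait for the pods to be scheduled and started

    # Step 6: nothing should be stuck in CrashLoopBackOff.
    if oc get pods --no-headers | grep -q CrashLoopBackOff; then
        echo "Stuck pod(s) found in iteration $i; pod list with IPs:"
        oc get pods -o wide | grep CrashLoopBackOff
        exit 1
    fi

    oc scale --replicas=0 dc/cakephp-mysql-example
    sleep 60     # crude wait for the old pods to terminate
done

echo "No stuck pods after 200 iterations"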
dcbw has already fixed this in CNI upstream. Presumably he knows whether we should fully rebase CNI or just pull in those fixes.
Origin PR that bumps CNI: https://github.com/openshift/origin/pull/11815
This has been merged into OSE and is in OSE v3.4.0.24 or newer.
Verified on 3.4.0.24. Performed hundreds of scale up/down iterations as in the original scenario and all pods became active. No pod was handed a bad IP.
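
For reference, a quick way to spot-check for pods that were handed the subnet broadcast address; this is only an illustrative check, not necessarily the one used during verification, and it assumes an oc client that supports jsonpath output:

# Print "name podIP" for every pod, then flag any pod IP ending in .255
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}' | grep '\.255$' \
    && echo "pod(s) with a .255 address found" \
    || echo "no pods with a .255 address"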
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0066