Bug 1392296 - OpenShift handing out .255 network address to a pod when HOST_SUBNET_LENGTH is 8
Summary: OpenShift handing out .255 network address to a pod when HOST_SUBNET_LENGTH is 8
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Dan Williams
QA Contact: Mike Fiedler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-07 06:57 UTC by Mike Fiedler
Modified: 2017-03-08 18:43 UTC
CC List: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-18 12:50:04 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System                  ID              Private  Priority  Status        Summary                                                         Last Updated
Origin (GitHub)         11815           0        None      None          None                                                            2016-11-08 16:19:33 UTC
Red Hat Product Errata  RHBA-2017:0066  0        normal    SHIPPED_LIVE  Red Hat OpenShift Container Platform 3.4 RPM Release Advisory  2017-01-18 17:23:26 UTC

Description Mike Fiedler 2016-11-07 06:57:11 UTC
Description of problem:

While performing hundreds of scale-up/scale-down operations, pods "randomly" appear that will never start.  They never pass their readiness check and get stuck in CrashLoopBackOff.  The pods that get stuck like this always have a .255 IP address, which should be the broadcast address for the node's subnet.

networkConfig:
  clusterNetworkCIDR: 172.20.0.0/14
  hostSubnetLength: 8
  networkPluginName: redhat/openshift-ovs-multitenant
# serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
  serviceNetworkCIDR: 172.24.0.0/14
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
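
A quick way to see why .255 is poisonous here: with hostSubnetLength: 8 each node is carved a /24 out of clusterNetworkCIDR, and the highest address in that /24 (x.y.z.255) is the subnet broadcast address, which must never be allocated to a pod. A minimal Go sketch (illustration only, not OpenShift or CNI code; the example subnet 172.20.3.0/24 is hypothetical):

package main

import (
    "fmt"
    "net"
)

// broadcastAddr returns the highest address in an IPv4 subnet, i.e. the
// broadcast address that an IPAM allocator has to exclude.
func broadcastAddr(n *net.IPNet) net.IP {
    ip := n.IP.To4()
    bcast := make(net.IP, len(ip))
    for i := range ip {
        bcast[i] = ip[i] | ^n.Mask[i]
    }
    return bcast
}

func main() {
    // Example node subnet carved out of 172.20.0.0/14 with hostSubnetLength 8.
    _, nodeSubnet, _ := net.ParseCIDR("172.20.3.0/24")
    fmt.Println("broadcast:", broadcastAddr(nodeSubnet)) // 172.20.3.255

    // A pod IP equal to the broadcast address is invalid for this subnet.
    podIP := net.ParseIP("172.20.3.255")
    fmt.Println("pod IP is broadcast:", podIP.Equal(broadcastAddr(nodeSubnet))) // true
}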


Version-Release number of selected component (if applicable): 3.4.0.22


How reproducible:  Always, when running scale up/down cycles long enough


Steps to Reproduce:
1.  Install a 1-master, 2-node OCP 3.4 cluster with the network configuration above
2.  oc new-app cakephp-mysql-example
3.  oc edit dc/cakephp-mysql-example and remove the 512M memory limit
4.  oc scale --replicas=200 dc/cakephp-mysql-example
5.  oc scale --replicas=0 dc/cakephp-mysql-example
6.  verify all pods are running and none are in CrashLoopBackOff
7.  repeat steps 4 through 6 until a pod is assigned a .255 address, gets stuck, and cannot initialize (a rough automation of this loop is sketched below)
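
A rough Go automation of steps 4 through 7 (a reproduction aid only, assuming oc is installed and already logged in to the test cluster; the sleep intervals are guesses to be tuned, and the ".255 " substring match is only a crude heuristic for spotting a broadcast-address pod in `oc get pods -o wide` output):

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// oc runs a single oc command and returns its combined output.
func oc(args ...string) string {
    out, err := exec.Command("oc", args...).CombinedOutput()
    if err != nil {
        fmt.Println("oc error:", err)
    }
    return string(out)
}

func main() {
    for i := 1; ; i++ {
        oc("scale", "--replicas=200", "dc/cakephp-mysql-example")
        time.Sleep(5 * time.Minute) // give the pods time to start; tune as needed

        pods := oc("get", "pods", "-o", "wide")
        if strings.Contains(pods, "CrashLoopBackOff") || strings.Contains(pods, ".255 ") {
            fmt.Printf("iteration %d: suspect pod found:\n%s", i, pods)
            return
        }

        oc("scale", "--replicas=0", "dc/cakephp-mysql-example")
        time.Sleep(2 * time.Minute) // wait for the scale-down to settle
    }
}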

Actual results:

Eventually a pod is assigned a .255 address and gets stuck.  The location of a network debug script will be added shortly.

Expected results:

All pods start.  No pod is handed an invalid address for its subnet.

Comment 3 Dan Winship 2016-11-07 14:11:58 UTC
dcbw has already fixed this in upstream CNI. Presumably he knows whether we should fully rebase CNI or just pull in those fixes.

Comment 4 Dan Williams 2016-11-07 20:29:18 UTC
Origin PR that bumps CNI: https://github.com/openshift/origin/pull/11815
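
For anyone reading along, a minimal sketch of the behavior that fix enforces (illustration only, not the actual CNI host-local code): the allocatable pool for a node's /24 excludes both the network (.0) and broadcast (.255) addresses.

package main

import (
    "fmt"
    "net"
)

// usableIPs returns every address in an IPv4 subnet except the network and
// broadcast addresses, i.e. the pool an IPAM allocator may hand to pods.
func usableIPs(cidr string) []net.IP {
    _, ipnet, err := net.ParseCIDR(cidr)
    if err != nil {
        return nil
    }
    var ips []net.IP
    for ip := ipnet.IP.To4(); ipnet.Contains(ip); ip = nextIP(ip) {
        ips = append(ips, ip)
    }
    if len(ips) <= 2 {
        return nil
    }
    return ips[1 : len(ips)-1] // drop the network and broadcast addresses
}

// nextIP returns a copy of ip incremented by one.
func nextIP(ip net.IP) net.IP {
    out := make(net.IP, len(ip))
    copy(out, ip)
    for i := len(out) - 1; i >= 0; i-- {
        out[i]++
        if out[i] != 0 {
            break
        }
    }
    return out
}

func main() {
    pool := usableIPs("172.20.3.0/24")
    fmt.Println("first usable:", pool[0])          // 172.20.3.1
    fmt.Println("last usable:", pool[len(pool)-1]) // 172.20.3.254
}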

Comment 5 Troy Dawson 2016-11-09 19:50:41 UTC
This has been merged into OSE and is in OSE v3.4.0.24 or newer.

Comment 7 Mike Fiedler 2016-11-10 02:03:13 UTC
Verified on 3.4.0.24.   Performed hundreds of scale up/down cycles as in the original scenario and all pods became active.  No pods were handed a bad IP.

Comment 9 errata-xmlrpc 2017-01-18 12:50:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

