Bug 1430484 - Upgrade from 3.3 to 3.4, Insufficient Pods
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Pod
Version: 3.4.1
Hardware/OS: Unspecified
Priority: unspecified
Severity: high
Target Release: 3.7.0
Assigned To: Seth Jennings
QA Contact: Anping Li
Reported: 2017-03-08 13:01 EST by Steven Walter
Modified: 2017-11-28 16:53 EST
CC: 6 users

Doc Type: No Doc Update
Last Closed: 2017-11-28 16:53:01 EST
Type: Bug


External Trackers:
Red Hat Product Errata RHSA-2017:3188 (normal, SHIPPED_LIVE): "Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update", last updated 2017-11-28 21:34:54 EST

Description Steven Walter 2017-03-08 13:01:05 EST
Description of problem:
After upgrading to 3.4, nodes report "Insufficient Pods", causing a large number of pods to suddenly stop working. The cause is the change from a flat max-pods limit to whichever is smaller of {250, 10 pods per core}. This change is not yet documented (https://bugzilla.redhat.com/show_bug.cgi?id=1376585), and post-upgrade it is unclear how to resolve the situation. In existing clusters that run many small pods, the pod count can suddenly exceed the pods-per-core limit after the upgrade, leaving dozens of pods unschedulable. It appears that the pods-per-core value can be configured in the Ansible inventory, but without documentation it is hard to verify this or to determine what steps to take to raise the limit.
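To make the capacity drop concrete, here is a minimal sketch of the limit described above: the effective pod capacity in 3.4 is the smaller of max-pods and pods-per-core times the node's core count. The function name and the 4-core node are illustrative, not from the bug report:

```python
def effective_pod_limit(cores, max_pods=250, pods_per_core=10):
    """Effective schedulable pod capacity after the 3.4 change:
    the smaller of max-pods and pods-per-core * core count."""
    return min(max_pods, pods_per_core * cores)

# Before 3.4 a flat max-pods (e.g. 250) applied regardless of core count.
# After 3.4, a hypothetical 4-core node drops to min(250, 10 * 4):
print(effective_pod_limit(4))  # 40

# Raising pods-per-core so it is >= max-pods restores max-pods as the limit:
print(effective_pod_limit(4, max_pods=200, pods_per_core=200))  # 200
```

Any node that was already running more pods than this new minimum would see the excess pods become unschedulable after the upgrade.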

Version-Release number of selected component (if applicable):
3.4.1


Marking as high severity, given that this can cause an application outage and we do not warn of it.

Additional info:

https://github.com/openshift/openshift-docs/issues/2455
https://bugzilla.redhat.com/show_bug.cgi?id=1371309
Comment 5 Seth Jennings 2017-03-10 12:14:00 EST
Origin Docs PR:
https://github.com/openshift/openshift-docs/pull/3916

As far as resolving the situation post-upgrade, the only thing to do is to set the pods-per-core >= max-pods so that max-pods becomes the limiting factor.

This can be done in the installer inventory file:
openshift_node_kubelet_args={'pods-per-core': ['200'], 'max-pods': ['200']}
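For reference, a sketch of how that inventory variable lands on the node: in OCP 3.x the installer renders openshift_node_kubelet_args into the kubeletArguments stanza of the node configuration. The file path and exact layout below are assumed from the 3.x node-config format, not quoted from this bug:

```yaml
# Illustrative fragment of /etc/origin/node/node-config.yaml after the
# installer applies the inventory variable above (layout assumed):
kubeletArguments:
  pods-per-core:
  - "200"
  max-pods:
  - "200"
```

With pods-per-core set equal to (or above) max-pods, max-pods is always the smaller value and therefore the effective limit, matching the workaround described above.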
Comment 6 Anping Li 2017-03-22 06:06:09 EDT
Specifying openshift_node_kubelet_args={'pods-per-core': ['200'], 'max-pods': ['200']} limits the pod count as expected, so moving the bug to VERIFIED.
Comment 11 errata-xmlrpc 2017-11-28 16:53:01 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
