Bug 1430484
| Summary: | Upgrade from 3.3 to 3.4, Insufficient Pods | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Steven Walter <stwalter> |
| Component: | Node | Assignee: | Seth Jennings <sjenning> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.1 | CC: | aos-bugs, decarr, erich, jokerman, mmccomas, wmeng |
| Target Milestone: | --- | | |
| Target Release: | 3.7.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-11-28 21:53:01 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Origin Docs PR: https://github.com/openshift/openshift-docs/pull/3916

As far as resolving the situation post-upgrade, the only option is to set pods-per-core >= max-pods so that max-pods becomes the limiting factor. This can be done in the installer inventory file:

openshift_node_kubelet_args={'pods-per-core': ['200'], 'max-pods': ['200']}

With openshift_node_kubelet_args={'pods-per-core': ['200'], 'max-pods': ['200']} specified, the pod count is limited by the configuration as expected, so moving the bug to VERIFIED.
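For context, a minimal sketch of how that variable might sit in the installer inventory file, assuming the usual [OSEv3:vars] section (the section header and comments are illustrative; only the openshift_node_kubelet_args line comes from this bug):

```ini
# Illustrative inventory fragment (INI-style Ansible inventory).
[OSEv3:vars]
# Set pods-per-core >= max-pods so max-pods (200) becomes the
# effective per-node pod limit regardless of core count.
openshift_node_kubelet_args={'pods-per-core': ['200'], 'max-pods': ['200']}
```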
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
Description of problem:
Getting "Insufficient Pods" after upgrading to 3.4, causing a large number of pods to suddenly stop working. This is because of the change from a flat max-pods limit to whichever is smaller of 250 (max-pods) and 10 pods per core (pods-per-core). However, this is not yet documented (https://bugzilla.redhat.com/show_bug.cgi?id=1376585), and post-upgrade it is unknown how to resolve it. In existing clusters that run many small pods, the per-node pod count can suddenly exceed the pods-per-core limit after the upgrade, leaving dozens of pods unschedulable. It appears that the pods-per-core number can be configured in the Ansible hosts file, but there is no documentation, so it is hard to verify this or to know what steps to take to increase the limit.

Version-Release number of selected component (if applicable):
3.4.1

Marking as high severity given this can cause an outage to applications and we do not warn of this.

Additional info:
https://github.com/openshift/openshift-docs/issues/2455
https://bugzilla.redhat.com/show_bug.cgi?id=1371309
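To make the arithmetic concrete: the effective per-node pod limit after the upgrade is min(max-pods, pods-per-core x cores), so on a 4-core node with the defaults described above that is min(250, 10 x 4) = 40 pods, which is how a previously healthy node can end up over the limit. A minimal sketch of the corresponding kubeletArguments stanza in node-config.yaml, assuming the values shown are the 3.4 defaults:

```yaml
# Sketch of the relevant kubeletArguments in node-config.yaml
# (values are the 3.4 defaults described in this bug; adjust as needed).
# Effective limit = min(max-pods, pods-per-core * cores),
# e.g. min(250, 10 * 4) = 40 pods on a 4-core node.
kubeletArguments:
  pods-per-core:
    - "10"
  max-pods:
    - "250"
```

Restarting the node service is typically required for edits to this file to take effect.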