Description of problem:
In our starter clusters, --max-pods is set to 250. This leads to a persistent KubeletTooManyPods warning being raised.

Version-Release number of selected component (if applicable):
v3.11.0-0.21.0

How reproducible:
100%

Steps to Reproduce:
1. Set the kubelet's --max-pods to something > 110 (the default).
2. Fill the node with > 100 pods.
3. KubeletTooManyPods is reported.

Actual results:
KubeletTooManyPods is reported.

Expected results:
The KubeletTooManyPods threshold should be relative to the configured --max-pods.
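For context, the alert in question is a Prometheus rule with a hardcoded pod-count threshold. A minimal sketch of what a capacity-relative rule could look like, assuming kubelet_running_pod_count and kube_node_status_capacity_pods both carry a matching node label (an illustrative sketch, not the shipped rule):

    - alert: KubeletTooManyPods
      # Fire when a node runs more than 90% of its pod capacity;
      # kube_node_status_capacity_pods reflects the kubelet's --max-pods.
      expr: |
        kubelet_running_pod_count{job="kubelet"}
          / on(node) kube_node_status_capacity_pods{job="kube-state-metrics"}
          > 0.9
      for: 15m
      labels:
        severity: warning

Dividing the running pod count by the node's reported capacity makes the alert track whatever --max-pods each node is configured with, instead of a fixed number.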
This is indeed a bug, and it is already in our backlog to improve. In the meantime, the best I can suggest is to silence this alert; sorry for the inconvenience.
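One way to silence it until a fix ships is to route the alert to a receiver that discards it. A hedged sketch of an Alertmanager configuration fragment (the "null" receiver name is illustrative; any receiver with no notification config works):

    route:
      routes:
      # Drop KubeletTooManyPods until the threshold is fixed.
      - match:
          alertname: KubeletTooManyPods
        receiver: "null"
    receivers:
    - name: "null"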
This was bumped to 250 pods in 4.0; the patch that modified it: https://github.com/openshift/cluster-monitoring-operator/pull/238
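In other words, the rule keeps its fixed-threshold form but the limit is raised. Roughly (a sketch of the shape of the change, not the actual diff):

    - alert: KubeletTooManyPods
      # Hardcoded threshold raised from the old default to 250.
      expr: kubelet_running_pod_count{job="kubelet"} > 250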
The cloned issue https://jira.coreos.com/browse/MON-344 is fixed; setting this to VERIFIED.
Frederic - is the simple change of bumping the default value to 250 something that can be backported for a 3.11.x errata, or is silencing the only option?
It's relatively straightforward, but it does need to be scheduled into our sprints. Whether and when to do so is a PM decision.
Since we have already fixed this for OCP 4 and it has come up a few times, it's OK with me to backport this for the next OCP 3.11 z-release if possible.
In that case, please create an item in our backlog and take responsibility for getting it into an upcoming sprint.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758