Bug 1495539 - starter-us-east-2 pods failing with cpu request 0.
Summary: starter-us-east-2 pods failing with cpu request 0.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Pod
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Seth Jennings
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-26 09:29 UTC by Paul Bergene
Modified: 2017-09-26 21:19 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-26 21:19:17 UTC
Target Upstream Version:


Attachments
Logs across starter-us-east-2 tenants 260917 (15.60 KB, text/plain)
2017-09-26 09:53 UTC, Paul Bergene

Description Paul Bergene 2017-09-26 09:29:15 UTC
Description of problem:

After the upgrade of starter-us-east-2 yesterday evening, we have started seeing the following issue. We are currently investigating the impact and will update this bug as we learn more.

10:13:49 PM    content-repository-1    Replication Controller    Warning    Failed create     Error creating: pods "content-repository-1-" is forbidden: [minimum cpu usage per Pod is 17m, but request is 0., minimum cpu usage per Container is 17m, but request is 0.]
19 times in the last 14 minutes
10:13:48 PM    jenkins-1    Replication Controller    Warning    Failed create     Error creating: pods "jenkins-1-" is forbidden: [minimum cpu usage per Pod is 17m, but request is 0., minimum cpu usage per Container is 17m, but request is 0.]
19 times in the last

Did yesterday's upgrade change anything regarding minimum CPU request limits?
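For context, a rejection like the one above typically comes from a LimitRange in the project's namespace. A minimal sketch of such an object follows; only the 17m minimum is taken from the error text, and the name and other fields are illustrative:

```yaml
# Hypothetical LimitRange that would produce the error above.
# Only the 17m minimum comes from the error message; the rest is illustrative.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits        # hypothetical name
spec:
  limits:
  - type: Pod
    min:
      cpu: 17m                 # "minimum cpu usage per Pod is 17m"
  - type: Container
    min:
      cpu: 17m                 # "minimum cpu usage per Container is 17m"
```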

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Paul Bergene 2017-09-26 09:53:24 UTC
Created attachment 1330954 [details]
Logs across starter-us-east-2 tenants 260917

Comment 4 Paul Bergene 2017-09-26 13:12:37 UTC
We reset the environment of an account with a previously working Jenkins, and we are seeing the same kind of issues on that account as well.

This leads us to suspect the problem may affect all accounts, or at least those lacking a LimitRange.

Comment 5 Justin Pierce 2017-09-26 13:15:49 UTC
The quickest solution for the workshop would seem to be to set the DC's requests.cpu to 17m instead of 0 (assuming the LimitRange minimum CPU is consistent).
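A sketch of what that change might look like in the deployment config's pod template; the container name is hypothetical, and only the 17m value comes from this comment:

```yaml
# Hypothetical DeploymentConfig pod template fragment.
# Sets requests.cpu to the LimitRange minimum (17m) instead of 0.
spec:
  template:
    spec:
      containers:
      - name: jenkins          # hypothetical container name
        resources:
          requests:
            cpu: 17m           # meets the LimitRange minimum
```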

Comment 6 Seth Jennings 2017-09-26 19:01:13 UTC
Putting a cpu request on a pod makes it subject to LimitRanges and ClusterResourceOverrides.

If you want your pod to be best-effort, do not set a request at all. However, this can lead to poor performance on heavily loaded nodes and increases the risk of eviction. It is better to make your pod Burstable by setting a reasonable CPU request, such as 100m for a light or periodic task.

This is a good reference for understanding how, why, and what to set for pod resource requests:
https://docs.openshift.com/container-platform/latest/admin_guide/overcommit.html
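A minimal sketch of what a Burstable pod spec fragment might look like, following the suggestion above (names are hypothetical; the 100m value is the example from this comment):

```yaml
# Hypothetical pod spec fragment illustrating the Burstable QoS class:
# a cpu request is set, with no (or a higher) cpu limit.
spec:
  containers:
  - name: app                  # hypothetical container name
    resources:
      requests:
        cpu: 100m              # reasonable request for a light/periodic task
```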

Comment 7 Seth Jennings 2017-09-26 21:19:17 UTC
Working as designed. Change in behavior caused by change in cluster configuration.

