Description of problem:
If the requested CPU is below a threshold, the deployment fails. I'm only supplying the MEM request, which is at or near the lower limit. I would not expect it to fail if memory was above the limit. I would also expect it to simply inform me it will use more than what was requested.

Version-Release number of selected component (if applicable):
OpenShift Master: v3.2.0.40
Kubernetes Master: v1.2.0-36-g4a3f9c5

How reproducible:
I reproduced this 2 different ways. First, with nodejs-mongo-example: I modified the memory request for Nodejs to 150 MiB, which gave the same error below. Second, I used the 3rd-party template at https://raw.githubusercontent.com/GrahamDumpleton/openshift3-kallithea/master/template.json

Steps to Reproduce:
1. Deploy one of the 2 apps above.
2. Ensure the request is low enough (minimum 150 MiB).
3. Observe events.

Expected results:
Either my request would be increased to allow the app to deploy, or it would accept my minimum memory request.

Additional info:
11:45:34 AM Replication Controller kallithea-db-1 Failed create Error creating: pods "kallithea-db-1-" is forbidden: [Minimum cpu usage per Pod is 30m, but request is 22m., Minimum memory usage per Pod is 150Mi, but request is 120586240., Minimum cpu usage per Container is 30m, but request is 22m., Minimum memory usage per Container is 150Mi, but request is 115Mi.]
Luke - is this expected behavior? Were we intending to change user input as requested above?
It's expected behavior. In Online, the limits specified by the user are overridden by the ClusterResourceOverrides admission plugin. This happens at the time the pod is instantiated (not when a pod template is given in, e.g., an RC). This works exactly like the LimitRanger plugin normally does: if you specify a limit/request out of bounds, it fails at the point where the pod is instantiated. The difference is that it's obvious what's going on when the user specifies the wrong numbers themselves, but with the CRO plugin in place the numbers are actually rewritten, so the user won't even recognize where they come from. Bad UX.

There are a number of ways to address this. It would seem obvious to just set a floor, either an absolute one or the per-project LimitRange minimum, and reset too-low values to that floor. Abhishek did not like this, as it violates the purpose of the CRO plugin, which is to manage overcommit; he argued that in order to maintain the desired limit-to-request ratios we should reject pods rather than set a floor. Other options would be for ops to adjust the LimitRange to have lower request minimums, or to adjust the CRO plugin parameters to prevent the violation of limits. Finally, the templates could be adjusted.

If we're shipping the nodejs-mongo-example in Online, I would suggest we fix its request settings and lower the severity of this bug, at least until we have agreement on how best to address it.
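The interaction described above can be sketched in a few lines of Python. The 60% request-to-limit ratio and the 150Mi minimum below are illustrative assumptions for this sketch, not the actual Online settings:

```python
# Hypothetical sketch of the ClusterResourceOverride rewrite at pod
# admission, assuming a 60% memory request-to-limit ratio (illustrative).
MEMORY_REQUEST_TO_LIMIT_PERCENT = 60  # assumed plugin setting

def override_memory_request(limit_mi: float) -> float:
    """Rewrite the container's memory request from its limit."""
    return limit_mi * MEMORY_REQUEST_TO_LIMIT_PERCENT / 100

def admit(limit_mi: float, min_request_mi: float = 150) -> bool:
    """The LimitRange check runs after the override, on the rewritten value."""
    return override_memory_request(limit_mi) >= min_request_mi

# A 200Mi limit is rewritten to a 120Mi request and rejected,
# even though 200Mi itself is above the 150Mi minimum -- which is
# why the rejection surprises the user.
```

This is why the failure looks mysterious: the user never typed the number that appears in the error message.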
When will this get addressed (either in template/example or other mechanism)? I fear it will be the source of confusion with a number of our end users, when they use Actions -> Set Resource Limits and their deployments fail, even though they set a value within range.
Dan/Abhishek/David any opinions?
I had opened a dup of this bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1333029 My suggestion is that we should fix this in the UI by changing the lower limit to 10/6 * the request. It actually doesn't matter whether you are using the extra admission controller: if the field is a limit field, both bound values should be limit values; if the field is a request field, the min and max should be request values.
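A minimal sketch of the suggested UI computation, assuming an override request-to-limit ratio of 6/10 (hence the 10/6 factor) and the 150Mi minimum request from this bug; both numbers are assumptions for illustration:

```python
from fractions import Fraction

# Sketch of the suggested UI fix: if the form field is a *limit* field but
# the LimitRange minimum is expressed as a *request*, scale the displayed
# floor by the inverse of the request-to-limit ratio (6/10, so 10/6).
REQUEST_TO_LIMIT_RATIO = Fraction(6, 10)  # assumed override ratio

def ui_minimum_limit(min_request_mi: int) -> Fraction:
    """Smallest limit whose rewritten request still meets the minimum."""
    return min_request_mi / REQUEST_TO_LIMIT_RATIO

# With a 150Mi minimum request, the form should not accept limits
# below 150 * 10/6 = 250Mi.
```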
*** Bug 1333029 has been marked as a duplicate of this bug. ***
Discussed in https://bugzilla.redhat.com/show_bug.cgi?id=1324825
https://github.com/openshift/origin/pull/8775
https://bugzilla.redhat.com/show_bug.cgi?id=1324825#c15
It seems https://github.com/openshift/origin/pull/8775 fixes the web console Set Resource Limits page but does not touch the CLI problem in this bug. Will the CLI user experience be left as it currently behaves?
The CLI doesn't do any validation against limit ranges even if you don't use ClusterResourceOverrides.
(In reply to Samuel Padgett from comment #12) > The CLI doesn't do any validation against limit ranges even if you don't use > ClusterResourceOverrides. Yes, the CLI doesn't do any validation against limit ranges. But the problem that values between 150Mi and 256Mi don't work is raised specifically when using ClusterResourceOverrides (though this is ClusterResourceOverrides' expected behavior, as comment 2 said). From the user's view, `oc get limitrange -o yaml` will show an acceptable Min of 150Mi. I think this bug doesn't intend to say that pods should deploy when memory settings are < 150Mi. Instead, it intends to say that it is surprising that pods won't deploy when memory settings are "too low" even in the 150Mi ~ 256Mi range.
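For context, a trimmed sketch of roughly what `oc get limitrange -o yaml` might show for such a project; the name is illustrative, and the minimums are taken from the error message in the original report:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits          # illustrative name
spec:
  limits:
  - type: Pod
    min:
      cpu: 30m          # matches "Minimum cpu usage per Pod is 30m"
      memory: 150Mi     # matches "Minimum memory usage per Pod is 150Mi"
  - type: Container
    min:
      cpu: 30m
      memory: 150Mi
```

Nothing here hints that a limit just above 150Mi can still be rejected once the override rewrites the request, which is the surprise being described.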
Marking ON_QA to verify the web console changes.
Found that the PR is merged to openshift:master, but we only have puddles built from openshift:enterprise-3.2 (the latest is 2016-05-25.3, with oc v3.2.0.45). We can only verify the bug once the 3.2.1 puddle is ready, right? @samuel, could you please help confirm? Thanks.
See https://bugzilla.redhat.com/show_bug.cgi?id=1324825#c20 for a minor remaining problem.
The PR is merged in the latest v3.2.1.1-1-g33fa4ea. The bug (the web console range issue) is fixed and can be VERIFIED. Will you change it to ON_QA?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1343