Bug 1331816
Summary: | [dev-preview-int] Pods won't deploy when memory settings are too low | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Steve Speicher <sspeiche>
Component: | Management Console | Assignee: | Samuel Padgett <spadgett>
Status: | CLOSED ERRATA | QA Contact: | Yadan Pei <yapei>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 3.2.0 | CC: | abhgupta, adellape, aos-bugs, deads, dmcphers, jokerman, mmccomas, qixuan.wang, spadgett, xxia
Target Milestone: | --- | |
Target Release: | 3.2.1 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | The web console has been updated to more accurately reflect memory limit values. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2016-06-27 15:06:10 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description (Steve Speicher, 2016-04-29 15:55:53 UTC)
Luke, is this expected behavior? Were we intending to change user input as requested above?

It's expected behavior. In Online, the limits specified by the user are overridden by the ClusterResourceOverrides admission plugin (see the configuration sketch below). This happens at the time the pod is instantiated, not when a pod template is given in, e.g., an RC. This works exactly like the LimitRanger plugin normally would: if you specify a limit/request out of bounds, it fails at the point where the pod is instantiated. The difference is that it's obvious what's going on when the user specifies the wrong numbers, but with the CRO plugin in place the numbers are actually rewritten, so the user won't even recognize where they came from. Bad UX.

There are a number of ways to address this. It would seem obvious to just set a floor, either an absolute one or the per-project LimitRange, and reset too-low values to the floor. Abhishek did not like this, as it violates the purpose of the CRO plugin, which is to manage overcommit; he argued that in order to maintain the desired limit-to-request ratios we should just reject pods rather than set the floor. Other options would be for ops to adjust the LimitRange to have lower request limits, or to adjust the CRO plugin parameters so the limits are never violated. Finally, the templates could be adjusted. If we're shipping the nodejs-mongo-example in Online, I would suggest we fix its request settings and lower the severity of this bug, at least until we have agreement on how best to address it.

When will this get addressed (either in the template/example or by another mechanism)? I fear it will be a source of confusion for a number of our end users when they use Actions -> Set Resource Limits and their deployments fail, even though they set a value within range.

Dan/Abhishek/David, any opinions?

I had opened a dup of this bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1333029

My suggestion is that we should fix this in the UI to change the lower limit to 10/6 * the request. It actually doesn't matter whether you are using the extra admission controller. If the field is a limit field, both values should be limit values. If the field is a request field, the min and max should be request values.

*** Bug 1333029 has been marked as a duplicate of this bug. ***

Discussed in https://bugzilla.redhat.com/show_bug.cgi?id=1324825. It seems https://github.com/openshift/origin/pull/8775 is the fix for the web console Set Resource Limits page, but it does not touch the CLI problem in this bug. Would that leave the CLI user experience as it currently behaves?

The CLI doesn't do any validation against limit ranges even if you don't use ClusterResourceOverrides.

(In reply to Samuel Padgett from comment #12)
> The CLI doesn't do any validation against limit ranges even if you don't use
> ClusterResourceOverrides.

Yes, the CLI doesn't do any validation against limit ranges. But the problem that values in the 150Mi ~ 256Mi range don't work is raised specifically when using ClusterResourceOverrides (though this is ClusterResourceOverrides' expected behavior, as comment 2 said). From the user's view, `oc get limitrange -o yaml` will show that the acceptable Min is 150Mi. I think this bug doesn't intend to say that pods should deploy when memory settings are < 150Mi. Instead, it may intend to say that it is surprising that "Pods won't deploy when memory settings are too low" for values between 150Mi ~ 256Mi.

Marking ON_QA to verify the web console changes.
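For reference, a minimal sketch of how the ClusterResourceOverrides plugin discussed above is configured in an OpenShift 3.x master-config.yaml. The structure is the documented plugin configuration; the percentage values are assumptions chosen to match the 10/6 arithmetic in this thread, not the actual Online settings:

```yaml
admissionConfig:
  pluginConfig:
    ClusterResourceOverride:
      configuration:
        apiVersion: v1
        kind: ClusterResourceOverrideConfig
        # Override each container's memory request to this percentage of its
        # memory limit, regardless of what the user asked for (assumed value).
        memoryRequestToLimitPercent: 60
        # Override each container's CPU request to this percentage of its CPU
        # limit (assumed value).
        cpuRequestToLimitPercent: 60
        # Override the CPU limit as a percentage of the memory limit, scaled
        # so 1Gi of memory corresponds to 1 CPU core at 100% (assumed value).
        limitCPUToMemoryPercent: 200
```

Under these assumed numbers the failure mode follows directly: a user-entered memory limit of 200Mi is rewritten to a 120Mi request, which falls below a 150Mi LimitRange minimum, so the pod is rejected. The smallest limit that survives is 150Mi × 10/6 = 250Mi, which is the ratio behind the suggested UI fix.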
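Similarly, a hypothetical shape of the `oc get limitrange -o yaml` output mentioned above; only the 150Mi minimum comes from this bug, while the name and maximum are placeholders:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits  # placeholder name
spec:
  limits:
  - type: Container
    min:
      memory: 150Mi      # the acceptable minimum cited in this thread
    max:
      memory: 512Mi      # placeholder value
```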
Found that the PR is merged to openshift:master, but currently we only have puddles built from openshift:enterprise-3.2 (the latest is 2016-05-25.3, with oc version v3.2.0.45). Only when the 3.2.1 puddle is ready can we verify the bug, right? @samuel, could you please help confirm? Thanks.

https://bugzilla.redhat.com/show_bug.cgi?id=1324825#c20 is a minor problem.

The PR is merged in the latest v3.2.1.1-1-g33fa4ea. The bug (the web console range issue) is fixed and can be VERIFIED. Will you change it to ON_QA?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1343