Description of problem:
For the same pod, the QoS tier is reported as BestEffort in OpenShift 3.2 but Burstable in OpenShift 3.3.

Version-Release number of selected component (if applicable):
openshift v3.2.1.9-1-g2265530
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

openshift v3.3.0.6
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

How reproducible:
Always

Steps to Reproduce:
1. Create a BestEffort quota:
   oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/quota/quota-besteffort.yaml

   Scopes: BestEffort
    * Matches all pods that have best effort quality of service.
   Resource  Used  Hard
   --------  ----  ----
   pods      0     2

2. Create a pod that OpenShift 3.2 classifies as BestEffort:
   oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/quota/pod-besteffort.yaml
3. oc describe quota
4. oc describe pods

Actual results:
On OpenShift 3.2.1.9:

3.
Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      1     2

4.
QoS Tier:
  cpu:    BestEffort
  memory: BestEffort
Limits:
  cpu:    500m
  memory: 256Mi
Requests:
  memory: 0
  cpu:    0

On OpenShift 3.3.0.6:

3.
Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     2

4.
Limits:
  cpu:    500m
  memory: 256Mi
Requests:
  cpu:    0
  memory: 0
QoS Tier: Burstable

Expected results:
The QoS classification should be consistent between releases.

Additional info:
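For reference, a manifest along these lines reproduces the edge case. The resource values come from the oc describe output above; the pod name, container name, and image are placeholders, not the actual contents of pod-besteffort.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-besteffort               # placeholder name
spec:
  containers:
  - name: app                        # placeholder container
    image: docker.io/library/busybox # placeholder image
    resources:
      limits:
        cpu: 500m                    # limits are set...
        memory: 256Mi
      requests:
        cpu: "0"                     # ...but requests are explicitly 0
        memory: "0"

With limits set and requests at 0, OpenShift 3.2 reports the pod as BestEffort while 3.3 reports Burstable.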
This looks like a bug in OpenShift 3.2, will investigate.
Kubernetes 1.2 had a bug in how it evaluated QoS when a request was 0 and a limit was specified.
* In 1.2, a resource was best effort if its request was unspecified or 0.
* The proper behavior is that a resource is best effort only if it has no limit specified and its request is unspecified or 0.
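To make the difference concrete, here is a minimal, runnable Go sketch of the two rules. It is a simplified stand-in for the kubelet's QoS code, not the actual implementation; the type and function names are illustrative only.

package main

import "fmt"

// resource models one compute resource (cpu or memory) on a container.
// A zero value stands for "unspecified"; this is a simplified sketch,
// not the real Kubernetes implementation.
type resource struct {
    request int64 // 0 = unspecified or explicitly 0
    limit   int64 // 0 = unspecified
}

// bestEffort12 mirrors the buggy 1.2 rule: best effort whenever the
// request is unspecified or 0, regardless of any limit.
func bestEffort12(r resource) bool {
    return r.request == 0
}

// bestEffortFixed mirrors the corrected rule: best effort only when no
// limit is specified and the request is unspecified or 0.
func bestEffortFixed(r resource) bool {
    return r.limit == 0 && r.request == 0
}

func main() {
    // The pod from this report: requests 0, limits cpu 500m / memory 256Mi.
    cpu := resource{request: 0, limit: 500}       // millicores
    mem := resource{request: 0, limit: 256 << 20} // bytes

    fmt.Println("1.2 rule:   cpu =", bestEffort12(cpu), " memory =", bestEffort12(mem))       // true, true
    fmt.Println("fixed rule: cpu =", bestEffortFixed(cpu), " memory =", bestEffortFixed(mem)) // false, false
}

Under the 1.2 rule both resources evaluate as best effort, so the pod lands in the BestEffort quota scope; under the corrected rule the non-zero limits make it Burstable, matching the 3.3.0.6 output above.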
Fix for the edge case is in this PR: https://github.com/openshift/ose/pull/308
The behavior described in OpenShift 3.3.0.6 is correct moving forward.
The pull request has not been merged. I'm marking this back to assigned. Please move it to Modified when the pull request has been merged. I'm also moving the target to 3.3.0, per the conversation in the pull request.
This is a code fix for 3.2.x. Correcting version & target release.
Merged into 3.2.x stream.
Not in the latest 3.2 puddle; waiting for a new puddle.
Fixed in:
openshift v3.2.1.15
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

The QoS tier in OpenShift 3.2.1 is now consistent with OpenShift 3.3.

On OpenShift 3.2.1.15:

Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     2

QoS Tier:
  cpu:    Burstable
  memory: Burstable
Limits:
  cpu:    1
  memory: 1Gi
Requests:
  memory: 0
  cpu:    0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1853