| Summary: | Quota usage increases when request > limit and returns to the original value after about 5 minutes | ||
|---|---|---|---|
| Product: | OKD | Reporter: | Qixuan Wang <qixuan.wang> |
| Component: | Pod | Assignee: | Derek Carr <decarr> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Qixuan Wang <qixuan.wang> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | ||
| Version: | 3.x | CC: | aos-bugs, mmccomas, xtian |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2017-11-10 21:34:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
The current design has validation occurring after admission control in the kube-apiserver. This can result in invalid resources being counted by quota. This is not a regression from prior behavior, so I think this should not be a blocker. The system resets properly in the next quota pass.

Tested on OCP 3.6 (openshift v3.6.136, kubernetes v1.6.1+5115d708d7, etcd 3.2.1). The bug has been fixed, thanks. Here is part of the verification steps:

```
# oc create -f pod-request-limit-invalid-2.yaml; oc describe quota; sleep 5; oc describe quota
Error from server (Forbidden): error when creating "pod-request-limit-invalid-2.yaml": pods "pod-request-limit-invalid-2" is forbidden: failed quota: myquota: [spec.containers[0].resources.limits: Invalid value: "500m": must be greater than or equal to cpu request, spec.containers[0].resources.limits: Invalid value: "256Mi": must be greater than or equal to memory request]
Name:                   myquota
Namespace:              qwang1
Resource                Used  Hard
--------                ----  ----
cpu                     0     30
memory                  0     16Gi
persistentvolumeclaims  0     20
pods                    0     20
replicationcontrollers  0     30
resourcequotas          1     1
secrets                 9     15
services                0     10

Name:                   myquota
Namespace:              qwang1
Resource                Used  Hard
--------                ----  ----
cpu                     0     30
memory                  0     16Gi
persistentvolumeclaims  0     20
pods                    0     20
replicationcontrollers  0     30
resourcequotas          1     1
secrets                 9     15
services                0     10
```
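The ordering described above can be sketched as a toy simulation (the names `Quota`, `create_pod`, and `resync` are illustrative, not actual kube-apiserver code): quota admission charges usage before validation runs, so a pod that later fails validation leaves stale usage behind until the periodic resync recomputes it from the pods that actually exist.

```python
# Toy model of the admission-before-validation ordering.
# All names here are hypothetical; this is not Kubernetes source code.

class Quota:
    def __init__(self, hard_cpu_m):
        self.hard_cpu_m = hard_cpu_m  # hard CPU cap, in millicores
        self.used_cpu_m = 0           # tracked usage, in millicores

pods = []  # CPU requests of pods that actually exist in the namespace

def create_pod(quota, request_m, limit_m):
    # 1. Quota admission runs first and charges the request.
    if quota.used_cpu_m + request_m > quota.hard_cpu_m:
        raise RuntimeError("exceeded quota")
    quota.used_cpu_m += request_m
    # 2. Validation runs afterwards; an invalid pod is rejected,
    #    but the charge from step 1 is NOT rolled back.
    if request_m > limit_m:
        raise ValueError("request must be <= limit")
    pods.append(request_m)

def resync(quota):
    # Periodic quota controller pass (ResourceQuotaSyncPeriod):
    # recompute usage from the pods that actually exist.
    quota.used_cpu_m = sum(pods)

q = Quota(hard_cpu_m=30000)
try:
    create_pod(q, request_m=600, limit_m=500)  # invalid: request > limit
except ValueError:
    pass
print(q.used_cpu_m)  # 600: stale usage, no pod was created
resync(q)
print(q.used_cpu_m)  # 0: corrected by the next quota pass
```

The 3.6 fix reported in the verification above corresponds to rejecting the invalid pod before any usage is charged, so the `Used` column never moves.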
Description of problem:
Create a pod whose spec has request > limit: quota usage increases by the requested amount, and it takes about 5 minutes (ResourceQuotaSyncPeriod) to return to the original usage.

Version-Release number of selected component (if applicable):
openshift v1.1.4-16-gb5da002
kubernetes v1.2.0-origin-41-g91d3e75
etcd 2.2.5

How reproducible:
Always

Steps to Reproduce:
1. Apply a hard quota to the namespace

```
$ vi myquota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
spec:
  hard:
    cpu: "30"
    memory: 16Gi
    persistentvolumeclaims: "20"
    pods: "20"
    replicationcontrollers: "30"
    resourcequotas: "1"
    secrets: "15"
    services: "10"

$ oc create -f myquota.yaml -n qwang1 --config=/home/openshift.local.config/master/admin.kubeconfig
```

2. Set requests and limits on a pod/container and create it

```
$ vi pod-request-limit-invalid-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-request-limit-invalid-2
  labels:
    name: pod-request-limit-invalid-2
spec:
  containers:
  - name: pod-request-limit-invalid-2
    image: openshift/mysql-55-centos7:latest
    env:
    - name: MYSQL_USER
      value: userSUM
    - name: MYSQL_PASSWORD
      value: P5J6s8wf
    - name: MYSQL_DATABASE
      value: root
    - name: MYSQL_ROOT_PASSWORD
      value: W5J6s8wf
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
      requests:
        cpu: "600m"
        memory: "512Mi"

$ oc create -f pod-request-limit-invalid-2.yaml
```

3. Check that the pod is not created

```
$ oc describe pod pod-request-limit-invalid-2
```

4. Check the quota for the namespace

```
$ oc describe quota myquota
```

Actual results:
3.

```
[root@ip-172-18-0-136 home]# oc create -f pod-request-limit-invalid-2.yaml
The Pod "pod-request-limit-invalid-2" is invalid.
* spec.containers[0].resources.limits[cpu]: Invalid value: "500m": must be greater than or equal to request
* spec.containers[0].resources.limits[memory]: Invalid value: "256Mi": must be greater than or equal to request
```

4.
```
[root@ip-172-18-0-136 home]# oc describe quota myquota -n qwang1; date
Name:                   myquota
Namespace:              qwang1
Resource                Used   Hard
--------                ----   ----
cpu                     600m   30
memory                  512Mi  16Gi
persistentvolumeclaims  0      20
pods                    1      20
replicationcontrollers  0      30
resourcequotas          1      1
secrets                 9      15
services                0      10
Fri Mar 18 02:46:25 EDT 2016

[root@ip-172-18-0-136 home]# oc describe quota myquota -n qwang1; date
Name:                   myquota
Namespace:              qwang1
Resource                Used  Hard
--------                ----  ----
cpu                     0     30
memory                  0     16Gi
persistentvolumeclaims  0     20
pods                    0     20
replicationcontrollers  0     30
resourcequotas          1     1
secrets                 9     15
services                0     10
Fri Mar 18 02:50:52 EDT 2016
```

Expected results:
Quota usage should not increase, because the pod is never created; or at least it should be restored within a shorter time.

Additional info:
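For contrast, a pod spec that passes validation keeps each request at or below the corresponding limit, so neither the validation error nor the transient quota charge occurs. A minimal sketch (the pod name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-request-limit-valid
spec:
  containers:
  - name: pod-request-limit-valid
    image: openshift/mysql-55-centos7:latest
    resources:
      requests:
        cpu: "400m"      # request <= limit, so validation passes
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```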