Bug 1327532 - Cannot update pod's mutable fields when project later creates quota or limitrange
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OKD
Classification: Red Hat
Component: Pod
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Derek Carr
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-15 10:23 UTC by Xingxing Xia
Modified: 2017-06-02 20:21 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-02 20:21:14 UTC
Target Upstream Version:



Description Xingxing Xia 2016-04-15 10:23:46 UTC
Description of problem:
After creating a pod, we can update the pod's mutable fields such as metadata.labels and spec.activeDeadlineSeconds. After a quota or limitrange is then created in the project, updating that existing pod's mutable fields fails. If we then create a new pod, updating the new pod's mutable fields succeeds. The admission plugin control may have something wrong.

Version-Release number of selected component (if applicable):
openshift v1.1.6-97-gbe0abe5
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

How reproducible:
Always

Steps to Reproduce:
1. oc login, create project xxia-proj
2. Create pod:
$ oc new-app -f origin/examples/sample-app/application-template-stibuild.json
3. After pod database-1-zr2nr is running:
1> $ oc edit pod database-1-zr2nr # Add/change label, e.g. new: abc
2> $ oc edit pod database-1-zr2nr # Under spec, add/change activeDeadlineSeconds: 21600
4. Create limitrange in project (with cluster-admin):
$ oc create -f origin/examples/project-quota/limits.yaml --config=./admin.kubeconfig -n xxia-proj
5. Repeat 3
6. Delete limitrange, then create quota
$ oc delete -f origin/examples/project-quota/limits.yaml --config=./admin.kubeconfig -n xxia-proj
$ oc create -f origin/examples/project-quota/quota.yaml --config=./admin.kubeconfig -n xxia-proj
7. Repeat 3

8. With limitrange or quota existing, create new pod, e.g. by scaling:
$ oc scale dc database --replicas=2
$ oc get pod
database-1-x3omd            1/1       Running   0          41s
database-1-zr2nr            1/1       Running   0          7h
9. Repeat 3 on new pod database-1-x3omd


Actual results:
3. Both 1> and 2> update successfully.
5. Both updates fail with message:
spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
7. Both updates fail with message:
error: pods "database-1-zr2nr" could not be patched: pods "database-1-zr2nr" is forbidden: Failed quota: quota: must specify cpu,memory
9. Both updates succeed on the new pod.

Expected results:
Steps 5 and 7 should succeed the same way step 3 does. An old pod created before the admission plugin's resource existed should be updatable just like a new pod created after it.

Additional info:
Only the pod resource has this issue; others such as dc do not.

Comment 1 Derek Carr 2016-04-15 20:15:50 UTC
Step 5:

The LimitRanger is intercepting CREATE and UPDATE operations for pods.

The code evaluates the updated pod against any active LimitRange(s) that are now present in the project. It merges the default resource requirements into the pod during an UPDATE operation; since the pod did not previously have resource requirements, and we do not yet allow dynamically updating a pod's resource requirements, the UPDATE operation causes an error.

There had been a fix to have LimitRange ignore UPDATE operations for pods, but that appears to have regressed:

https://github.com/kubernetes/kubernetes/pull/18477/files#diff-e3f052839796781484de133fea5d5b5dR62

I opened the upstream issue to track a fix:
https://github.com/kubernetes/kubernetes/issues/24351
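
For illustration, a LimitRange carrying container defaults is enough to trigger the merge described above: on UPDATE, the admission plugin injects default requests/limits into a pod that had none, which then reads as a forbidden field change. A minimal sketch (names and values here are assumptions, not the actual contents of limits.yaml):

```yaml
# Hypothetical LimitRange; values are illustrative, not those of limits.yaml.
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    default:            # default limits merged into containers that omit them
      cpu: 100m
      memory: 512Mi
    defaultRequest:     # default requests merged into containers that omit them
      cpu: 100m
      memory: 256Mi
```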

Comment 2 Derek Carr 2016-04-15 20:28:57 UTC
Step 7:

The quota system is intercepting the CREATE and UPDATE operations.

Since we do not yet support mutation of a pod's resource requirements on UPDATE, it should be safe to ignore pod updates altogether.

https://github.com/kubernetes/kubernetes/blob/master/pkg/quota/evaluator/core/pods.go#L50

I opened the upstream issue to track a fix:
https://github.com/kubernetes/kubernetes/issues/24352
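
The "must specify cpu,memory" failure in step 7 is the behavior of a quota with hard cpu/memory limits: once those resources appear under `hard`, every pod passing through quota admission must declare them. A minimal sketch of such a quota (values assumed, not the actual contents of quota.yaml):

```yaml
# Hypothetical ResourceQuota; once cpu/memory appear in `hard`,
# pods evaluated by this quota must specify those resources.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "2"
    memory: 1Gi
    pods: "10"
```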

Comment 3 Derek Carr 2016-04-15 20:30:14 UTC
A correction to a previous comment on LimitRange and regression:

> There had been a fix to have LimitRange ignore UPDATE operations for pods, but that appears to have regressed:
> https://github.com/kubernetes/kubernetes/pull/18477/files#diff-e3f052839796781484de133fea5d5b5dR62

The above PR never merged into the code base, so there is no actual regression on LimitRange behavior to my knowledge, but a fix should still be made.

Comment 4 Derek Carr 2016-04-15 20:31:26 UTC
I do not think this should be a release blocker, as the best practice is to create a quota prior to creating resources in the project.

Comment 5 Derek Carr 2016-04-15 20:33:03 UTC
Similar discussion on issues that can arise if a quota is added post project creation is here:
https://github.com/kubernetes/kubernetes/issues/22509

Comment 6 Derek Carr 2017-06-02 20:21:14 UTC
For the error reported in step 5, we will not update LimitRange. Right now, the presence of a limit range precludes the ability to run pods that don't specify resource requirements.

For the error reported in step 7, the quota system has since evolved to let you write a quota that tracks only NotBestEffort pods via scopes. For the quota in this scenario, the pod would not have been rejected if the quota had the NotBestEffort scope applied.
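
A sketch of the scoped quota described above (name and values illustrative): with the NotBestEffort scope, the quota only matches pods that declare resource requirements, so best-effort pods like the one in this report are not evaluated against it.

```yaml
# Hypothetical scoped ResourceQuota: tracks only NotBestEffort pods,
# so pods without resource requirements are ignored by this quota.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort-quota
spec:
  hard:
    cpu: "2"
    memory: 1Gi
  scopes:
  - NotBestEffort
```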

