Description of problem:
After creating an app, scaling up the dc fails on oc 3.3.0.25 against server 3.1.1.6.

Version-Release number of selected component (if applicable):
Server: openshift v3.1.1.6-64-g80b61da
Client: oc v3.3.0.25+d2ac65e-dirty

How reproducible:
Always

Steps to Reproduce:
1. Create an app
# oc new-app --image-stream=openshift/perl:5.20 --name=myapp --code=https://github.com/openshift/sti-perl --context-dir=5.20/test/sample-test-app/
or
# oc new-app -f https://raw.githubusercontent.com/openshift/origin/master/examples/sample-app/application-template-stibuild.json
2. Scale the deployment config
# oc scale deploymentconfig myapp --replicas=3
error: Scaling the resource failed with: scale "myapp" is invalid: metadata.uid: invalid value '3de541e0-6a88-11e6-93cb-fa163eb1d16c', Details: field is immutable; Current resource version 40920

Actual results:
2. Scaling the dc fails; for more detail please check the attachment.

Expected results:
deploymentconfig "myapp" scaled

Additional info:
Not reproduced on oc 3.1 against openshift 3.1
Not reproduced on oc 3.3 against openshift 3.2/3.3
agoldste@ (and I) did some digging, and the problem stems from three "issues":

1. OSE 3.1 has a ValidateScaleUpdate method, which calls ValidateObjectMetaUpdate. This is called on Scale updates in OSE 3.1 against a "fake" Scale object with only Name, Namespace, and CreationTimestamp filled out in ObjectMeta.
2. OpenShift's scaler code used in `oc scale` for DeploymentConfigs fetches the entire DC to scale, and then generates a Scale object from that to submit.
3. OSE 3.3's helper method for generating Scale objects sets the UID, whereas OSE 3.1's does not (see https://github.com/openshift/origin/pull/6233).

Thus, OSE 3.3's oc submits a scale update with the UID set, and OSE 3.1 validates the ObjectMeta update against the fake "old" Scale, sees the filled-out UID as an attempt to change the UID, and bails.

I think the right solution here is to not fetch the entire DeploymentConfig when attempting to submit a scale update, but instead to fetch just the Scale object itself.
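For anyone following along, here is a minimal, self-contained Go sketch of the failure mode described above. This is not the actual OpenShift/Kubernetes validation code; the objectMeta type and validateMetaUpdate helper are simplified stand-ins that only show why a Scale built with a UID trips the immutability check when validated against a fake "old" Scale whose UID is empty:

```go
package main

import "fmt"

// objectMeta is a trimmed-down stand-in for the real ObjectMeta type; only
// the fields relevant to this bug are included.
type objectMeta struct {
	Name              string
	Namespace         string
	UID               string
	CreationTimestamp string
}

// validateMetaUpdate mimics, in a very simplified way, the immutability check
// done during ObjectMeta update validation: if the UID differs between the
// old and new metadata, the update is rejected.
func validateMetaUpdate(newMeta, oldMeta objectMeta) error {
	if newMeta.UID != oldMeta.UID {
		return fmt.Errorf("metadata.uid: invalid value '%s', Details: field is immutable", newMeta.UID)
	}
	return nil
}

func main() {
	// The "fake" old Scale that OSE 3.1 validates against: Name, Namespace and
	// CreationTimestamp are filled out, but the UID is left empty.
	oldScale := objectMeta{Name: "myapp", Namespace: "demo", CreationTimestamp: "2016-08-25T00:00:00Z"}

	// OSE 3.3's oc builds the Scale from the full DC, so the UID is populated.
	newScale := objectMeta{Name: "myapp", Namespace: "demo", UID: "3de541e0-6a88-11e6-93cb-fa163eb1d16c"}

	if err := validateMetaUpdate(newScale, oldScale); err != nil {
		fmt.Println("error:", err) // same class of failure as reported above
	}
}
```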
It looks like we only fetch the DC to check whether dc.Spec.Test is set, and print an error (but not actually fail the update) when it is. IMO, it seems worth dropping that warning message in order to do the right thing with the Scale update; see the sketch below for the general shape of the approach.
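This is not the actual change in the PR referenced later in this bug, but as a rough illustration of "talk to the scale subresource directly instead of fetching the whole object", here is a minimal sketch using present-day client-go against a plain Deployment (the 3.x DeploymentConfig client is not used here); the namespace, name, and kubeconfig handling are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// scaleViaSubresource scales a workload by reading and updating its scale
// subresource directly, rather than fetching the entire object first.
func scaleViaSubresource(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	// Resubmitting the scale the server gave us avoids hand-building ObjectMeta,
	// which is what tripped the immutability check in the first place.
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := scaleViaSubresource(context.Background(), cs, "demo", "myapp", 3); err != nil {
		panic(err)
	}
	fmt.Println("scaled")
}
```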
Adding Mikail, PTAL at the solution proposed by Solly.
SGTM
Fixed in https://github.com/openshift/origin/pull/10684
The fix has not yet been merged into OSE as of oc v3.3.0.26. Verified it with the origin client oc v1.3.0-alpha.3+4250e53 against the server (openshift v3.1.1.6-64-g80b61da).
Better to verify it again on OSE 3.3 once the code is merged. In order to keep tracking this bug, marking it on_qa.
Verified on oc v3.3.0.27 against openshift v3.1.1.6-64-g80b61da

Steps:
# oc new-app -f https://raw.githubusercontent.com/openshift/origin/master/examples/sample-app/application-template-stibuild.json
# oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
database-1-yv25d            1/1       Running   0          56s
ruby-sample-build-1-build   1/1       Running   0          1m
# oc scale dc/database --replicas=5
deploymentconfig "database" scaled
# oc get pod
NAME                        READY     STATUS      RESTARTS   AGE
database-1-biklu            1/1       Running     0          3m
database-1-due7c            1/1       Running     0          3m
database-1-ktq9u            1/1       Running     0          3m
database-1-xj4hv            1/1       Running     0          3m
database-1-yv25d            1/1       Running     0          4m
frontend-1-yfeq9            1/1       Running     0          2m
frontend-1-yfgh1            1/1       Running     0          2m
ruby-sample-build-1-build   0/1       Completed   0          4m