Created attachment 1153959 [details]
scale request

Description of problem:
After a deployment has created a new pod and deleted the old pod, clicking the up arrow to scale up the pod results in a message saying 'Scaling up...' but nothing happens.

Version-Release number of selected component (if applicable):
3.2

How reproducible:
Always

Steps to Reproduce:
1. Create an app with new-app: 'oc new-app https://github.com/csrwng/simple-ruby.git'
2. After the build completes and an initial deployment has happened, start a new build.
3. On the overview page, wait for the new deployment to create the new pod and delete the previous pod.
4. Immediately after the previous pod disappears, click the up arrow to scale up.

Actual results:
The pod says 'Scaling...' but nothing happens.

Expected results:
The pod scales up successfully.

Additional info:
Created attachment 1153982 [details]
event log
It seems like a race condition in the deployment controller; the UI is just updating the scale resource on the DC. The key seems to be timing: if you scale up right after the old deployment's pods disappear, the new deployment has already scaled up, but the deployment still reports that it is "in progress".
We can reproduce it with the command: `oc deploy simple-ruby --latest; oc scale dc/simple-ruby --replicas=5`
That's because 1) the deployment runs as a separate process with the desired replica count fixed, which means the deployment needs to complete before it can be scaled, and 2) even after the deployment process finishes and the pods should be scaled up to dc.spec.replicas, we have hacked the controller to restore dc.spec.replicas back to rc.spec.replicas, just because we need to support older clients that try to scale a deploymentconfig. For now, you should not try to scale the dc while a deployment is in flight; set the replica count before the deployment starts or after it completes.
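The safe pattern described above can be sketched as a short shell sequence. This is only an illustration, assuming a DC named `simple-ruby`; `oc logs -f dc/...` follows the deployer pod, which the thread suggests tracks the deployment's progress:

```shell
# Sketch of the safe scaling pattern for a DC named simple-ruby (assumed name).

# Option 1: set the replica count BEFORE triggering the deployment,
# so the deployment process picks up the desired size.
oc scale dc/simple-ruby --replicas=5
oc deploy simple-ruby --latest

# Option 2: wait for the in-flight deployment to finish, THEN scale.
oc deploy simple-ruby --latest
oc logs -f dc/simple-ruby        # blocks while the deployer pod runs
oc scale dc/simple-ruby --replicas=5
```

Either way, the scale request never races against an in-flight deployment, so the controller does not overwrite dc.spec.replicas.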
I'll disable the scaling controls during a deployment.
Thanks Sam!
https://github.com/openshift/origin/pull/8761
Pull request in the origin/master merge queue.
Confirmed with ami devenv-rhel7_4294. While the deployment is in flight, the scaling controls are disabled. But if you scale up immediately after the deployment completes and the controls are re-enabled, the pod says 'Scaling to x...' and, even after a long wait, the scale does not succeed. Please see the attachments.

openshift v1.3.0-alpha.1-41-g681170a
kubernetes v1.3.0-alpha.1-331-g0522e63
etcd 2.3.0
Created attachment 1163019 [details]
scaling
(In reply to zhou ying from comment #10)
> Confirmed with ami devenv-rhel7_4294. While the deployment is in flight,
> the scaling controls are disabled. But if you scale up immediately after
> the deployment completes and the controls are re-enabled, the pod says
> 'Scaling to x...' and, even after a long wait, the scale does not succeed.

There are several reasons this could happen, and it might not be a bug. Can you check that you're not at your pod quota, and check the browse events page to see if there are any warnings?
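Those two checks can also be done from the CLI. A minimal sketch, assuming the project is the current one (pass `-n <project>` otherwise):

```shell
# Check whether a ResourceQuota is limiting pod creation
# (a used count at the hard limit would explain a stalled scale-up).
oc get quota
oc describe quota

# Look for warning events, e.g. failed scheduling or quota denials.
oc get events | grep -i -E 'warning|failed'
```

If the quota is exhausted or a warning event shows a failed pod creation, the stuck "Scaling to x..." state has a cause outside the deployment controller.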
yinzhou, any update? Do you still see the problem?
Confirmed with ami devenv-rhel7_4530; can't reproduce this issue now in the browser. But by command:

[root@ip-172-18-2-106 amd64]# oc get po
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          7m
ruby-ex-3-hzkj1   1/1       Running     0          2m
ruby-ex-3-qpgrg   1/1       Running     0          2m
ruby-ex-3-unfuj   1/1       Running     0          2m
[root@ip-172-18-2-106 amd64]# oc deploy ruby-ex --latest ; oc scale dc/ruby-ex --replicas=5
Started deployment #4
Use 'oc logs -f dc/ruby-ex' to track its progress.
deploymentconfig "ruby-ex" scaled
[root@ip-172-18-2-106 amd64]# oc get po
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          12m
ruby-ex-4-26fsh   1/1       Running     0          4m
ruby-ex-4-hn8o8   1/1       Running     0          4m
ruby-ex-4-jiigp   1/1       Running     0          4m

Even though the scale command reported success, the new deployment still has only 3 replicas instead of 5.
Michail, Dan (Mace), do we want to guard against this problem when scaling with the CLI?
Reassigning since the web console side is fixed. See comment #14.
*** Bug 1353834 has been marked as a duplicate of this bug. ***
*** Bug 1306720 has been marked as a duplicate of this bug. ***
We will probably just display a warning to users in the CLI; after talking with Michalis, we don't want to prevent them from scaling.
Cesar, Sam: I don't think we can show a warning in the CLI in a reasonable way, as `oc scale` is upstream. We would have to create a 'smarter' wrapper that checks the state of the DC, and I'm not 100% convinced we want to do that refactor only to gain a warning. I'm in favor of closing this, as the UI portion is now fixed. WDYT?
Michal, I'm ok with closing it as well.
Setting ON_QA so QA can close this.
Confirmed with the latest 3.3 env; the issue is fixed in the browser. While a deployment is in flight, the scale-up arrow is disabled.

openshift v3.3.0.18
kubernetes v1.3.0+507d3a7
etcd 2.3.0+git
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1933