Bug 1333158

Summary: Scaling widget is a little flaky
Product: OpenShift Container Platform
Reporter: Dan McPherson <dmcphers>
Component: Management Console
Assignee: Samuel Padgett <spadgett>
Status: CLOSED ERRATA
QA Contact: Yadan Pei <yapei>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.2.0
CC: adellape, aos-bugs, jokerman, mmccomas
Target Milestone: ---
Target Release: 3.2.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When scaling deployments in the web console, if multiple scaling requests were made in a short amount of time, the operation could result in an incorrect number of replicas. This bug fix addresses a timing issue, and as a result the correct number of replicas is now set in this scenario.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-27 15:06:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Dan McPherson 2016-05-04 19:31:42 UTC
Description of problem:

Using the scaling widget is a little flaky. There are times when you ask it to scale up 4 or 5 positions, say from 1 to 5. The widget acknowledges in the smaller text that it is scaling to 5, but at some point it removes the smaller text, decides to only scale to 4, and stops. What's likely happening is that the widget is getting the expected value from the server (potentially from an eventually consistent replica) and overriding the request from the client.


Version-Release number of selected component (if applicable):


How reproducible: About 30% of the time when I try to recreate it, I get some sort of failure.


Steps to Reproduce:
1. Keep requesting scale-up operations at a 1 or 2 second interval.


Additional info:

It would probably make sense to not poll the server for the target value until after a longer period of inactivity on the client.
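
That suggestion could look roughly like the following TypeScript sketch. It is only an illustration of the idea, assuming a hypothetical onUserScale/onServerTarget split and a made-up quiet period; none of these names come from the actual console code.

// Hedged sketch: only trust server-reported targets after the client has been
// idle for a while. All names here are hypothetical, not the real console code.

const QUIET_PERIOD_MS = 5000;                 // assumed quiet period after the last click

let requestedReplicas: number | null = null;  // what the user last asked for
let lastUserActionAt = 0;                     // timestamp of the last scale click

// Called when the user clicks the scaling widget.
function onUserScale(target: number, sendScaleRequest: (n: number) => Promise<void>): void {
  requestedReplicas = target;
  lastUserActionAt = Date.now();
  void sendScaleRequest(target);              // error handling omitted in this sketch
}

// Called whenever the server reports a target replica count (poll or watch).
// Returns the value the widget should display.
function onServerTarget(serverReplicas: number): number {
  const idleMs = Date.now() - lastUserActionAt;
  if (requestedReplicas !== null && idleMs < QUIET_PERIOD_MS) {
    // The client was active recently; keep showing the requested value so a
    // stale (eventually consistent) server response cannot roll the widget back.
    return requestedReplicas;
  }
  requestedReplicas = null;                   // quiet period passed; trust the server again
  return serverReplicas;
}

A time-based guard like this trades a little responsiveness for stability: the widget ignores genuine server-side changes for a few seconds after the last click.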

Comment 1 Samuel Padgett 2016-05-05 17:24:39 UTC
I can reproduce. It happens if you scale in the window between when we make a scaling request and when we get the updated rc.spec.replicas in the watch callback.
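
A guard against that window could take roughly the following shape, shown here only as a hedged TypeScript sketch with illustrative names; it is not necessarily the approach taken in the fix linked below.

// Hedged sketch: remember the in-flight scale request and ignore watch updates
// until rc.spec.replicas catches up to it. Names are illustrative only.

let pendingReplicas: number | null = null;    // the value we asked the API server for

// Called when the widget issues a scale request.
async function scaleTo(target: number, patchReplicas: (n: number) => Promise<void>): Promise<void> {
  pendingReplicas = target;
  await patchReplicas(target);
}

// Watch callback for the replication controller; returns what the widget shows.
function onWatchUpdate(specReplicas: number): number {
  if (pendingReplicas !== null && specReplicas !== pendingReplicas) {
    // The watch has not reflected the request yet, so this value may be stale;
    // keep the requested value instead of letting it overwrite the widget.
    return pendingReplicas;
  }
  pendingReplicas = null;                     // server and client agree again
  return specReplicas;
}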

Comment 2 Samuel Padgett 2016-05-05 17:45:24 UTC
https://github.com/openshift/origin/pull/8763

Comment 4 Yadan Pei 2016-05-26 01:47:53 UTC
Since the target release is 3.2.1, this will be checked when the 3.2.1 puddle is ready.

Comment 5 Yadan Pei 2016-05-26 09:44:33 UTC
Moving to MODIFIED and waiting for the 3.2.1 puddle.

Comment 6 Yadan Pei 2016-05-26 09:55:41 UTC
Here are the steps to check whether the code is merged to the enterprise-3.2 branch:


Clone the openshift/ose repo:

$ cd ose
$ git log --pretty="%h %an %cd - %s" --date=local enterprise-3.2 | grep '0858b1f'
$ git log --pretty="%h %an %cd - %s" --date=local master | grep '0858b1f'
0858b1f Samuel Padgett Mon May 9 21:36:20 2016 - Fix timing issue scaling deployments

As can be seen from the result, PR #209 is not merged to the enterprise-3.2 branch, so this cannot be tested yet.

Comment 8 Yadan Pei 2016-06-06 08:32:10 UTC
Checked against:
oc v3.2.1.1-1-g33fa4ea
kubernetes v1.2.0-36-g4a3f9c5

Scaling deployments works well.

Comment 10 errata-xmlrpc 2016-06-27 15:06:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1343