Bug 1333158 - Scaling widget is a little flaky
Summary: Scaling widget is a little flaky
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.2.1
Assignee: Samuel Padgett
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-04 19:31 UTC by Dan McPherson
Modified: 2016-06-27 15:06 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When scaling deployments in the web console, if multiple scaling requests were made in a short amount of time, it was possible for the operation to result in an incorrect number of replicas. This bug fix addresses a timing issue, and as a result the correct number of replicas is now set in this scenario.
Clone Of:
Environment:
Last Closed: 2016-06-27 15:06:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHBA-2016:1343 | 0 | normal | SHIPPED_LIVE | Red Hat OpenShift Enterprise 3.2.1.1 bug fix and enhancement update | 2016-06-27 19:04:05 UTC

Description Dan McPherson 2016-05-04 19:31:42 UTC
Description of problem:

Using the scaling widget is a little flaky. There are times when you ask it to scale up 4 or 5 positions, say from 1 to 5. The widget acknowledges in the smaller text that it is scaling to 5, but at some point it removes the smaller text, decides to scale only to 4, and stops. What's likely happening is that the widget is getting the expected value from the server (potentially from an eventually consistent replica) and overriding the request from the client.
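A minimal sketch of the suspected race in TypeScript. All names here (scaleTo, onWatchUpdate, sendScaleRequest, displayedReplicas) are illustrative assumptions, not the console's actual code:

// Suspected race, sketched with hypothetical names.
let displayedReplicas = 1;

function sendScaleRequest(target: number): void {
  // PUT the new spec.replicas to the API server (elided).
}

function scaleTo(target: number): void {
  displayedReplicas = target; // optimistic UI update, e.g. 5
  sendScaleRequest(target);
}

// The watch callback delivers rc.spec.replicas from the server. An event
// reflecting an earlier state (e.g. 4) can arrive after the user has
// already asked for 5 and silently overwrite the newer target.
function onWatchUpdate(rc: { spec: { replicas: number } }): void {
  displayedReplicas = rc.spec.replicas; // stale 4 clobbers requested 5
}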


Version-Release number of selected component (if applicable):


How reproducible: Roughly 30% of my attempts to recreate it produce some sort of failure.


Steps to Reproduce:
1. Keep requesting scale-up operations at a 1 or 2 second interval.


Additional info:

It would probably make sense not to poll the server for the target value until after a longer period of inactivity on the client.
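A minimal sketch of that idea in TypeScript, assuming a fixed idle window. The 2-second value and all names are illustrative assumptions, not the console's actual code:

// Only trust the server's value after the client has been idle a while.
let displayedReplicas = 1;
let idleTimer: ReturnType<typeof setTimeout> | undefined;
let clientIdle = true;

function sendScaleRequest(target: number): void {
  // PUT the new spec.replicas to the API server (elided).
}

function scaleTo(target: number): void {
  clientIdle = false; // user is actively scaling
  displayedReplicas = target;
  sendScaleRequest(target);
  if (idleTimer !== undefined) clearTimeout(idleTimer);
  idleTimer = setTimeout(() => { clientIdle = true; }, 2000); // assumed window
}

function onWatchUpdate(rc: { spec: { replicas: number } }): void {
  if (!clientIdle) return; // ignore server echoes until the client settles
  displayedReplicas = rc.spec.replicas;
}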

Comment 1 Samuel Padgett 2016-05-05 17:24:39 UTC
I can reproduce. It happens if you scale in the window between when we make a scaling request and when we get an updated rc.spec.replicas in the watch callback.
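A sketch of one way to guard that window, again with hypothetical names (the actual change is in the PR linked in the next comment): remember the in-flight request and drop watch events that don't yet reflect it.

// Guard the window between the scale request and the watch echo.
let displayedReplicas = 1;
let pendingReplicas: number | null = null; // non-null while a request is in flight

function sendScaleRequest(target: number): void {
  // PUT the new spec.replicas to the API server (elided).
}

function scaleTo(target: number): void {
  pendingReplicas = target;
  displayedReplicas = target;
  sendScaleRequest(target);
}

function onWatchUpdate(rc: { spec: { replicas: number } }): void {
  // Drop stale events until rc.spec.replicas matches the pending request.
  if (pendingReplicas !== null && rc.spec.replicas !== pendingReplicas) {
    return;
  }
  pendingReplicas = null;          // server caught up
  displayedReplicas = rc.spec.replicas; // safe to trust the watch again
}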

Comment 2 Samuel Padgett 2016-05-05 17:45:24 UTC
https://github.com/openshift/origin/pull/8763

Comment 4 Yadan Pei 2016-05-26 01:47:53 UTC
Since the target release is 3.2.1, will check when the 3.2.1 puddle is ready.

Comment 5 Yadan Pei 2016-05-26 09:44:33 UTC
Moving to MODIFIED; waiting for the 3.2.1 puddle.

Comment 6 Yadan Pei 2016-05-26 09:55:41 UTC
Here are the steps to check whether the code is merged to the enterprise-3.2 branch:


Clone the openshift/ose repo, then search for the fix commit on both branches:

$ cd ose
$ git log --pretty="%h %an %cd - %s" --date=local enterprise-3.2 | grep '0858b1f'
$ git log --pretty="%h %an %cd - %s" --date=local master | grep '0858b1f'
0858b1f Samuel Padgett Mon May 9 21:36:20 2016 - Fix timing issue scaling deployments

As the results show, PR #209 is not merged to the enterprise-3.2 branch, so this cannot be tested yet.

Comment 8 Yadan Pei 2016-06-06 08:32:10 UTC
Checked against 
oc v3.2.1.1-1-g33fa4ea
kubernetes v1.2.0-36-g4a3f9c5

Scaling deployments works well.

Comment 10 errata-xmlrpc 2016-06-27 15:06:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1343

