Bug 866642 - rhc domain request is slow
Status: CLOSED UPSTREAM
Product: OpenShift Origin
Classification: Red Hat
Component: Command Line Interface
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Assigned To: John (J5) Palmieri
QA Contact: libra bugs
Reported: 2012-10-15 15:12 EDT by Mike McGrath
Modified: 2015-05-14 22:06 EDT

Doc Type: Bug Fix
Last Closed: 2012-10-17 13:05:01 EDT
Type: Bug


Attachments: None
Description Mike McGrath 2012-10-15 15:12:52 EDT
Description of problem:

I've got 7 applications, some with mongo or jenkins-client.  When running "rhc domain", the command takes about 16 seconds to complete.  It looks like it's pausing at every application and calling back to the server, whereas before it retrieved all of this information immediately.

This is with rubygem-rhc-0.98.16-1.fc17.noarch
Comment 1 Clayton Coleman 2012-10-15 21:38:44 EDT
I would expect that for rhc domain we are making one call to the API description, one call to get the list of applications for the domain, and then one call per application to retrieve its list of cartridges.
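Under that call pattern the round-trip count grows linearly with the number of applications. A quick sketch of the arithmetic (the method name is illustrative, not rhc's actual code):

```ruby
# Round trips for `rhc domain` under the call pattern described above:
# one for the API description, one for the application list, and one
# per application for its cartridge list.
def round_trips(app_count)
  1 + 1 + app_count
end

# With the reporter's 7 applications:
puts round_trips(7)  # => 9
```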
Comment 2 John (J5) Palmieri 2012-10-16 09:56:21 EDT
It prints out the information as it gets it, so the pause you see is the time it takes to fetch the list of cartridges for each application.  Here is how it works:

login                               # roundtrip
domain = grab_domain                # roundtrip
output domain_header
domain.applications.each do |app|   # roundtrip
  output app_header
  app.cartridges.each do |cart|     # roundtrip
    output cart_info
  end
end


The old code simply got the user_info during login, which included a hash of all the applications, since the backend only supported one domain at the time.

There are a number of issues contributing to the slowness here.

1) First, we have three extra round trips, due to the increased complexity of the data model.  Persistent connections will eventually negate the cost of the multiple connection handshakes.  As for the complexity of the model, it is there for a reason.

2) Cartridges now contain a list of their properties, whereas the old code only had the name and type of each cartridge.  Since the backend has to retrieve these dynamically, I suspect there is extra overhead for every property added.  We only display the connection URL here, but we still take the hit of grabbing every property.  This is most likely the biggest slowdown.

3) There is a psychological component: I bet that if we load all the data first and then display it, the delay won't be as noticeable.  The pauses contribute to the feeling that the application is slow.  An even better "fix" is to load into HighLine's scrollable buffer, so the pauses go unnoticed as the text loads off screen while the user reads.

The issue with using the buffer is that we would need a main loop, or threads, so the user could interact while data was still loading.  Given that HighLine has had trouble between versions with even simple input, I would experiment with this option before relying on it in production code.
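The simpler variant of fix (3), fetching everything before printing anything, can be sketched as follows.  This is a minimal sketch, not rhc's actual code: fetch_apps and fetch_cartridges are hypothetical stand-ins for the real REST client calls.

```ruby
# Sketch: gather all output into a buffer while the round trips happen,
# then flush it in one go, so the user never watches the cursor stall
# mid-listing.  fetch_apps / fetch_cartridges are stand-ins for rhc's
# actual REST calls.
def render_domain(domain, fetch_apps, fetch_cartridges)
  buffer = []
  buffer << "Domain: #{domain}"
  fetch_apps.call(domain).each do |app|          # roundtrip
    buffer << "  App: #{app}"
    fetch_cartridges.call(app).each do |cart|    # roundtrip
      buffer << "    Cartridge: #{cart}"
    end
  end
  buffer.join("\n")   # display happens only after all fetches complete
end

# Stubbed usage:
apps  = ->(domain) { ["app1", "app2"] }
carts = ->(app)    { ["php-5.3"] }
puts render_domain("mydomain", apps, carts)
```

The total wall-clock time is unchanged; only the perceived latency improves, because nothing is printed until everything has arrived.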
Comment 3 John (J5) Palmieri 2012-10-16 11:55:35 EDT
Buffers aren't an option without a heavy refactor.  HighLine doesn't have a buffer API; it simply has the ability to set paging at x number of lines.  Since HighLine isn't actually buffering output, paging only happens if you output more than x lines in one say statement.  Multiple say statements are flushed immediately, and no state is kept about how many lines have already been displayed.
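A toy model of that behavior makes the limitation concrete.  This is not HighLine's real code, just a sketch of the per-call paging described above: the line count is not carried across calls, so one large say can trigger paging while many small says never do.

```ruby
# Toy model of say-level paging: the pager counts lines only within a
# single call, so paging triggers for one large say but never for many
# small ones.  Illustrative only; not HighLine's implementation.
class TinyPager
  attr_reader :paged

  def initialize(page_at)
    @page_at = page_at
    @paged = false
  end

  def say(text)
    lines = text.lines.count
    @paged = true if lines > @page_at   # state is NOT kept across calls
  end
end

one_big = TinyPager.new(3)
one_big.say("a\nb\nc\nd\ne\n")          # 5 lines in one call -> would page

many_small = TinyPager.new(3)
5.times { many_small.say("line\n") }    # 1 line per call -> never pages
```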
Comment 4 John Poelstra 2012-10-16 13:40:17 EDT
Need to create a user story to encompass all the work required to fix this issue properly.
Comment 5 John (J5) Palmieri 2012-10-17 13:05:01 EDT
Added as story US3025.
