Bug 961785 - Local gear of a scaling application is always down after inserting a large amount of data to a DB cartridge times out
Summary: Local gear of a scaling application is always down after inserting a large amount of data to a DB cartridge times out
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Paul Morie
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-05-10 11:39 UTC by Zhe Wang
Modified: 2015-05-14 23:18 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-20 17:39:44 UTC
Target Upstream Version:
Embargoed:


Attachments
Sample application to insert a large amount of data to DB (3.48 KB, application/octet-stream)
2013-05-10 11:39 UTC, Zhe Wang

Description Zhe Wang 2013-05-10 11:39:38 UTC
Created attachment 746130 [details]
Sample application to insert a large amount of data to DB

Description of problem:
Given a scaling application with a DB cartridge embedded, inserting a large amount of data (say, 860 MB) in a single request always produces a "Time Out" error, and from then on the local gear shows as down on the haproxy-status page.

This test is not a common use case for general users, but it raises the question of whether nodes could be made more robust.

Version-Release number of selected component (if applicable):
devenv-stage_348
STG(devenv-stage_347)

How reproducible:
always

Steps to Reproduce:
1. create a scaling application with a DB cartridge embedded
rhc app create spy27 python-2.7 mysql-5.1 -s

2. modify its setup.py file to enable the "MySQL-python" module

3. modify the <app_repo>/wsgi/application file with the code in the attachment (a sketch of such a handler follows these steps)

4. push the change

5. verify that the code works:
visit <app_repo>/insert?size=10
then <app_repo>/show

6. insert a large amount of data into this app at once by visiting
<app_repo>/insert

7. check the haproxy-status page of this app

(step 6 inserts 500000 records into the MySQL cartridge, around 860 MB)
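
For context, here is a minimal sketch of the kind of wsgi/application handler step 3 describes. Attachment 746130 itself is not reproduced here: the table layout, column names, and payload size are assumptions, the /insert and /show routes come from the steps above, and the OPENSHIFT_MYSQL_DB_* variables are the standard v2 MySQL cartridge environment variables. Step 2 typically amounts to adding 'MySQL-python' to install_requires in setup.py.

import os
from urlparse import parse_qs

import MySQLdb

def application(environ, start_response):
    # Connection details come from the cartridge environment; the database
    # name defaults to the application name (here, spy27).
    conn = MySQLdb.connect(
        host=os.environ['OPENSHIFT_MYSQL_DB_HOST'],
        port=int(os.environ['OPENSHIFT_MYSQL_DB_PORT']),
        user=os.environ['OPENSHIFT_MYSQL_DB_USERNAME'],
        passwd=os.environ['OPENSHIFT_MYSQL_DB_PASSWORD'],
        db=os.environ['OPENSHIFT_APP_NAME'])
    conn.autocommit(True)  # per-row commits: the pattern comment 2 points at
    cur = conn.cursor()
    if environ.get('PATH_INFO', '/') == '/insert':
        qs = parse_qs(environ.get('QUERY_STRING', ''))
        size = int(qs.get('size', ['500000'])[0])  # default: all 500000 rows
        cur.execute("CREATE TABLE IF NOT EXISTS data (id INT, payload TEXT)")
        for i in xrange(size):
            # one INSERT (and one commit) per row, in a tight loop, while the
            # HTTP request is still being processed
            cur.execute("INSERT INTO data VALUES (%s, %s)", (i, 'x' * 1024))
        body = 'inserted %d rows' % size
    else:  # /show
        cur.execute("SELECT COUNT(*) FROM data")
        body = '%d rows' % cur.fetchone()[0]
    conn.close()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]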

Actual results:
In step 5, the insertion succeeds, and the expected results are shown when accessing <app_repo>/show.

However, in step 6, it fails with a 504 time-out error, and the status of the app's local gear is down. Moreover, visiting the app's homepage always redirects to the haproxy-status page.

Expected results:
The app should be more robust in dealing with a heavy workload.

Additional info:

Comment 1 openshift-github-bot 2013-05-17 01:40:44 UTC
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/f9377bc46c521c675dbd7a276faf9f384618d4ed
Bug 961785 - Cartridge URL install failed

* Insufficient output to debug

Comment 2 Paul Morie 2013-08-20 17:39:44 UTC
I tested orders of magnitude from ten up to 100k records and didn't experience a timeout. When I tried to insert 1M records, the request timed out, but the app stayed up. I don't think this represents a problem with the platform: the app is inserting single rows in a tight loop with autocommit while processing the HTTP request, which isn't very robust. Turning off autocommit before doing the inserts would improve performance.
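
For illustration, a minimal sketch of the batched alternative, assuming MySQL-python (the connection parameters and the data table are placeholders):

import MySQLdb

# Placeholder credentials; on OpenShift these would come from the
# OPENSHIFT_MYSQL_DB_* environment variables.
conn = MySQLdb.connect(host='127.0.0.1', user='admin', passwd='secret', db='spy27')
conn.autocommit(False)  # one transaction instead of a commit per row
cur = conn.cursor()
rows = [(i, 'x' * 1024) for i in xrange(500000)]
# executemany batches the statements; a single commit ends the transaction
cur.executemany("INSERT INTO data VALUES (%s, %s)", rows)
conn.commit()
conn.close()

Committing in chunks (for example, every 10,000 rows) would also bound transaction size and memory use.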

