Bug 961785 - Local gear of a scaling application is always down after an insert of a large amount of data into a DB cartridge times out
Status: CLOSED NOTABUG
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified  OS: Unspecified
Priority: low  Severity: medium
Assigned To: Paul Morie
QA Contact: libra bugs
Reported: 2013-05-10 07:39 EDT by Zhe Wang
Modified: 2015-05-14 19:18 EDT
CC: 2 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-08-20 13:39:44 EDT

Attachments
Sample application to insert a large amount of data to the DB (3.48 KB, application/octet-stream)
2013-05-10 07:39 EDT, Zhe Wang

Description Zhe Wang 2013-05-10 07:39:38 EDT
Created attachment 746130
Sample application to insert a large amount of data to the DB

Description of problem:
Given a scaling application with a DB cartridge embedded, inserting a large amount of data (say, 860M) in a single request always fails with a "Time Out" error, and afterwards the local gear is shown as down on the haproxy-status page.

This is not a common use case for general users, but I was wondering if we could make nodes more robust.
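
The attachment itself is not reproduced here; the following is a minimal sketch of what such an insert handler plausibly looks like, assuming the MySQL-python (MySQLdb) binding and the standard OpenShift v2 MySQL cartridge environment variables. The table name, payload size, and helper name insert_rows are made up for illustration:

import os
import MySQLdb

def insert_rows(size):
    # Connect via the OpenShift v2 MySQL cartridge environment variables.
    conn = MySQLdb.connect(
        host=os.environ['OPENSHIFT_MYSQL_DB_HOST'],
        port=int(os.environ['OPENSHIFT_MYSQL_DB_PORT']),
        user=os.environ['OPENSHIFT_MYSQL_DB_USERNAME'],
        passwd=os.environ['OPENSHIFT_MYSQL_DB_PASSWORD'],
        db=os.environ['OPENSHIFT_APP_NAME'])
    conn.autocommit(True)  # every INSERT commits on its own
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS records "
                "(id INT, payload VARCHAR(2048))")
    # Tight single-row insert loop executed while serving the HTTP request;
    # 500000 rows of ~1.8 KB each is roughly the 860M data set from this report.
    for i in xrange(size):
        cur.execute("INSERT INTO records VALUES (%s, %s)", (i, 'x' * 1800))
    cur.close()
    conn.close()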

Version-Release number of selected component (if applicable):
devenv-stage_348
STG(devenv-stage_347)

How reproducible:
always

Steps to Reproduce:
1. Create a scaling application with a DB cartridge embedded:
rhc app create spy27 python-2.7 mysql-5.1 -s

2. Modify its setup.py file to enable the "MySQL-python" module (a hedged example of this change appears after these steps).

3. Modify the <app_repo>/wsgi/application file with the code in the attachment.

4. Push the change.

5. Verify the code works: visit <app_repo>/insert?size=10, then <app_repo>/show.

6. Insert a large amount of data into this app in a single request by visiting <app_repo>/insert.

7. Check the haproxy-status page of this app.

(Step 6 inserts 500000 records into the MySQL cartridge, around 860M.)
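
A hedged example of the step-2 change, assuming a setup.py along the lines of the stock OpenShift python-2.7 template; the metadata values are placeholders, and the only relevant part is the install_requires entry:

from setuptools import setup

setup(
    name='spy27',
    version='1.0',
    description='OpenShift scaling app used to reproduce the bug',
    # Adding (or uncommenting) this line makes OpenShift install the
    # MySQLdb binding on the next git push:
    install_requires=['MySQL-python'],
)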

Actual results:
In Step 5, the small insertion succeeds, and accessing <app_repo>/show returns the expected results.

However, in Step 6, the request fails with a 504 time-out error, and the status of the app's local gear is down. Moreover, visiting the app's homepage always redirects to the haproxy-status page.

Expected results:
The app should be robust enough to handle this heavy workload without the gear going down.

Additional info:
Comment 1 openshift-github-bot 2013-05-16 21:40:44 EDT
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/f9377bc46c521c675dbd7a276faf9f384618d4ed
Bug 961785 - Cartridge URL install failed

* Insufficient output to debug
Comment 2 Paul Morie 2013-08-20 13:39:44 EDT
I tested at orders of magnitude from ten up to 100k records and didn't experience a timeout. When I tried to insert 1M records, the request timed out, but the app stayed up. I don't think this represents a problem with the platform: the app inserts single rows in a tight loop with autocommit enabled while processing the HTTP request, which isn't very robust. Turning off autocommit before doing the inserts would improve performance.
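
For reference, a minimal sketch of the change suggested above, assuming MySQLdb; the connection parameters and the records table are placeholders:

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='app', passwd='secret', db='spy27')
conn.autocommit(False)  # batch INSERTs into explicit transactions
cur = conn.cursor()
try:
    # Commit every 10000 rows instead of once per row.
    for start in xrange(0, 500000, 10000):
        cur.executemany("INSERT INTO records VALUES (%s, %s)",
                        [(i, 'x' * 1800) for i in xrange(start, start + 10000)])
        conn.commit()
finally:
    cur.close()
    conn.close()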
