Bug 963490
| Summary: | Scaling updates in HAProxy are delayed | | |
|---|---|---|---|
| Product: | OpenShift Online | Reporter: | Matt Hicks <mhicks> |
| Component: | Containers | Assignee: | Mrunal Patel <mpatel> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | libra bugs <libra-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.x | CC: | dmcphers, xtian |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| | 968994 (view as bug list) | Environment: | |
| Last Closed: | 2013-06-11 04:03:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 968994, 969007 | | |
Description
Matt Hicks
2013-05-15 22:02:38 UTC
Sorry, line 5 should read:

5. Wait 45 seconds and then add 3 more gears:

```
rhc cartridge-scale jbosseap --app scale --min 4 --max 4
```

This has changed drastically between v1 and v2. For v1, the flow was:

1. Create new gears and start them
2. Publish the deployment artifacts from the head gear to the new gears
3. Restart the gears with the new code
4. Add the gears to the haproxy config

I think this restarting was causing your complaints. The new v2 logic is:

1. Create new gears but do not start them
2. Add the gears to the haproxy config; haproxy views them as down
3. Publish the deployment artifacts from the head gear to the new gears
4. Start the new gears
5. haproxy sees the gears as available and starts routing to them

It's still not blazing fast with EAP, but I think this resolves your issues. I would argue that step 2 should probably be moved to the end, but I don't think it's breaking anything.

Tested it on devenv_3238:

1. Create a scaled jbosseap app
2. Disable auto-scaling
3. Limit scaling to a single gear:

```
rhc cartridge-scale scalejbosseap3 -c jbosseap-6.0 --min 1 --max 1
```

4. Send concurrent requests:

```
ab -n 30000 -c 6 https://scalejbosseap3-domx1.dev.rhcloud.com
```

5. Wait 45 seconds and then add 2 more gears:

```
[root@ip-10-202-27-91 markers]# rhc cartridge-scale -a scalejbosseap3 -c jbosseap-6.0 --min 3 --max 3
RESULT:
jbosseap-6.0 (JBoss Enterprise Application Platform 6.0)
--------------------------------------------------------
  Scaling: x3 (minimum: 3, maximum: 3) on small gears

This gear costs an additional $0.03 per gear after the first 3 gears.

Success: Scaling values updated
```
6. Check gear status and the haproxy-status page:

```
# rhc app show --gears -a scalejbosseap3
ID                               State   Cartridges               Size  SSH URL
-------------------------------- ------- ------------------------ ----- ---------------------------------------------
519624d4a33f822eaf000001         started jbosseap-6.0 haproxy-1.4 small 519624d4a33f822eaf000001.rhcloud.com
ab445cd4bef011e296af12313b1218ad started jbosseap-6.0 haproxy-1.4 small ab445cd4bef011e296af12313b1218ad.rhcloud.com
ab85b120bef011e296af12313b1218ad new     jbosseap-6.0 haproxy-1.4 small ab85b120bef011e296af12313b1218ad.rhcloud.com
```

According to the above, two gears are in started status. The haproxy-status page shows all the gears as well, but only the local gear and ab445cd4bef011e296af12313b1218ad are shown as up; gear ab85b120bef011e296af12313b1218ad is shown as down. It seems the original issue in this bug does not happen.

According to comment 3, the original issue no longer occurs: a started gear shows up in the haproxy-status page in a timely manner. Filed a new bug 965028 to track the issue of the third scaled-up gear being down.
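Under the v2 flow described above, a newly added gear is registered in the haproxy config before it is started, so haproxy reports it as DOWN until the cartridge comes up and its health check passes. A minimal sketch of verifying this from haproxy 1.4's CSV statistics (in haproxy's CSV stats output, `svname` is the 2nd field and `status` the 18th; the proxy and server names and counters below are illustrative sample data, not captured from this bug's environment — on a real gear the CSV would come from the stats socket or the status page's `;csv` export):

```shell
#!/bin/sh
# Sample data in the shape of haproxy 1.4 "show stat" CSV output
# (illustrative, not from the devenv_3238 test above).
cat > /tmp/haproxy_stat.csv <<'EOF'
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredisp,status
express,gear-519624d4,0,0,0,1,32,10,0,0,0,0,0,0,0,0,0,UP
express,gear-ab445cd4,0,0,0,1,32,8,0,0,0,0,0,0,0,0,0,UP
express,gear-ab85b120,0,0,0,0,32,0,0,0,0,0,0,0,0,0,0,DOWN
EOF

# Print each backend server and its state (one "name status" line per gear).
awk -F, '!/^#/ { printf "%s %s\n", $2, $18 }' /tmp/haproxy_stat.csv

# Count how many gears haproxy would currently route traffic to.
awk -F, '!/^#/ && $18 == "UP" { n++ } END { print n " gear(s) up" }' /tmp/haproxy_stat.csv
```

With the sample data above, this reports the two started gears as UP and the still-new gear as DOWN, matching what the haproxy-status page showed in the verification steps.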