Description of problem:
Launch devenv-stage_249, create some apps on it, upgrade to devenv_2458 and run migration 2.0.20. After that, the scalable jbossas and jbosseap apps cannot be accessed. SSH into the app and check the processes with ps -ef: the haproxy process cannot be found.

Version-Release number of selected component (if applicable):
From devenv-stage_249 to devenv_2458

How reproducible:
always

Steps to Reproduce:
1. Launch devenv-stage_249
2. Create scalable jbossas and jbosseap apps
3. Upgrade the server to devenv_2458 and run the migration
4. Check the two apps

Actual results:
1. The apps cannot be accessed from a web browser (503 error).
2. SSH into the app; the haproxy process cannot be found:

[jbosseap1s-bmeng1dev.dev.rhcloud.com ~]\> ps -ef
UID   PID   PPID  C STIME TTY TIME     CMD
529   5543  5421  0 05:02 ?   00:00:00 sshd: 82d5b5a92d8747c3b559c55b04651eeb@pts/1
529   5581  5543  0 05:02 ?   00:00:00 /bin/bash --init-file /usr/bin/rhcsh -i
529   7361  5581  0 05:04 ?   00:00:00 ps -ef
529   11561 1     0 01:58 ?   00:00:00 /bin/sh /var/lib/openshift/82d5b5a92d8747c3b559c55b04651eeb//jbosseap-6.0/jbosseap-6.0/bin/standalone.sh
529   12175 11561 0 01:58 ?   00:00:53 /usr/lib/jvm/jre-1.7.0/bin/java -D[Standalone] -client -Xmx256m -XX:MaxPermSize=128m -XX:+AggressiveOpts -Dorg.apache.tomcat.util.LOW_MEMORY=t

Expected results:
The apps work fine.

Additional info:
After restarting the app, the haproxy process starts and the app can be accessed again.
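The check and the workaround from "Additional info" can be sketched as a small script. This is a minimal sketch, not part of the report: the helper name is mine, the app name "jbosseap1s" is taken from the gear hostname above, and the restart command is only echoed since rhc runs client-side.

```shell
#!/bin/sh
# Minimal sketch of the check described above. "jbosseap1s" is the app
# name from the report; the restart is the workaround from "Additional
# info" and is only printed, not executed.

# Return success if a process with the exact given name is running.
has_proc() {
    pgrep -x "$1" >/dev/null 2>&1
}

if ! has_proc haproxy; then
    echo "haproxy is not running; restart the app as a workaround:"
    echo "  rhc app restart -a jbosseap1s"
fi
```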
How do I upgrade to devenv_2458?
Upgrade to devenv_2458:
1. Launch devenv-stage_249 and devenv_2458
2. Copy /root/devenv-local and /etc/yum.repos.d/local.repo from devenv_2458 to devenv-stage_249
3. Modify /etc/yum.repos.d/devenv.repo to use the candidate mirrors on devenv-stage_249
4. Prepare test data on devenv-stage_249
5. Upgrade with the devenv-local repo enabled:
   # yum update --enablerepo devenv-local
6. Reboot the instance
7. Run migration 2.0.20:
   # rhc-admin-migrate --version 2.0.20
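For reference, steps 5 through 7 above collected into one sequence. This is an ops sketch, not a tested script: the reboot ends the session, so the migration command has to be run manually after the instance is back up.

```shell
#!/bin/sh
# Sketch of upgrade steps 5-7 above; run as root on devenv-stage_249.
set -e

# Step 5: update with the devenv-local repo enabled
yum update --enablerepo devenv-local

# Step 6: reboot the instance (this ends the session)
reboot

# Step 7: after the instance is back up, run the 2.0.20 migration
# rhc-admin-migrate --version 2.0.20
```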
The same happens for scalable jbossews apps.
Migration itself is broken after updating from ami-ac67e3c5; I'm not even seeing the 2 gears on the box. I'll retest once that's resolved.

[root@ip-10-202-27-64 ~]# rhc-admin-migrate --version 2.0.20
**Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance.
You can install the extension as follows:
  gem install bson_ext
If you continue to receive this message after installing, make sure that the bson_ext gem is in your load path and that the bson_ext and mongo gems are of the same version.
Mocha deprecation warning: Test::Unit or MiniTest must be loaded *before* Mocha.
Mocha deprecation warning: If you're integrating with another test library, you should probably require 'mocha_standalone' instead of 'mocha'
/opt/rh/ruby193/root/usr/share/gems/gems/systemu-1.2.0/lib/systemu.rb:28: Use RbConfig instead of obsolete and deprecated Config.
Getting all RHLogins...
Gathering gears for user: bdecoste76c with uuid: 64a7cce91cc9418183e96c1fa4b2d347
RHLogins.length: 1
#####################################################
#####################################################
#####################################################
Summary:
  # of users: 1
  # of gears: 0
  # of failures: 0
Gear counts per thread: []
Additional timings:
  Time gathering users: 0.005s
  Total execution time: 0.005s
#####################################################
Fixed on master https://github.com/openshift/origin-server/pull/865
Upgraded from devenv-stage_249 to devenv_2475. This issue is still reproduced for scalable jbossas and jbosseap apps: the haproxy process does not exist for these two kinds of cartridges, and the web page returns a 503 error. Also, the migration script only handled the jenkins gear this time.
Tested again, upgrading from devenv-stage_249 to devenv_2476. The problem cannot be reproduced. Please assign the bug back and I will verify it.
Checked on devenv_2486. Moving the bug to verified.
This bug is reproduced when upgrading an instance from devenv-stage_249 to devenv-stage_254.

Upgrade steps:
1. Launch devenv-stage_249
2. Create test data against the instance
3. SSH into the instance and run yum update (no need to modify or enable any repos, since this upgrade is performed from devenv-stage_249 to devenv-stage_254)
4. Reboot the instance after the upgrade
5. rhc-admin-migrate --version 2.0.20
6. After the migration, access all existing apps

My scalable jbossas and jbosseap applications return a 503 error when I try to access their URLs. After sshing into the haproxy gear, there was no haproxy process running:

[jbosseap1s-jhou1.dev.rhcloud.com ~]\> ps -ef
UID PID   PPID  C STIME TTY   TIME     CMD
524 11553 1     0 00:30 ?     00:00:00 /bin/sh /var/lib/openshift/9897ba8450524da6b19663dacdb0bc36//jbosseap-6.0/jbosseap-6.0/bin/standalone.sh
524 12089 11553 1 00:30 ?     00:00:12 /usr/lib/jvm/jre-1.7.0/bin/java -D[Standalone] -client -Xmx256m -XX:MaxPermSize=128m -XX:+AggressiveOpts -Dorg.apache.tomca
524 24015 23984 0 00:42 ?     00:00:00 sshd: 9897ba8450524da6b19663dacdb0bc36@pts/2
524 24016 24015 0 00:42 pts/2 00:00:00 /bin/bash --init-file /usr/bin/rhcsh -i
524 24710 24016 0 00:44 pts/2 00:00:00 ps -ef

Both apps became available after being restarted (rhc app restart). So I'm reopening this bug since the problem is still seen.

Additional info: I also have a scalable jbossews application, which didn't have this issue.
Reproduced it once again. Actually, this is reproduced as soon as the upgrade is performed, so it's not related to the migration (which migrates only jenkins in sprint 2.0.20). The haproxy process is missing for jbossas/jbosseap applications. The weird thing is that not all of the jbossas/jbosseap apps have this issue; to reproduce it more reliably, create multiple apps (3 jbossas apps and 3 jbosseap apps will be enough). I have attached the upgrade log and httpd/error_log in order to dig further.

My app's internal IP is 127.0.252.1, and I discovered the following errors in /var/log/httpd/error_log:

[Mon Nov 19 04:25:30 2012] [error] proxy: ap_get_scoreboard_lb(162) failed in child 14150 for worker http://127.0.252.1:18001/swydws/
[Mon Nov 19 04:25:30 2012] [error] proxy: ap_get_scoreboard_lb(163) failed in child 14150 for worker http://127.0.252.1:8080/
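To spot these failures quickly when checking a node, the error_log can be filtered for the scoreboard messages quoted above. A small sketch; the helper name is mine, and the pattern is built from the log lines above.

```shell
#!/bin/sh
# Hypothetical helper: list mod_proxy scoreboard failures (as quoted
# above) from an httpd error_log given as the first argument.
scoreboard_errors() {
    grep -E 'proxy: ap_get_scoreboard_lb\([0-9]+\) failed' "$1"
}

# On the node this would be:
#   scoreboard_errors /var/log/httpd/error_log
```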
Created attachment 647662 [details] upgrade log
Created attachment 647663 [details] /var/log/httpd/error_log
What size ami are you creating?
Have you added SwitchYard configuration to your application? The 18001 port and swydws context are SwitchYard.
I can't recreate this issue using devenv-stage_249 upgraded to devenv-stage_254. After a reboot it can take a while for the scaled EAP app to come up, and haproxy is started after both EAP instances, but it does come up. Need to confirm that:
1) SwitchYard is or isn't deployed to the EAP app used to test
2) There was a yum clean all before the yum update
I see the ProxyPass in zzzzz_proxy.conf. Didn't realize this had been added.

ProxyPass /swydws/ http://$IP:18001/swydws/ status=I
ProxyPassReverse /swydws/ http://$IP:18001/swydws/
Need to confirm the ami size, a yum clean all, and if haproxy comes up after several minutes after an upgrade and reboot.
(In reply to comment #17)
> Need to confirm the ami size, a yum clean all, and if haproxy comes up after
> several minutes after an upgrade and reboot.

ami size: Medium
SwitchYard: didn't deploy SwitchYard to either the jbossas or jbosseap applications.
yum clean all: I didn't run yum clean all when I reopened this bug. Tried it again with yum clean all, and all my jboss apps seem fine after yum update and reboot.
Haproxy: haproxy comes up several minutes after the upgrade and reboot.

Didn't reproduce this again. Maybe the reason is that I didn't run "yum clean all" before the upgrade, or maybe haproxy just hadn't come up yet when I tested on that ami. This issue was never reproduced on INT and STG. So, I'm moving it to verified.