Description of problem:
When scaling up gears for a scalable nodejs-0.6 app, the scaled-up gears do not run correctly and show an incorrect status on the haproxy-status page.

Version-Release number of selected component (if applicable):
Upgrading devenv-stage_353 to devenv_3238 and migrating

How reproducible:
Always

Steps to Reproduce:
1. Launch devenv-stage_353
2. Create a scalable nodejs-0.6 app
3. Add the V1 marker to the node: touch /var/lib/openshift/.settings/v1_cartridge_format
4. scp -r ~/devenv-local and /etc/yum.repos.d/* from a devenv_3235 instance to the same locations on the devenv-stage_353 instance
5. yum update -y --enablerepo devenv-local
6. Run oo-admin-clear-pending-ops
7. Remove the v1 marker from the node: rm -f /var/lib/openshift/.settings/v1_cartridge_format
8. Clear the broker cache: rake tmp:clear
9. Restart rhc-broker and mcollective
10. Run migrate-mongo-2.0.28
11. Run rhc-admin-migrate --version 2.0.28
12. Add the disable_auto_scaling marker to the app and git push
13. SSH into the app and scale up: haproxy_ctld -u

Actual results:
The scaled-up gears are not running correctly; please refer to the attached screenshot of the haproxy-status page.

Expected results:
Scaling up gears should work correctly.

Additional info:
Created attachment 749295 [details] scalable nodejs app
There is no automatic fix for this; we will document the workaround. The problem is that server.js in the v1 app references INTERNAL_IP/INTERNAL_PORT. The workaround is to replace OPENSHIFT_INTERNAL_IP/OPENSHIFT_INTERNAL_PORT in the application with the correct variables, in this case OPENSHIFT_NODEJS_IP/OPENSHIFT_NODEJS_PORT.
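A minimal sketch of the variable rename described above, run from the root of the local git clone of the app. The helper name fix_server_js is hypothetical; the file name server.js and the variable names come from this report.

```shell
# Rewrite the v1 environment variable names in server.js to the v2
# cartridge names, as described in the workaround. Pass an alternate
# path as $1 if the file lives elsewhere (defaults to ./server.js).
fix_server_js() {
  sed -i \
    -e 's/OPENSHIFT_INTERNAL_IP/OPENSHIFT_NODEJS_IP/g' \
    -e 's/OPENSHIFT_INTERNAL_PORT/OPENSHIFT_NODEJS_PORT/g' \
    "${1:-server.js}"
}
```

After running it, commit and push the change (git commit -am "use v2 env vars" && git push), then retry the scale-up.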
Tried the workaround on devenv_3247 migrated from devenv-stage_353; it works as below.

Steps:
1. Launch devenv-stage_353
2. Create a scalable nodejs-0.6 app
3. Add the V1 marker to the node: touch /var/lib/openshift/.settings/v1_cartridge_format
4. scp -r ~/devenv-local and /etc/yum.repos.d/* from a devenv_3247 instance to the same locations on the devenv-stage_353 instance
5. yum update -y --enablerepo devenv-local
6. Run oo-admin-clear-pending-ops
7. Remove the v1 marker from the node: rm -f /var/lib/openshift/.settings/v1_cartridge_format
8. Restart mcollective: service mcollective restart
9. Clear the broker cache: oo-admin-broker-cache --clear --console
10. Restart rhc-broker: service rhc-broker restart
11. Run migrate-mongo-2.0.28
12. Run rhc-admin-migrate --version 2.0.28
13. Add the disable_auto_scaling marker to the app
14. Modify server.js in the local repo manually, replacing OPENSHIFT_INTERNAL_IP/OPENSHIFT_INTERNAL_PORT with OPENSHIFT_NODEJS_IP/OPENSHIFT_NODEJS_PORT
15. git push the changes
16. SSH into the app and scale up: haproxy_ctld -u

Actual results:
The gears are running well after scaling up.
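As a quick sanity check after SSHing into the gear, one can confirm the v2 variables the workaround relies on are actually set. This is a hedged sketch; the helper name check_nodejs_env is hypothetical, and the variable names are taken from the workaround above.

```shell
# Verify that the v2 cartridge variables exist in the gear environment.
# A v1-era server.js bound to OPENSHIFT_INTERNAL_* would find those
# variables empty on a migrated gear, which matches the reported failure.
check_nodejs_env() {
  if [ -n "${OPENSHIFT_NODEJS_IP}" ] && [ -n "${OPENSHIFT_NODEJS_PORT}" ]; then
    echo "v2 variables present: ${OPENSHIFT_NODEJS_IP}:${OPENSHIFT_NODEJS_PORT}"
  else
    echo "v2 variables missing; server.js must not rely on OPENSHIFT_INTERNAL_*" >&2
    return 1
  fi
}
```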