Description of problem:
The OSE 1.2 broker returns a 502 when scaling an application up via REST, even though the operation itself succeeds.

Version-Release number of selected component (if applicable):
latest puddle

How reproducible:
always

Steps to Reproduce:
1. Add a scalable PHP app via REST.
2. Embed the mysql cartridge.
3. Disable auto scaling.
4. Scale up manually.
5. The app gets scaled up successfully, but the return status is 502.

NOTE: the same procedure/script did not produce errors when run against PROD and devenv.

Actual results:
The test script fails to parse the broker response as JSON, because an Apache 502 Proxy Error page is returned instead of a JSON body:

to JSON: 757: unexpected token at '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>502 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
<p>The proxy server received an invalid response from an upstream server.<br />
The proxy server could not handle the request <em><a href="/broker/rest/domains/ose1/applications/u417mkr808/events.json">POST /broker/rest/domains/ose1/applications/u417mkr808/events.json</a></em>.<p>
Reason: <strong>Error reading from remote server</strong></p></p>
<hr>
<address>Apache/2.2.15 (Red Hat) Server at 10.14.7.166 Port 443</address>
</body></html>'
/home/peter/workspace/repos/ostest/lib/rest.rb:84:in `rescue in block (2 levels) in _request'
/home/peter/workspace/repos/ostest/lib/rest.rb:77:in `block (2 levels) in _request'
/usr/local/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `call'
/usr/local/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `process_result'
/usr/local/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:178:in `block in transmit'
/usr/share/ruby/net/http.rb:745:in `start'
/usr/local/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:172:in `transmit'
/usr/local/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:64:in `execute'
/home/peter/workspace/repos/ostest/lib/rest.rb:72:in `block in _request'
/home/peter/workspace/repos/ostest/lib/rest.rb:660:in `block in try_again_when_timeout'
/home/peter/workspace/repos/ostest/lib/rest.rb:658:in `each'
/home/peter/workspace/repos/ostest/lib/rest.rb:658:in `try_again_when_timeout'
/home/peter/workspace/repos/ostest/lib/rest.rb:65:in `_request'
/home/peter/workspace/repos/ostest/lib/rest.rb:42:in `request'
/home/peter/workspace/repos/ostest/lib/rest.rb:242:in `block in do_action'
/home/peter/workspace/repos/ostest/lib/rest.rb:204:in `each'
/home/peter/workspace/repos/ostest/lib/rest.rb:204:in `do_action'
/home/peter/workspace/repos/ostest/lib/rest.rb:590:in `app_scale_up'
/home/peter/workspace/repos/ostest/lib/rest.rb:879:in `<top (required)>'
/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
/home/peter/workspace/repos/ostest/lib/cache.rb:1:in `<top (required)>'
/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
/home/peter/workspace/repos/ostest/lib/rest.rb:14:in `<top (required)>'
/home/peter/.gem/ruby/1.9.1/gems/debugger-1.3.1/bin/rdebug:124:in `debug_load'
/home/peter/.gem/ruby/1.9.1/gems/debugger-1.3.1/bin/rdebug:124:in `debug_program'
/home/peter/.gem/ruby/1.9.1/gems/debugger-1.3.1/bin/rdebug:394:in `<top (required)>'
/home/peter/bin/rdebug:23:in `load'
/home/peter/bin/rdebug:23:in `<main>'

Uncaught exception: (the same 502 Proxy Error page as above)

Expected results:
no error.

Additional info:
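The "unexpected token at '<!DOCTYPE ..." failure in the traceback happens because the test harness feeds the Apache 502 HTML page to the JSON parser. A minimal sketch (a hypothetical helper, not the reporter's actual rest.rb code) of how such a response could be surfaced as an HTTP/proxy error rather than a confusing parse error:

```ruby
require 'json'

# Hypothetical helper: parse a broker response body as JSON, but if the
# body is actually an HTML page (e.g. an Apache 502 Proxy Error page),
# raise a descriptive error instead of a raw JSON::ParserError.
def parse_broker_response(body)
  JSON.parse(body)
rescue JSON::ParserError
  if body.lstrip.start_with?('<')
    raise "broker returned an HTML error page instead of JSON " \
          "(likely a proxy error): #{body[0, 60]}..."
  end
  raise # not HTML, re-raise the original parse error
end
```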
Can't reproduce. Using OSE puddle-1-2-2013-05-13 (OpenShiftEnterprise/1.2/2013-05-13.1), all components installed on one VM:

[root@broker ~]# rhc setup
[root@broker ~]# rhc create-app -s -t php -a p1
[root@broker ~]# rhc cartridge-add -c mysql -a p1
[root@broker ~]# ssh 51929de96892df03ea000004.com
[p1-jl.example.com 51929de96892df03ea000004]\> haproxy_ctld_daemon stop
[p1-jl.example.com 51929de96892df03ea000004]\> exit
[root@broker ~]# curl -udemo:changeme -ki 'https://localhost/broker/rest/domains/jl/applications/p1/events' \
    -H "Content-Type: application/json" \
    -X POST \
    -d'{"event": "scale-up"}'
HTTP/1.1 200
Date: Tue, 14 May 2013 20:49:53 GMT
Server: Apache/2.2.15 (Red Hat)
X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.17
X-OpenShift-Identity: demo
X-OAuth-Scopes: session
X-UA-Compatible: IE=Edge,chrome=1
ETag: "efa78ed76bfeec99b380e0074cbdfcc6"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 9919ea175876beccc12390372ff4e083
X-Runtime: 52.262270
X-Rack-Cache: invalidate, pass
Status: 200
Content-Length: 6421
Content-Type: application/json; charset=utf-8
Connection: close

{"data":{"aliases":[],"app_url":"http://p1-jl.example.com/", --SNIP-- ,"name":"p1","scalable":true,"ssh_url":"ssh://51929de96892df03ea000004.com","uuid":"51929de96892df03ea000004"},"errors":{},"messages":[{"exit_code":0,"field":null,"severity":"INFO","text":"Application p1 has scaled up"}],"status":"ok","supported_api_versions":[1.0,1.1,1.2,1.3,1.4],"type":"application","version":"1.4"}

Will retry tomorrow with the reporter's VM and script, and also in a separate up-to-date puddle instance.
Was able to reproduce in the reporter's environment; the issue was the broker REST call taking > 60s to return. The node proxy config used the default ProxyTimeout value, which is taken from the server config TimeOut setting of 60. Updating the node proxy config to explicitly set ProxyTimeout to 300, matching the broker-specific config.
Commit pushed to master at https://github.com/openshift/origin-server
https://github.com/openshift/origin-server/commit/cafd9d454b01de74350b1f93287ad3fe903f303b

<node/httpd conf> Bug 962938 - Set ProxyTimeout for node HTTPD config

Add a ProxyTimeout setting to override the server config default (picked up from TimeOut, which is 60 in httpd.conf). Also add a comment referencing the broker-specific setting for future reference.
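The fix described in the commit amounts to a one-line mod_proxy directive in the node's Apache configuration. A sketch of what the change looks like (the directive and values come from the comments above; the exact file name and surrounding layout in the shipped config are assumptions):

```apache
# Node HTTPD proxy configuration (exact file path may differ).
# Without an explicit ProxyTimeout, mod_proxy inherits the server-wide
# TimeOut (60 in httpd.conf), so broker REST calls taking > 60s get a 502.
# Set it to 300 to match the broker-specific configuration:
ProxyTimeout 300
```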
After checking 1.2/2013-05-15.1, I found the above patch is not merged into the current puddle, so waiting for a new puddle.
The patch is merged into the 1.2/2013-05-16.1 puddle, and this issue no longer happens in my environment. @peter, could you double-check in your environment? If this issue is fixed there, please help verify this bug. Thanks.
Verified with the latest puddle:

status, res = rest_api.app_scale_up('u417mkr808')
print res
(rdb:1) status
#<Net::HTTPOK 200 readbody=true>

My app was able to scale up successfully without a traceback.
Closing all bugs introduced, fixed, and verified during 1.2 release work (thus never shipped).