Description of problem:
The routing daemon does not update the load balancer (LB) configuration when the head gear is moved.

Version-Release number of selected component (if applicable):
2.2.x

How reproducible:
100%

Steps to Reproduce:
# rhc app create jbosseap-6 -a lbtest -s
# rhc app enable-ha lbtest
# oo-app-info -a lbtest
# oo-admin-move --gear_uuid 562a5b526892df4798000066 -i node2.example.com
# oo-app-info -a lbtest
# oo-admin-move --gear_uuid 562a55a96892df479800003d -i node1.example.com

Actual results:
Nothing is logged in /var/log/openshift/routing-daemon.log when the gears are moved.

Expected results:
The move generates an event that the routing daemon picks up.

Additional info:
The entry added is wrong. The gear was moved from node1 (10.14.6.138) to node2 (10.14.6.150), so the old entry (10.14.6.138:40587) should have been deleted and a new entry like '10.14.6.150:xxxx' added. Instead, the old entry was not deleted, and a second entry identical to the old one was added.

1) Before move:

[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40587;
}

server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}

2) Moving:

[root@broker ~]# oo-admin-move --gear_uuid yes-sphp-1 -i node2.ose22-auto.com.cn
URL: http://sphp-yes.ose22-auto.com.cn
Login: gpei
App UUID: 57b15d1582611dbb53000001
Gear UUID: 57b15d1582611dbb53000001
DEBUG: Source district uuid: 57a8243482611d52e5000001
DEBUG: Destination district uuid: 57a8243482611d52e5000001
DEBUG: Getting existing app 'sphp' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Unpublishing routing information for gear 'yes-sphp-1'
{:action=>:remove_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_address=>"10.14.6.138", :public_port=>40586}
{:action=>:remove_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_address=>"10.14.6.138", :public_port=>40587}
DEBUG: Stopping existing app cartridge 'php-5.3' before moving
DEBUG: Stopping existing app cartridge 'haproxy-1.4' before moving
DEBUG: Force stopping existing app before moving
DEBUG: Gear platform is 'linux'
DEBUG: Creating new account for gear 'yes-sphp-1' on node2.ose22-auto.com.cn
DEBUG: Moving content for app 'sphp', gear 'yes-sphp-1' to node2.ose22-auto.com.cn
Agent pid 1199
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 1199 killed;
DEBUG: Moving system components for app 'sphp', gear 'yes-sphp-1' to node2.ose22-auto.com.cn
Agent pid 1572
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 1572 killed;
DEBUG: Starting cartridge 'haproxy-1.4' in 'sphp' after move on node2.ose22-auto.com.cn
DEBUG: Starting cartridge 'php-5.3' in 'sphp' after move on node2.ose22-auto.com.cn
DEBUG: Fixing DNS and mongo for gear 'yes-sphp-1' after move
DEBUG: Changing server identity of 'yes-sphp-1' from 'node1.ose22-auto.com.cn' to 'node2.ose22-auto.com.cn'
DEBUG: Updating routing information for gear 'yes-sphp-1' after move
{:action=>:add_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_port_name=>"php-5.3", :public_address=>"10.14.6.138", :public_port=>40586, :protocols=>["http", "ws"], :types=>["web_framework"], :mappings=>[{"frontend"=>"", "backend"=>""}, {"frontend"=>"/health", "backend"=>""}]}
{:action=>:add_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_port_name=>"haproxy-1.4", :public_address=>"10.14.6.138", :public_port=>40587, :protocols=>["http", "ws"], :types=>["load_balancer"], :mappings=>[{"frontend"=>"", "backend"=>""}, {"frontend"=>"/health", "backend"=>"/configuration/health"}]}
Added routing endpoint for sphp-yes
DEBUG: Deconfiguring old app 'sphp' on node1.ose22-auto.com.cn after move
Successfully moved gear with uuid 'yes-sphp-1' of app 'sphp' from 'node1.ose22-auto.com.cn' to 'node2.ose22-auto.com.cn'

Note that both :add_public_endpoint events above still carry the old address 10.14.6.138, even though the gear now lives on node2 (10.14.6.150).

3) After move:

[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40587;
    server 10.14.6.138:40587;
}

server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
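The remove/add events in the transcript suggest what correct bookkeeping should look like: the daemon deletes the old "ip:port" entry on :remove_public_endpoint, adds the new one on :add_public_endpoint, and a repeated add must not duplicate a server line. A minimal Ruby sketch of that idea (illustrative only, not the actual routing-daemon code; the Upstream class and its method names are hypothetical):

```ruby
require 'set'

# Sketch of endpoint bookkeeping for one nginx upstream block.
# Members are keyed on the "ip:port" string, so duplicate adds are no-ops.
class Upstream
  def initialize
    @servers = Set.new
  end

  # Handles broker event hashes like the ones in the transcript above.
  def handle(event)
    key = "#{event[:public_address]}:#{event[:public_port]}"
    case event[:action]
    when :remove_public_endpoint then @servers.delete(key)
    when :add_public_endpoint    then @servers.add(key)
    end
  end

  # Renders the upstream block for the nginx config.
  def to_conf(name)
    "upstream #{name} {\n" +
      @servers.map { |s| "    server #{s};" }.join("\n") +
      "\n}"
  end
end
```

Because the members live in a Set, even a repeated :add_public_endpoint carrying a stale address, as seen in the move output above, could at worst leave one stale entry rather than a duplicated server line.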
The new entry was added correctly, but the old entry was not deleted. For example, `server 10.14.6.138:53856;` should be removed from the following configuration after oo-admin-move:

upstream sphp-yes {
    server 10.14.6.138:53856;
    server 10.14.6.150:53857;
}

server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
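Stale entries like the one above can be spotted by pulling the upstream members out of the rendered config and comparing them with what the broker reports. A small Ruby helper to extract them (hypothetical helper name, for illustration only):

```ruby
# Returns the "ip:port" members of any upstream blocks in an nginx
# config string, e.g. ["10.14.6.138:53856", "10.14.6.150:53857"].
def upstream_servers(conf)
  conf.scan(/^\s*server\s+([\d.]+:\d+);/).flatten
end
```

Any address that no longer matches a live gear's node is a leftover the daemon failed to delete.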
@Ryan, there are three records in the nginx configuration file, but only two gears for sphp.

[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40481;
    server 10.14.6.138:65497;
    server 10.14.6.150:40482;
}

server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}

[root@broker ~]# rhc app show sphp --gears
ID         State   Cartridges          Size  SSH URL
---------- ------- ------------------- ----- -----------------
yes-sphp-1 started haproxy-1.4 php-5.3 small yes-sphp-1.com.cn
yes-sphp-2 started haproxy-1.4 php-5.3 small yes-sphp-2.com.cn

[root@broker ~]# oo-app-info -a sphp
Loading broker environment... Done.
================================================================================
Login: gpei
Plan: ()
App Name: sphp
App UUID: 57bbbf1682611d3c82000061
Creation Time: 2016-08-23 03:12:22 AM
URL: http://sphp-yes.ose22-auto.com.cn
Group Instance[0]:
  Components:
    Cartridge Name: php-5.3
    Component Name: php-5.3
  Gear[0]
    Server Identity: node1.ose22-auto.com.cn
    Gear UUID: yes-sphp-1
    Gear UID: 1490
  Gear[1]
    Server Identity: node1.ose22-auto.com.cn
    Gear UUID: yes-sphp-2
    Gear UID: 6493
Current DNS
-----------
sphp-yes.ose22-auto.com.cn is an alias for node1.ose22-auto.com.cn.
node1.ose22-auto.com.cn has address 10.14.6.138
================================================================================

BTW, I ran the script /root/listen.rb.
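A quick cross-check of the config against the gear count would flag this case automatically. A sketch (hypothetical helper name; assumes one upstream server entry per HA gear, as in the configs above):

```ruby
# Returns true when the upstream block holds more server entries than
# the app has gears, i.e. when stale entries are present.
def stale_upstream?(conf, gear_count)
  conf.scan(/^\s*server\s+[\d.]+:\d+;/).length > gear_count
end
```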
@Tim & @Anping, I just followed the instructions from comment #19 and got exactly the output @Tim expected.
Moving it to VERIFIED now, but I will check with Anping tonight; depending on that discussion, this bug could move back to ON_QA.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2016-1773.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days