Bug 1274852
Summary: | Routing Daemon does not update LB when head gear is moved. | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Ryan Howe <rhowe> |
Component: | Node | Assignee: | Sally <somalley> |
Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | 2.2.0 | CC: | aos-bugs, erich, gpei, jokerman, mmccomas, nicholas_schuetz, pruan, rhowe, rthrashe, somalley, tiwillia |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | rubygem-openshift-origin-msg-broker-mcollective-1.36.2.2-1.el6op, rubygem-openshift-origin-controller-1.38.6.3-1.el6op | Doc Type: | Bug Fix |
Doc Text: |
Cause:
In a highly available environment with an nginx or F5 load balancer, when an HA gear was moved to another node, the load balancer did not receive an update and the routing-daemon did not modify the gear's routing information.
Fix: Added calls during oo-admin-move to publish and unpublish routing information.
Result: The routing-daemon logs now contain information regarding the changes to an HA gear's routing information upon gear moves.
|
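The Fix description above pairs an unpublish with a publish during the move. As a minimal sketch of why both calls matter: a consumer that applies the `:remove_public_endpoint`/`:add_public_endpoint` messages (the same keys that appear in the logs in the description) to an LB upstream pool will only stay consistent if the remove is sent before the add. `UpstreamPool` and its methods are hypothetical illustration, not part of the routing-daemon:

```ruby
require 'set'

# Hypothetical consumer of routing-daemon endpoint messages.
# Keeps the LB upstream pool as a set of "address:port" strings.
class UpstreamPool
  def initialize
    @servers = Set.new
  end

  # msg uses the same keys as the logged messages in this bug.
  def handle(msg)
    endpoint = "#{msg[:public_address]}:#{msg[:public_port]}"
    case msg[:action]
    when :remove_public_endpoint then @servers.delete(endpoint)
    when :add_public_endpoint    then @servers.add(endpoint)
    end
  end

  # Render an nginx-style upstream block.
  def to_conf(name)
    "upstream #{name} {\n" +
      @servers.map { |s| "    server #{s};\n" }.join +
      "}"
  end
end

pool = UpstreamPool.new
pool.handle(action: :add_public_endpoint,
            public_address: '10.14.6.138', public_port: 40587)
# Gear move: without the unpublish call, the old entry would linger.
pool.handle(action: :remove_public_endpoint,
            public_address: '10.14.6.138', public_port: 40587)
pool.handle(action: :add_public_endpoint,
            public_address: '10.14.6.150', public_port: 40587)
puts pool.to_conf('sphp-yes')
```

With the unpublish in place, only the new node's endpoint survives the move; this mirrors the behavior the errata restores.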
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2016-08-24 19:43:15 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Ryan Howe
2015-10-23 16:40:02 UTC
The entry added is wrong. The gear was moved from node1 (10.14.6.138) to node2 (10.14.6.150). The old entry (10.14.6.138:40587) should be deleted and a new entry like '10.14.6.150:xxxx' should be added. However, the old entry was not deleted; an entry identical to the old one was added instead.

1) Before move:

```
[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40587;
}
server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
```

2) Moving:

```
[root@broker ~]# oo-admin-move --gear_uuid yes-sphp-1 -i node2.ose22-auto.com.cn
URL: http://sphp-yes.ose22-auto.com.cn
Login: gpei
App UUID: 57b15d1582611dbb53000001
Gear UUID: 57b15d1582611dbb53000001
DEBUG: Source district uuid: 57a8243482611d52e5000001
DEBUG: Destination district uuid: 57a8243482611d52e5000001
DEBUG: Getting existing app 'sphp' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Unpublishing routing information for gear 'yes-sphp-1'
{:action=>:remove_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_address=>"10.14.6.138", :public_port=>40586}
{:action=>:remove_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_address=>"10.14.6.138", :public_port=>40587}
DEBUG: Stopping existing app cartridge 'php-5.3' before moving
DEBUG: Stopping existing app cartridge 'haproxy-1.4' before moving
DEBUG: Force stopping existing app before moving
DEBUG: Gear platform is 'linux'
DEBUG: Creating new account for gear 'yes-sphp-1' on node2.ose22-auto.com.cn
DEBUG: Moving content for app 'sphp', gear 'yes-sphp-1' to node2.ose22-auto.com.cn
Agent pid 1199
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID;
echo Agent pid 1199 killed;
DEBUG: Moving system components for app 'sphp', gear 'yes-sphp-1' to node2.ose22-auto.com.cn
Agent pid 1572
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID;
echo Agent pid 1572 killed;
DEBUG: Starting cartridge 'haproxy-1.4' in 'sphp' after move on node2.ose22-auto.com.cn
DEBUG: Starting cartridge 'php-5.3' in 'sphp' after move on node2.ose22-auto.com.cn
DEBUG: Fixing DNS and mongo for gear 'yes-sphp-1' after move
DEBUG: Changing server identity of 'yes-sphp-1' from 'node1.ose22-auto.com.cn' to 'node2.ose22-auto.com.cn'
DEBUG: Updating routing information for gear 'yes-sphp-1' after move
{:action=>:add_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_port_name=>"php-5.3", :public_address=>"10.14.6.138", :public_port=>40586, :protocols=>["http", "ws"], :types=>["web_framework"], :mappings=>[{"frontend"=>"", "backend"=>""}, {"frontend"=>"/health", "backend"=>""}]}
{:action=>:add_public_endpoint, :app_name=>"sphp", :namespace=>"yes", :gear_id=>"57b15d1582611dbb53000001", :public_port_name=>"haproxy-1.4", :public_address=>"10.14.6.138", :public_port=>40587, :protocols=>["http", "ws"], :types=>["load_balancer"], :mappings=>[{"frontend"=>"", "backend"=>""}, {"frontend"=>"/health", "backend"=>"/configuration/health"}]}
Added routing endpoint for sphp-yes
DEBUG: Deconfiguring old app 'sphp' on node1.ose22-auto.com.cn after move
Successfully moved gear with uuid 'yes-sphp-1' of app 'sphp' from 'node1.ose22-auto.com.cn' to 'node2.ose22-auto.com.cn'
```

3) After move:

```
[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40587;
    server 10.14.6.138:40587;
}
server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
```

A new entry was added, but the old entry was not deleted.
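The duplicated upstream entry shown above is easy to detect mechanically. A minimal sketch (not part of the product, assuming Ruby 2.7+ for `Array#tally`) that flags duplicated `server` entries in an upstream block:

```ruby
# Hypothetical checker: scan an nginx upstream block for duplicated
# "server address:port;" lines, the symptom observed after the move.
conf = <<~NGINX
  upstream sphp-yes {
      server 10.14.6.138:40587;
      server 10.14.6.138:40587;
  }
NGINX

servers = conf.scan(/server\s+([\d.]+:\d+);/).flatten
dupes = servers.tally.select { |_, count| count > 1 }.keys
puts "duplicate upstream entries: #{dupes.join(', ')}" unless dupes.empty?
```

Run against the post-move config above, this reports `10.14.6.138:40587` as duplicated, matching the reported symptom.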
For example, `10.14.6.138:53856;` should be deleted from the following configuration after oo-admin-move:

```
upstream sphp-yes {
    server 10.14.6.138:53856;
    server 10.14.6.150:53857;
}
server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
```

@Ryan, there are three records in the nginx configuration file, but there are only two gears for sphp.

```
[root@broker ~]# cat /etc/nginx/conf.d/sphp-yes.conf
upstream sphp-yes {
    server 10.14.6.138:40481;
    server 10.14.6.138:65497;
    server 10.14.6.150:40482;
}
server {
    listen 8200;
    server_name ha-sphp-yes-ha.dev.rhcloud.com; #Replace dev.rhcloud.com according to your domain setting
    location / {
        proxy_pass http://sphp-yes;
    }
}
[root@broker ~]# rhc app show sphp --gears
ID         State   Cartridges          Size  SSH URL
---------- ------- ------------------- ----- -------------------------------------------
yes-sphp-1 started haproxy-1.4 php-5.3 small yes-sphp-1.com.cn
yes-sphp-2 started haproxy-1.4 php-5.3 small yes-sphp-2.com.cn
[root@broker ~]# oo-app-info -a sphp
Loading broker environment... Done.
================================================================================
Login: gpei
Plan: ()
App Name: sphp
App UUID: 57bbbf1682611d3c82000061
Creation Time: 2016-08-23 03:12:22 AM
URL: http://sphp-yes.ose22-auto.com.cn
Group Instance[0]:
  Components:
    Cartridge Name: php-5.3
    Component Name: php-5.3
  Gear[0]
    Server Identity: node1.ose22-auto.com.cn
    Gear UUID: yes-sphp-1
    Gear UID: 1490
  Gear[1]
    Server Identity: node1.ose22-auto.com.cn
    Gear UUID: yes-sphp-2
    Gear UID: 6493
Current DNS
-----------
sphp-yes.ose22-auto.com.cn is an alias for node1.ose22-auto.com.cn.
node1.ose22-auto.com.cn has address 10.14.6.138
================================================================================
```

BTW, I ran the script /root/listen.rb.

@Tim & @Anping, I just followed the instructions from comment #19 and I got the exact output @Tim expected.
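A quick cross-check of the counts above: the upstream pool lists three endpoints while `rhc app show --gears` reports two gears, so one entry must be stale. A hypothetical sanity check (values pasted from the output above; no such checker exists in the product):

```ruby
# Compare the number of LB upstream entries against the number of gears.
# Any surplus indicates stale routing entries left behind by a move.
upstream_servers = %w[10.14.6.138:40481 10.14.6.138:65497 10.14.6.150:40482]
gear_uuids       = %w[yes-sphp-1 yes-sphp-2]

extra = upstream_servers.size - gear_uuids.size
puts "stale entries suspected: #{extra}" if extra > 0
```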
Moving it to VERIFIED now, but I will check with Anping tonight; this bug could possibly move back to ON_QA depending on that discussion.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1773.html

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.