Bug 1286302 - rhel-osp-director: No "active" entry in "L3 agents hosting a router" listing after replacing a controller in HA deployment.
Summary: rhel-osp-director: No "active" entry in "L3 agents hosting a router" listing after replacing a controller in HA deployment.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: async
Target Release: 8.0 (Liberty)
Assignee: Miguel Angel Ajo
QA Contact: Alexander Chuzhoy
URL:
Whiteboard:
Depends On: 1313529 1326507 1338623
Blocks:
 
Reported: 2015-11-27 22:33 UTC by Alexander Chuzhoy
Modified: 2016-05-23 07:15 UTC
CC List: 14 users

Fixed In Version: openstack-neutron-7.0.0-2.el7
Doc Type: Bug Fix
Doc Text:
Previously, running 'neutron-netns-cleanup' when manually taking a node out of an HA cluster did not properly clean up the processes belonging to the neutron L3-HA routers. Consequently, when the node was reconnected to the cluster and its services were re-created, those processes did not respawn with the correct connectivity: even though they were alive, they were disconnected, and this sometimes left no L3-HA router able to take the 'ACTIVE' role. With this update, the 'neutron-netns-cleanup' scripts and the related OCF resources have been fixed to kill the relevant keepalived processes and their child processes. As a result, a node can be taken out of the cluster and brought back: its resources are properly cleaned up on removal and restored when the node rejoins.
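For illustration, the effect of this cleanup can be sanity-checked on a controller with standard commands. This is only a minimal sketch: the hostname is an example from this report, and the qrouter-* namespace naming is the usual neutron convention.

# After neutron-netns-cleanup has run on a node taken out of the cluster,
# no keepalived processes for L3-HA routers should remain:
[root@overcloud-controller-0 ~]# ps -ef | grep '[k]eepalived'
# Any router namespaces still present on the node can be listed with:
[root@overcloud-controller-0 ~]# ip netns list | grep qrouter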
Clone Of:
Environment:
Last Closed: 2016-05-12 16:24:11 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2016:1063
Private: no
Priority: normal
Status: SHIPPED_LIVE
Summary: openstack-neutron bug fix advisory
Last Updated: 2016-05-12 20:19:11 UTC

Description Alexander Chuzhoy 2015-11-27 22:33:47 UTC
rhel-osp-director: No "active" entry in "L3 agents hosting a router" listing after replacing a controller in an HA deployment.

Environment:
openstack-neutron-openvswitch-2015.1.0-12.el7ost.noarch
openstack-tripleo-0.0.7-0.1.1664e566.el7ost.noarch
openstack-heat-engine-2015.1.0-4.el7ost.noarch
openstack-tripleo-common-0.0.1.dev6-1.git49b57eb.el7ost.noarch
openstack-heat-api-cloudwatch-2015.1.0-4.el7ost.noarch
instack-undercloud-2.1.2-22.el7ost.noarch
openstack-ironic-common-2015.1.0-9.el7ost.noarch
openstack-heat-templates-0-0.6.20150605git.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-45.el7ost.noarch
openstack-neutron-common-2015.1.0-12.el7ost.noarch
openstack-heat-common-2015.1.0-4.el7ost.noarch
openstack-neutron-2015.1.0-12.el7ost.noarch

Steps to reproduce:
1. Deploy an HA overcloud with network isolation.
2. Launch an instance with a floating IP and make sure you're able to ping the floating IP (example commands below).
3. Replace a controller by following this procedure:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/Replacing_Controller_Nodes.html
4. Attempt to ping the floating IP.
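For reference, steps 2 and 4 can be exercised roughly as follows. This is a sketch only; the image, flavor, and network identifiers in angle brackets are placeholders, not values from this environment:

[stack@instack ~]$ nova boot --flavor m1.small --image cirros --nic net-id=<internal-net-id> test-vm
[stack@instack ~]$ neutron floatingip-create <external-net>
[stack@instack ~]$ neutron floatingip-associate <floatingip-id> <port-id-of-test-vm>
[stack@instack ~]$ ping -c 4 <floating-ip>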

Result:
The floating IP is not reachable.
The following output still lists the replaced controller, while the new one is missing:

[stack@instack ~]$ neutron l3-agent-list-hosting-router r1
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 3215d1b7-cbb7-4f75-88dd-3f701bbe1585 | overcloud-controller-0.localdomain | True           | :-)   | standby  |
| c72307ec-355e-4ae1-a198-76eb14d548a1 | overcloud-controller-1.localdomain | True           | xxx   | standby  |
| c5bddb05-781f-4c54-956a-b91bb2d8efab | overcloud-controller-2.localdomain | True           | :-)   | standby  |
+--------------------------------------+------------------------------------+----------------+-------+----------+
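A healthy L3-HA router has exactly one agent reporting ha_state "active". Which controller actually holds the router's VIP can be cross-checked on each node with something like the following (the router ID is a placeholder; on the active node the qr-/qg- interfaces carry the gateway and VIP addresses, on standby nodes they do not):

[root@overcloud-controller-0 ~]# ip netns exec qrouter-<router-id> ip addr show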



Here's the output from pcs status:
[root@overcloud-controller-2 ~]# pcs status        
Cluster name: tripleo_cluster                      
Last updated: Fri Nov 27 17:33:25 2015             
Last change: Fri Nov 27 17:02:50 2015              
Stack: corosync                                    
Current DC: overcloud-controller-0 (1) - partition with quorum
Version: 1.1.12-a14efad                                       
3 Nodes configured                                            
112 Resources configured                                      


Online: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]

Full list of resources:

 ip-192.0.2.6   (ocf::heartbeat:IPaddr2):       Started overcloud-controller-0 
 Clone Set: haproxy-clone [haproxy]                                            
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 ip-192.168.200.180     (ocf::heartbeat:IPaddr2):       Started overcloud-controller-2 
 ip-192.168.100.10      (ocf::heartbeat:IPaddr2):       Started overcloud-controller-3 
 Master/Slave Set: galera-master [galera]                                              
     Masters: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 ip-192.168.110.10      (ocf::heartbeat:IPaddr2):       Started overcloud-controller-0 
 ip-192.168.100.11      (ocf::heartbeat:IPaddr2):       Started overcloud-controller-2 
 ip-192.168.120.10      (ocf::heartbeat:IPaddr2):       Started overcloud-controller-3 
 Master/Slave Set: redis-master [redis]                                                
     Masters: [ overcloud-controller-0 ]                                               
     Slaves: [ overcloud-controller-2 overcloud-controller-3 ]                         
 Clone Set: mongod-clone [mongod]                                                      
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 Clone Set: rabbitmq-clone [rabbitmq]                                                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 Clone Set: memcached-clone [memcached]                                                
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]                                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ] 
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]                            
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]                      
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]                          
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]                                
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]                            
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-heat-api-clone [openstack-heat-api]                                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-nova-api-clone [openstack-nova-api]                                  
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]     
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: delay-clone [delay]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started overcloud-controller-0
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]

Failed actions:
    rabbitmq_monitor_10000 on overcloud-controller-0 'not running' (7): call=590, status=complete, exit-reason='none', last-rc-change='Fri Nov 27 16:41:55 2015', queued=0ms, exec=0ms


PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-2: Online
  overcloud-controller-3: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled


Expected result:
The launched instance should be reachable via its floating IP.

Comment 10 Hugh Brock 2016-02-05 11:59:13 UTC
This is fixed in a later version of netns-cleanup at https://review.gerrithub.io/#/c/248931/1/neutron-netns-cleanup.init. If that is backportable to 8.0, we should do that; if not, we should document the workaround as Marios describes above. Either way, this is not a director bug AFAICT. I have reassigned it to Neutron.

Comment 12 Assaf Muller 2016-02-05 16:26:33 UTC
Miguel, I see that it's in Mitaka and Liberty in Delorean; can you look into the availability of the fix in OSP 8? I was sure we had this closed.

Comment 13 Assaf Muller 2016-02-05 16:31:53 UTC
I see that the patch is available in OSP 8 rhos-8.0-rhel-7 branch of Neutron.

Comment 14 Miguel Angel Ajo 2016-02-16 10:57:43 UTC
Oops, sorry, I missed this bz assignment. Checking.

Comment 15 Miguel Angel Ajo 2016-02-16 11:32:28 UTC
Yes, as @assaf said, the fix was introduced in the rhos-8.0-rhel-7 branch, specifically in openstack-neutron-7.0.0-2.el7.

Comment 23 errata-xmlrpc 2016-05-12 16:24:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1063.html

