Bug 1773582 - [Octavia][Amphora Active Standby] - After deleting the amphora VM, the amphora is not recreated.
Keywords:
Status: CLOSED DUPLICATE of bug 1709925
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Assaf Muller
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-18 14:14 UTC by Alexander Stafeyev
Modified: 2019-11-20 15:59 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-20 15:59:08 UTC
Target Upstream Version:
Embargoed:



Description Alexander Stafeyev 2019-11-18 14:14:37 UTC
Description of problem:
We executed "openstack server delete <MASTER_AMPHORA_ID>".
The amphora VM is deleted but is not always recreated: sometimes it is recreated, sometimes it is not.

Version-Release number of selected component (if applicable):
13

How reproducible:
50-75%

Steps to Reproduce:
1. Deploy OpenStack with Octavia in active-standby topology.
2. Create load balancer objects and send traffic to the VIP to confirm the LB is working.
3. Execute "openstack server delete <MASTER_AMPHORA_VM_ID>".

If the VM is recreated, repeat the deletion several times; a command sketch follows below.
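
For reference, a minimal reproduction sketch using the standard OpenStack/Octavia CLI; the load balancer name (lb1), subnet (int_subnet), and the IDs are placeholders, not values from this deployment:

# Reproduction sketch (names/IDs are placeholders):
openstack loadbalancer create --name lb1 --vip-subnet-id int_subnet
openstack loadbalancer amphora list          # note which amphora has role MASTER
openstack server list --all                  # find the matching amphora-<UUID> VM
openstack server delete <MASTER_AMPHORA_VM_ID>
watch openstack server list --all            # a replacement amphora VM should appear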

Actual results:
The amphora VM is sometimes not recreated. We waited more than 35 minutes before concluding it is an issue.

Expected results:
The amphora VM should be recreated.

Additional info:
To recover from this situation, run the "openstack loadbalancer failover" command (a recovery sketch follows the console output below).

(overcloud) [stack@undercloud-0 ~]$  openstack server list --all 
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
| ID                                   | Name                                         | Status | Networks                                                  | Image                                  | Flavor |
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
| f5c582b8-87b1-4949-981c-2ed47bd7c2c9 | amphora-655b11f3-5e69-40e0-a0d1-dce54d62ccab | ACTIVE | int_net=192.168.2.16; lb-mgmt-net=172.24.0.10, 10.0.0.218 | octavia-amphora-13.0-20191031.1.x86_64 |        |
| 14a4626f-b9f3-4d72-a415-58e95eb331bf | amphora-703c9449-990b-4b8b-b2f5-39339003cdee | ACTIVE | int_net=192.168.2.12; lb-mgmt-net=172.24.0.23, 10.0.0.224 | octavia-amphora-13.0-20191031.1.x86_64 |        |
| 46219a32-0e60-4b39-9855-c80984bfda47 | lbtree-server2-xmuv3gvzoyyj                  | ACTIVE | int_net=192.168.2.18                                      | cirros-0.4.0-x86_64-disk.img           | cirros |
| 7b0d1c62-bdba-4429-8732-b77d6f0798bf | lbtree-server1-6ifxbfupex4j                  | ACTIVE | int_net=192.168.2.9, 10.0.0.211                           | cirros-0.4.0-x86_64-disk.img           | cirros |
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
(overcloud) [stack@undercloud-0 ~]$ openstack server delete f5c582b8-87b1-4949-981c-2ed47bd7c2c9
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ watch openstack server list --all 
(overcloud) [stack@undercloud-0 ~]$ openstack server delete b78411dd-4e88-4a65-bc03-e2c08fa9ac72
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ watch openstack server list --all 
(overcloud) [stack@undercloud-0 ~]$  openstack server list --all 
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
| ID                                   | Name                                         | Status | Networks                                                  | Image                                  | Flavor |
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
| 14a4626f-b9f3-4d72-a415-58e95eb331bf | amphora-703c9449-990b-4b8b-b2f5-39339003cdee | ACTIVE | int_net=192.168.2.12; lb-mgmt-net=172.24.0.23, 10.0.0.224 | octavia-amphora-13.0-20191031.1.x86_64 |        |
| 46219a32-0e60-4b39-9855-c80984bfda47 | lbtree-server2-xmuv3gvzoyyj                  | ACTIVE | int_net=192.168.2.18                                      | cirros-0.4.0-x86_64-disk.img           | cirros |
| 7b0d1c62-bdba-4429-8732-b77d6f0798bf | lbtree-server1-6ifxbfupex4j                  | ACTIVE | int_net=192.168.2.9, 10.0.0.211                           | cirros-0.4.0-x86_64-disk.img           | cirros |
+--------------------------------------+----------------------------------------------+--------+-----------------------------------------------------------+----------------------------------------+--------+
(overcloud) [stack@undercloud-0 ~]$
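
For completeness, a sketch of the recovery path mentioned above; the load balancer ID is a placeholder, taken from "openstack loadbalancer list":

openstack loadbalancer list                  # find the affected load balancer ID
openstack loadbalancer failover <LB_ID>      # forces Octavia to rebuild the amphorae
openstack loadbalancer show <LB_ID>          # wait for provisioning_status ACTIVE
watch openstack server list --all            # the replacement amphora VM(s) should appear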

Comment 2 Carlos Goncalves 2019-11-20 09:52:01 UTC
This might be caused by BZ #1709925. Newly created load balancers have amphorae correctly configured with the controller_ip_port_list value. Once an amphora failover occurs, the new amphora does not have the controller_ip_port_list value set, so it never checks in with any health manager service instance but is, by design, assumed to be ACTIVE.
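
To illustrate the above, one way to check whether a given amphora was built with the health-manager endpoints is to inspect its agent config; the path and addresses below are typical defaults and may differ per deployment:

# Run inside the suspect amphora (e.g. via SSH over the lb-mgmt-net):
grep -A 2 '^\[health_manager\]' /etc/octavia/amphora-agent.conf
# On a correctly built amphora this shows something like (addresses are examples):
#   [health_manager]
#   controller_ip_port_list = 172.24.0.2:5555, 172.24.0.3:5555
# An amphora hit by BZ #1709925 has this list empty, so it never sends heartbeats.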

Comment 3 Brian Haley 2019-11-20 15:59:08 UTC
Closing as a duplicate of bug 1709925 after discussing it in our bug triage call.

*** This bug has been marked as a duplicate of bug 1709925 ***

