Bug 1966429 - Stale amphorae listed even after deleting load balancers with --cascade
Summary: Stale amphorae listed even after deleting load balancers with --cascade
Keywords:
Status: CLOSED DUPLICATE of bug 1992691
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Gregory Thiemonge
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-01 07:39 UTC by Asma Syed Hameed
Modified: 2022-08-17 15:04 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-07 15:22:20 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-4321 (Last Updated: 2022-08-17 15:04:26 UTC)

Description Asma Syed Hameed 2021-06-01 07:39:38 UTC
Description of problem:
Our environment had around 7000+ LBs, and after deleting the LBs with --cascade we still see stale amphora entries.

Note: the LBs themselves were deleted successfully.

Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20210205.n.0


Steps to Reproduce:
1. Create 7000+ LBs
2. Delete the LBs with --cascade (a repro sketch follows)
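
A minimal repro sketch of these steps (illustrative only: the VIP subnet ID and LB names are placeholders, and there is no pacing or wait for the LBs to go ACTIVE):

# Create N load balancers on an existing VIP subnet.
for i in $(seq 1 7000); do
    openstack loadbalancer create --name lb-$i --vip-subnet-id <vip-subnet-id>
done

# Cascade-delete every load balancer.
for lb in $(openstack loadbalancer list -f value -c id); do
    openstack loadbalancer delete --cascade "$lb"
done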

Actual results:

(overcloud) [stack@undercloud ~]$ openstack loadbalancer list

(overcloud) [stack@undercloud ~]$ openstack loadbalancer amphora list

+--------------------------------------+-----------------+--------+------------+---------------+-------------+
| id                                   | loadbalancer_id | status | role       | lb_network_ip | ha_ip       |
+--------------------------------------+-----------------+--------+------------+---------------+-------------+
| 489c62e1-53cb-4a9a-a6b1-ba4f7a48d43c | None            | ERROR  | STANDALONE | 172.24.0.58   | 10.2.0.81   |
| a8d2a6aa-0d6a-4888-8817-184edff3cdc9 | None            | ERROR  | STANDALONE | 172.24.1.52   | 10.2.6.96   |
| 95155a99-aa06-45ee-bb4c-79153c09b71c | None            | ERROR  | STANDALONE | 172.24.5.178  | 10.2.7.140  |
| 9ea58bae-18df-4746-a69d-4ca7620f7c08 | None            | ERROR  | STANDALONE | 172.24.0.147  | 10.2.13.193 |
| a382f275-ec1e-4ff4-8c82-421ea1409828 | None            | ERROR  | STANDALONE | 172.24.3.194  | 10.2.5.223  |
| 0d6ce9e6-e9b4-4de9-9541-5a283af6445e | None            | ERROR  | STANDALONE | 172.24.1.210  | 10.2.9.56   |
| 60ee7402-7392-4142-a684-84df436608d6 | None            | ERROR  | STANDALONE | 172.24.4.156  | 10.2.11.87  |
| 98250319-1376-4cec-b2d8-23009694bb7f | None            | ERROR  | STANDALONE | 172.24.8.37   | 10.2.11.19  |
| d2f4693a-910f-4e23-ba4f-923e636e7428 | None            | ERROR  | STANDALONE | 172.24.6.100  | 10.2.3.245  |
| 944f3ac9-0635-4645-b4a3-1aaa2e4f9723 | None            | ERROR  | STANDALONE | 172.24.6.128  | 10.2.9.120  |
| aaa08a81-ece1-466e-ad71-939993e4c87d | None            | ERROR  | STANDALONE | 172.24.9.66   | 10.2.2.182  |
| ee1ca66d-2cda-4f53-bf4f-866a7a69c9b4 | None            | ERROR  | STANDALONE | 172.24.5.238  | 10.2.8.55   |
| 519fe471-9e4a-49d1-9134-8f02e6690f1a | None            | ERROR  | STANDALONE | 172.24.7.169  | 10.2.12.116 |
| eff6d75d-f0d4-4423-801d-0cabde36db1d | None            | ERROR  | STANDALONE | 172.24.1.6    | 10.2.7.183  |
| ad82ac90-e6a1-4c43-9763-b32b696909f5 | None            | ERROR  | STANDALONE | 172.24.1.236  | 10.2.7.88   |
| e6598369-ebde-417b-8f90-1cedffcd95d6 | None            | ERROR  | STANDALONE | 172.24.0.92   | 10.2.15.164 |
| 16cc3a23-9fa2-42e0-8cd8-559352b34052 | None            | ERROR  | STANDALONE | 172.24.0.168  | 10.2.10.85  |
| 76ebf402-3a7a-4156-a983-353fa169bcbf | None            | ERROR  | STANDALONE | 172.24.3.149  | 10.2.13.67  |
| 6059f30c-5328-4dec-9ba4-bf629b8d3fcb | None            | ERROR  | STANDALONE | 172.24.2.62   | 10.2.13.176 |
| 641a1f24-c5bc-49f6-9f11-c7eb9979dc7f | None            | ERROR  | STANDALONE | 172.24.0.26   | 10.2.6.141  |
| 4c7c24b1-a202-4a7f-91e2-2de230cbdd18 | None            | ERROR  | STANDALONE | 172.24.0.249  | 10.2.9.116  |
| 0d654c7b-5139-415d-8b6c-8b3413cd30f4 | None            | ERROR  | STANDALONE | 172.24.1.12   | 10.2.8.108  |
| c740a456-196e-4b81-bece-175de8d235dd | None            | ERROR  | STANDALONE | 172.24.4.62   | 10.2.7.44   |
| 99a7197c-b590-4a51-9bad-740fda9b392d | None            | ERROR  | STANDALONE | 172.24.0.156  | 10.2.11.213 |
| 423968d6-235b-4b45-a041-c554abf3820a | None            | ERROR  | STANDALONE | 172.24.5.93   | 10.2.4.17   |
| 10b543f9-0a50-4be6-9251-a9b8a83075d7 | None            | ERROR  | STANDALONE | 172.24.0.106  | 10.2.13.136 |
| 0f7d788d-b9b6-4151-8060-ce51c45be9dd | None            | ERROR  | STANDALONE | 172.24.5.14   | 10.2.13.133 |
| 63bae10f-8f61-4441-a3ca-ba3d755a1a43 | None            | ERROR  | STANDALONE | 172.24.4.180  | 10.2.11.141 |
| de3783b5-cd63-4d9d-b1f4-e76a7f763736 | None            | ERROR  | STANDALONE | 172.24.5.75   | 10.2.2.233  |
| 81cc2ebc-986a-40c3-8599-91c933adb979 | None            | ERROR  | STANDALONE | 172.24.0.85   | 10.2.4.117  |
| e94fafc2-48bd-443a-8a63-326f52ac9fdc | None            | ERROR  | STANDALONE | 172.24.1.95   | 10.2.0.116  |
| 80b58281-130d-45cd-8060-59d4a87bf852 | None            | ERROR  | STANDALONE | 172.24.2.251  | 10.2.8.203  |
| f30cc6b9-b75b-43fc-ab62-941fd6ae1775 | None            | ERROR  | STANDALONE | 172.24.0.158  | 10.2.11.89  |
| c0a4a271-9b86-4c78-82eb-59b73afa051c | None            | ERROR  | STANDALONE | 172.24.0.183  | 10.2.10.224 |
| 048c436e-d6e2-47ed-b3d5-44ca35d60629 | None            | ERROR  | STANDALONE | 172.24.6.66   | 10.2.6.107  |
| 23e32baa-daec-476d-8153-05cc03418609 | None            | ERROR  | STANDALONE | 172.24.4.225  | 10.2.10.205 |
| bf373e9c-ad97-4b91-bb6a-01ad3911959a | None            | ERROR  | STANDALONE | 172.24.3.216  | 10.2.15.150 |
| c3ce831a-4996-4702-9796-dd9b7dd681c1 | None            | ERROR  | STANDALONE | 172.24.5.125  | 10.2.10.189 |
| 8562a14f-e5ee-48fc-9cab-e05ee5b2f3b2 | None            | ERROR  | STANDALONE | 172.24.2.229  | 10.2.7.91   |
| 11bc617a-8f95-46bb-b91a-8f9118c5dbf9 | None            | ERROR  | STANDALONE | 172.24.5.240  | 10.2.7.59   |
| 977488b3-ae7b-4578-a1b8-8c7ec5a460aa | None            | ERROR  | STANDALONE | 172.24.5.56   | 10.2.6.88   |
| 5c5effcf-4e98-4592-a7f8-7e1e80a4d3f5 | None            | ERROR  | STANDALONE | 172.24.4.24   | 10.2.14.187 |
| 57af0523-7ab8-4cfc-9db8-dde7f5d5ff92 | None            | ERROR  | STANDALONE | 172.24.6.128  | 10.2.10.57  |
| b67a6fcb-0ed4-46d4-ab71-ad378d64ed8a | None            | ERROR  | STANDALONE | 172.24.5.194  | 10.2.4.177  |
| a2e72654-12cc-4d83-9463-867aa4413d7c | None            | ERROR  | STANDALONE | 172.24.3.209  | 10.2.3.113  |
| d33b42fe-4038-4015-8b02-fcda88dde4aa | None            | ERROR  | STANDALONE | 172.24.2.107  | 10.2.9.233  |
| 3c810012-718a-4859-83f8-6d7923e4a6a3 | None            | ERROR  | STANDALONE | 172.24.5.66   | 10.2.2.172  |
| 6e9b90e6-e908-4541-a3ae-e75706be8f9a | None            | ERROR  | STANDALONE | 172.24.8.27   | 10.2.13.64  |
| 34748b48-701c-4751-acb4-0d0f0fce390c | None            | ERROR  | STANDALONE | 172.24.8.68   | 10.2.4.31   |
| f407d6eb-d8eb-46a6-877e-fe669f6f9d9d | None            | ERROR  | STANDALONE | 172.24.6.95   | 10.2.12.202 |
| c936843b-5793-4648-8502-a0fdd8614dce | None            | ERROR  | STANDALONE | 172.24.7.205  | 10.2.5.129  |
| 9989b5c1-823e-4adf-b782-167a3a859c87 | None            | ERROR  | STANDALONE | 172.24.6.162  | 10.2.15.32  |

Expected results:
The amphorae should be deleted as well; no stale amphora entries should remain.
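
One way to verify the expected state, using the --status filter of the amphora list command (it should return no rows once all amphorae have been cleaned up):

(overcloud) [stack@undercloud ~]$ openstack loadbalancer amphora list --status ERROR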

Comment 2 Gregory Thiemonge 2021-06-28 06:49:56 UTC
This is weird: there are amphorae that share the same management IP address; one is in ERROR, the other is ALLOCATED:

| 99a7197c-b590-4a51-9bad-740fda9b392d | 34a9c851-2e32-484e-86f5-4079f778c171 | ERROR     | NULL                                 | 172.24.0.156  | 10.2.11.86  | 10.2.11.213 | 04a4061b-36ee-4d86-8ef4-381faa861f28 | 1b0e4b91-73c7-42d7-a030-735d769ff940 | STANDALONE | 2021-06-10 14:57:33 |         1 | NULL           |       1 |          NULL | nova        | 2021-05-11 14:57:33 | 2021-05-27 15:53:31 | 0d8164aa-65f3-42b4-9fbf-289e522533fb | 65             |
| db011293-c33e-467b-a3b7-5bf3b34532fb | 68d7ed4a-3e53-4f18-a245-0cfaef66f264 | ALLOCATED | dc371928-24c5-4320-8b7f-eb3baf8bde6a | 172.24.0.156  | 10.2.5.48   | 10.2.5.41   | 9fcaf80f-ef20-4ecf-96db-06f984451ad3 | 3aee0bf2-247a-4109-b93a-3cbaae9e4530 | STANDALONE | 2021-06-25 15:38:28 |         0 | NULL           |       1 |          NULL | nova        | 2021-05-26 15:38:28 | 2021-05-26 15:41:21 | 0d8164aa-65f3-42b4-9fbf-289e522533fb | 65             |

updated_at looks good for both, but that doesn't mean the compute instance of the amphora in ERROR still exists.

I found logs about the creation of the amphorae that are now in ERROR (they were created without any issue and were associated with a load balancer), but some logs are missing, so I cannot see what happened afterwards.
Some findings (see the query sketch below):
- the amphorae are no longer associated with a load balancer (load_balancer_id is NULL)
- the load_balancer_id no longer exists in the amphora list dump (it has probably been deleted)
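
Both findings can be cross-checked in the Octavia database. A hedged sketch, assuming shell access to the octavia database on a controller node (the exact mysql invocation depends on the deployment):

# Amphorae detached from any load balancer and stuck in ERROR.
mysql octavia -e "SELECT id, status, lb_network_ip FROM amphora
                  WHERE load_balancer_id IS NULL AND status = 'ERROR';"

# Management IPs shared by more than one non-DELETED amphora.
mysql octavia -e "SELECT lb_network_ip, COUNT(*) AS n FROM amphora
                  WHERE status != 'DELETED'
                  GROUP BY lb_network_ip HAVING n > 1;"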

There are also a lot of errors about failed LB deletions and failed failovers; it looks like the VIP network/ports were deleted:

controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks [-] Unable to unplug vip from load balancer 1f9a6a4c-d454-463e-8497-1a971c1993a8: octavia.network.base.PluggedVIPNotFound: Can't unplug vip because vip subnet 8e8df4ce-a293-4078-a724-d28ee1a04338 was not found
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks Traceback (most recent call last):
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py", line 193, in _get_resource
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     resource_type)(resource_id)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 844, in show_subnet
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     return self.get(self.subnet_path % (subnet), params=_params)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 354, in get
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     headers=headers, params=params)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 331, in retry_request
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     headers=headers, params=params)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 294, in do_request
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     self._handle_fault_response(status_code, replybody, resp)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 269, in _handle_fault_response
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     exception_handler_v20(status_code, error_body)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 93, in exception_handler_v20
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     request_ids=request_ids)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks neutronclient.common.exceptions.NotFound: Subnet 8e8df4ce-a293-4078-a724-d28ee1a04338 could not be found.
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks Neutron server returns request_ids: ['req-1c02a76f-38f1-4086-815b-7a1ebac60d68']
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks 
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks During handling of the above exception, another exception occurred:
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks 
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks Traceback (most recent call last):
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 571, in unplug_vip
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     subnet = self.get_subnet(vip.subnet_id)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py", line 246, in get_subnet
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     return self._get_resource('subnet', subnet_id, context=context)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/base.py", line 201, in _get_resource
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     [w.capitalize() for w in resource_type.split('_')]))(message)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks octavia.network.base.SubnetNotFound: subnet not found (subnet id: 8e8df4ce-a293-4078-a724-d28ee1a04338).
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks 
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks During handling of the above exception, another exception occurred:
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks 
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks Traceback (most recent call last):
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 430, in execute
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     self.network_driver.unplug_vip(loadbalancer, loadbalancer.vip)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 576, in unplug_vip
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks     raise base.PluggedVIPNotFound(msg)
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks octavia.network.base.PluggedVIPNotFound: Can't unplug vip because vip subnet 8e8df4ce-a293-4078-a724-d28ee1a04338 was not found
controller-0/octavia/worker.log.6:2021-05-06 13:04:53.558 78 ERROR octavia.controller.worker.v1.tasks.network_tasks 


2021-05-06 13:04:53.663 78 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port be94b98b-7905-4458-b1ce-8d500eb3e860 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py:259
2021-05-06 13:04:53.663 78 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] VIP instance port be94b98b-7905-4458-b1ce-8d500eb3e860 already deleted. Skipping. deallocate_vip /usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:345
2021-05-06 13:04:53.738 78 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 445dffc3-1cf8-4ee7-94f7-362f3b6c58a0 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py:259
2021-05-06 13:04:53.738 78 WARNING octavia.network.drivers.neutron.allowed_address_pairs [-] Can't deallocate VIP because the vip port 445dffc3-1cf8-4ee7-94f7-362f3b6c58a0 cannot be found in neutron. Continuing cleanup.: octavia.network.base.PortNotFound: port not found (port id: 445dffc3-1cf8-4ee7-94f7-362f3b6c58a0).




2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs [-] Error creating a port on network 7ffb336b-0450-4d43-a950-ae5eca853f5c due to Network 7ffb336b-0450-4d43-a950-ae5eca853f5c could not be found.
Neutron server returns request_ids: ['req-a1fcb84e-ff8d-4f95-9e53-be1bab8df675'].: neutronclient.common.exceptions.NetworkNotFoundClient: Network 7ffb336b-0450-4d43-a950-ae5eca853f5c could not be found.
Neutron server returns request_ids: ['req-a1fcb84e-ff8d-4f95-9e53-be1bab8df675']
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs Traceback (most recent call last):
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 839, in create_port
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     new_port = self.neutron_client.create_port({constants.PORT: port})
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 803, in create_port
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     return self.post(self.ports_path, body=body)
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 359, in post
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     headers=headers, params=params)
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 294, in do_request
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     self._handle_fault_response(status_code, replybody, resp)
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 269, in _handle_fault_response
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     exception_handler_v20(status_code, error_body)
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs   File "/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 93, in exception_handler_v20
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs     request_ids=request_ids)
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs neutronclient.common.exceptions.NetworkNotFoundClient: Network 7ffb336b-0450-4d43-a950-ae5eca853f5c could not be found.
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs Neutron server returns request_ids: ['req-a1fcb84e-ff8d-4f95-9e53-be1bab8df675']
2021-05-10 07:26:53.933 78 ERROR octavia.network.drivers.neutron.allowed_address_pairs

Comment 7 Gregory Thiemonge 2021-09-01 12:23:28 UTC
I think I figured out what the issue was:

TL;DR: this is probably a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1992691, which is targeted for OSP 16.1z8


There's a bug in Octavia: the housekeeping service may try to rotate the certificates of amphorae in DELETED state.
Basically, the housekeeping service is trying to reach amphorae that have been deleted (whose compute servers are also gone) in order to perform maintenance tasks.

This triggers several errors that are visible in the logs:

Housekeeping tries to connect to an amphora, but the amphora presents an unexpected certificate (1560 occurrences in the logs):

2021-05-23 06:10:08.711 7 ERROR urllib3.connection [-] Certificate did not match expected hostname: 489c62e1-53cb-4a9a-a6b1-ba4f7a48d43c. Certificate: {'subject': ((('commonName', 'a403e8bd-d00b-4d95-94aa-bc81275aa907'),),), 'issuer': ((('countryName', 'US'),), (('stateOrProvinceName', 'Denial'),), (('localityName', 'Springfield'),), (('organizationName', 'Dis'),), (('commonName', 'www.example.com'),)), 'version': 3, 'serialNumber': 'EA63F8F42CA74B62BDDBF9C89DB55A50', 'notBefore': 'May 21 16:43:01 2021 GMT', 'notAfter': 'Jun 20 16:43:01 2021 GMT', 'subjectAltName': (('DNS', 'a403e8bd-d00b-4d95-94aa-bc81275aa907'),)}: ssl.CertificateError: hostname '489c62e1-53cb-4a9a-a6b1-ba4f7a48d43c' doesn't match 'a403e8bd-d00b-4d95-94aa-bc81275aa907'

-> The IP address of the amphora in the database has been reused by another amphora; this is confirmed by the earlier findings in the database dump, where 2 amphora entries shared the same IP address.


When housekeeping fails to update the certificates on a deleted amphora, it sets the amphora's status to ERROR, so DELETED amphorae end up in the list of amphorae in ERROR status.
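
Pending the fix tracked in BZ 1992691, a hypothetical manual cleanup would be to move the orphaned rows back to DELETED, but only after confirming they have no load balancer and their compute servers are gone (the query below is illustrative, not a supported procedure):

mysql octavia -e "UPDATE amphora SET status = 'DELETED'
                  WHERE status = 'ERROR' AND load_balancer_id IS NULL;"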

Comment 8 Gregory Thiemonge 2021-09-01 13:33:38 UTC
Hi,

If you don't have any objections, I will close this BZ as a duplicate of BZ 1992691.

Comment 9 Gregory Thiemonge 2021-09-07 15:22:20 UTC
Marked as a duplicate of BZ 1992691

*** This bug has been marked as a duplicate of bug 1992691 ***

