Bug 2001120 - Octavia fails to delete Load Balancer, and sqlalchemy fails to mark Load Balancer to ERROR
Summary: Octavia fails to delete Load Balancer, and sqlalchemy fails to mark Load Balancer to ERROR
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ga
Target Release: 17.0
Assignee: Gregory Thiemonge
QA Contact: Nikolai Ilushko
URL:
Whiteboard:
Depends On:
Blocks: 2040691 2040697
 
Reported: 2021-09-03 20:44 UTC by Robin Cernin
Modified: 2022-09-21 12:17 UTC
CC: 9 users

Fixed In Version: openstack-octavia-8.0.1-0.20211203161902.e4a0136.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2040691
Environment:
Last Closed: 2022-09-21 12:17:06 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
OpenStack Storyboard 2009651 0 None None None 2021-10-29 06:41:30 UTC
OpenStack Storyboard 2009652 0 None None None 2021-10-29 06:38:17 UTC
OpenStack gerrit 815973 0 None MERGED Fix LB set in ERROR too early in the revert flow 2021-12-09 14:14:38 UTC
OpenStack gerrit 818093 0 None MERGED Fix LB set in ERROR too early in the revert flow 2021-12-09 14:15:17 UTC
OpenStack gerrit 820350 0 None NEW Fix LB set in ERROR too early in MapLoadbalancerToAmphora 2021-12-22 14:11:16 UTC
Red Hat Issue Tracker OSP-8210 0 None None None 2021-11-15 12:51:57 UTC
Red Hat Product Errata RHEA-2022:6543 0 None None None 2022-09-21 12:17:40 UTC

Description Robin Cernin 2021-09-03 20:44:39 UTC
Description of problem:

OSP 13 with ML2/OVS
OCP 3.11 with Kuryr

Please note this is NOT a Kuryr bug; we are already implementing a workaround (in Kuryr) for this issue by adding retries to the DELETE requests sent to Octavia in https://github.com/openshift/kuryr-kubernetes/pull/548
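
For illustration only, the retry-on-DELETE workaround is along these lines (the real change is in the linked kuryr-kubernetes PR; the endpoint URL and token handling below are placeholders, not Kuryr code):
~~~
# Hypothetical sketch of retrying the DELETE; not the actual Kuryr patch.
# Assumes a reachable Octavia v2 API endpoint and a valid Keystone token.
import time

import requests


def delete_lb_with_retries(octavia_endpoint, token, lb_id, attempts=5, delay=30):
    url = "{}/v2.0/lbaas/loadbalancers/{}".format(octavia_endpoint, lb_id)
    headers = {"X-Auth-Token": token}
    for _ in range(attempts):
        requests.delete(url, headers=headers)   # ask Octavia to delete the LB
        time.sleep(delay)
        if requests.get(url, headers=headers).status_code == 404:
            return True                         # LB is gone
        # LB still present (e.g. left in ERROR as described below): retry
    return False
~~~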

It is still worth mentioning that sometimes, when we delete an LB in Octavia, the LB is not properly marked as DELETED.

In Kuryr logs:
~~~
2021-09-03 07:42:35.297 22599 WARNING kuryr_kubernetes.controller.drivers.lbaasv2 [-] Releasing loadbalancer a708a225-86b8-4a18-9ff1-1d6405e90454 with ERROR status
~~~

In Octavia worker.log:
~~~
2021-09-03 07:42:35.554 25 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer 'a708a225-86b8-4a18-9ff1-1d6405e90454'...
~~~

We can see the request to update the DB provisioning_status to DELETED:
~~~
2021-09-03 07:42:38.832 25 DEBUG octavia.controller.worker.v1.tasks.database_tasks [req-41433bc3-9b2b-413d-a15b-549984d27533 - dc58480f8d864ea9b00ffef263be3819 - - -] Mark DELETED in DB for load balancer id: a708a225-86b8-4a18-9ff1-1d6405e90454 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
~~~

But looking at the DB:
~~~
MariaDB [octavia]> select * from load_balancer where id='a708a225-86b8-4a18-9ff1-1d6405e90454' \G;
*************************** 1. row ***************************
         project_id: dc58480f8d864ea9b00ffef263be3819
                 id: a708a225-86b8-4a18-9ff1-1d6405e90454
               name: momo1/lb-momohttpd-02
        description: NULL
provisioning_status: ERROR
   operating_status: OFFLINE
            enabled: 1
           topology: SINGLE
    server_group_id: NULL
         created_at: 2021-09-03 11:42:25
         updated_at: 2021-09-03 11:42:39
           provider: amphora
          flavor_id: NULL
1 row in set (0.00 sec)
~~~

Interestingly, it seems that MarkLBDeletedInDB.revert is never executed:
~~~
class MarkLBDeletedInDB(BaseDatabaseTask):
    """Mark the load balancer deleted in the DB.

    Since sqlalchemy will likely retry by itself always revert if it fails
    """

    def execute(self, loadbalancer):
        """Mark the load balancer as deleted in DB.

        :param loadbalancer: Load balancer object to be updated
        :returns: None
        """

        LOG.debug("Mark DELETED in DB for load balancer id: %s",
                  loadbalancer.id)
        self.loadbalancer_repo.update(db_apis.get_session(),
                                      loadbalancer.id,
                                      provisioning_status=constants.DELETED)

    def revert(self, loadbalancer, *args, **kwargs):
        """Mark the load balancer as broken and ready to be cleaned up.

        :param loadbalancer: Load balancer object that failed to update
        :returns: None
        """

        LOG.warning("Reverting mark load balancer deleted in DB "
                    "for load balancer id %s", loadbalancer.id)
        self.task_utils.mark_loadbalancer_prov_status_error(loadbalancer.id)

~~~

Looking at all the Octavia logs:
~~~
()[octavia@osp13-controller0 /]$ grep "Reverting mark load balancer deleted" /var/log/octavia/*
()[octavia@osp13-controller0 /]$ grep "Failed to update load balancer" *
()[octavia@osp13-controller0 /]$
~~~


Octavia failed to remove the LB properly.

I tried tuning the DB with the following settings; it improved things, but the issue still happened (about once in 8 hours):

~~~
[mysqld]
innodb_buffer_pool_instances = 2
innodb_buffer_pool_size = 5G
innodb_lock_wait_timeout = 120
net_write_timeout = 120
net_read_timeout = 120
connect_timeout = 120
max_connections = 8192
~~~

Comment 1 Gregory Thiemonge 2021-09-06 07:53:48 UTC
Hi Robin,

Do you have more logs for this issue (full octavia logs or even sosreports)?
Was the LB in ERROR before the DELETE request?

Comment 2 Robin Cernin 2021-09-06 07:59:34 UTC
Yes, it was in the ERROR state before the DELETE. I will reproduce the issue and attach the logs.

Comment 3 Robin Cernin 2021-09-07 01:33:45 UTC
I have finally hit the issue where Kuryr had to retry deleting an LB that was in ERROR state.

[root@osp13-controller0 ~]# grep Deleting /var/log/containers/octavia/worker.log| cut -d "]" -f2-| sort | uniq -c | grep -v "^      1"
      2  Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...

Attaching the logs, with SQLAlchemy DEBUG enabled as well as Octavia DEBUG.

Comment 4 Robin Cernin 2021-09-07 01:37:06 UTC
$ grep "Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'" worker.log
2021-09-06 21:12:57.178 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:17:58.317 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...

$ grep 1649aded-36f4-43dd-bf16-8c61f2712096 worker.log
2021-09-06 21:12:40.032 26 INFO octavia.controller.queue.v1.endpoints [-] Creating load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:12:42.158 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Get load balancer from DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096  execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:393
2021-09-06 21:12:44.680 26 DEBUG octavia.controller.worker.v1.tasks.network_tasks [-] Setup SG for loadbalancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py:370
2021-09-06 21:12:46.006 26 DEBUG octavia.controller.worker.v1.tasks.network_tasks [-] Getting subnet for LB: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py:385
2021-09-06 21:12:47.787 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Allocating an Amphora for load balancer with id 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:514
2021-09-06 21:12:47.806 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] No Amphora available for load balancer with id 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:540
     |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': u'59913b5d-1ca0-40c1-a74b-478dc52e1cb1'}
           |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_group_id': None, 'flavor': {}, 'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': None}
                    |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': u'edaeff5d-fdd2-4c5b-8b94-76ba1ac81886'}
                       |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': <octavia.common.data_models.Vip object at 0x7fcd81452350>, 'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': <octavia.common.data_models.LoadBalancer object at 0x7fcd69d447d0>}
                             |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': <octavia.common.data_models.LoadBalancer object at 0x7fcd68280550>}
                                |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': None}
2021-09-06 21:12:56.476 26 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer 1649aded-36f4-43dd-bf16-8c61f2712096 in the database.
2021-09-06 21:12:57.178 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:13:03.551 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
2021-09-06 21:17:58.317 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:18:10.154 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125

Comment 21 Gregory Thiemonge 2021-10-28 06:12:26 UTC
I reproduced it in my env on OSP13:

One LB is still in ERROR in the DB after a successful DELETE call:

MariaDB [octavia]> select * from load_balancer where id  = '9ee84739-d42e-4ee3-9d24-7712fc027fa0'\G
*************************** 1. row ***************************
         project_id: 77f8afea8d4e46f99574157e5b12b396
                 id: 9ee84739-d42e-4ee3-9d24-7712fc027fa0
               name: lb-138
        description: NULL
provisioning_status: ERROR
   operating_status: OFFLINE
            enabled: 1
           topology: SINGLE
    server_group_id: NULL
         created_at: 2021-10-27 19:33:24
         updated_at: 2021-10-27 19:34:04
           provider: amphora
          flavor_id: NULL
1 row in set (0.00 sec)


2021-10-27 19:33:27.066 25 INFO octavia.controller.queue.v1.endpoints [-] Creating load balancer '9ee84739-d42e-4ee3-9d24-7712fc027fa0'...
[..]
2021-10-27 19:33:27.525 25 DEBUG octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (3c0c9cae-fcd7-47ff-ac22-43a8a4aff3d4) transitioned into state 'RUNNING' from state 'PENDING' _flow_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:145

Load balancer creation fails because of a quota issue in Nova.
The MapLoadbalancerToAmphora task is reverted, and in the revert function, the provisioning status of the load balancer is set to ERROR:

2021-10-27 19:33:48.431 25 WARNING octavia.controller.worker.v1.tasks.database_tasks [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] Reverting Amphora allocation for the load balancer 9ee84739-d42e-4ee3-9d24-7712fc027fa0 in the database.
[..]
2021-10-27 19:33:48.444 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:33:48.446 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:33:48.449 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 33, 48, 446711), 'provisioning_status': 'ERROR'}
2021-10-27 19:33:48.455 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT


Now that the provisioning status is no longer a PENDING_* state, the load balancer can be deleted (a minimal sketch of this API-side check follows the logs below):

In the api:
2021-10-27 19:33:58.869 21 INFO octavia.api.v2.controllers.load_balancer [req-70c482a0-b85f-41e4-bdd8-2c7f8468413b - 77f8afea8d4e46f99574157e5b12b396 - default default] Sending delete Load Balancer 9ee84739-d42e-4ee3-9d24-7712fc027fa0 to provider amphora

In the worker:
2021-10-27 19:33:58.916 25 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '9ee84739-d42e-4ee3-9d24-7712fc027fa0'...
[..]
2021-10-27 19:34:02.698 25 DEBUG octavia.controller.worker.v1.tasks.database_tasks [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] Mark DELETED in DB for load balancer id: 9ee84739-d42e-4ee3-9d24-7712fc027fa0 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
[..]
2021-10-27 19:34:02.705 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:34:02.707 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:34:02.708 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 34, 2, 707414), 'provisioning_status': 'DELETED'}
2021-10-27 19:34:02.716 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT
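
As a minimal sketch (not Octavia's actual code) of the API-side check mentioned above: a DELETE is rejected only while the load balancer is in a PENDING_* state, so as soon as the revert sets ERROR the DELETE goes through, even though the create flow is still reverting elsewhere:
~~~
# Minimal illustration of the idea; not Octavia's actual API code.
def delete_is_accepted(provisioning_status):
    # PENDING_CREATE / PENDING_UPDATE / PENDING_DELETE block the request
    return not provisioning_status.startswith("PENDING_")


assert not delete_is_accepted("PENDING_CREATE")   # 409, immutable object
assert delete_is_accepted("ERROR")                # DELETE is accepted
~~~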


But the octavia-create-loadbalancer-flow hasn't finished yet; it is still running on another controller:

The LoadBalancerIDToErrorOnRevertTask is reverted and sets the provisioning_status to ERROR:

2021-10-27 19:34:03.028 25 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (a1796677-5426-4945-aa7c-a745b0aa6e61) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194
2021-10-27 19:34:04.130 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:34:04.132 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:34:04.133 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 34, 4, 132545), 'provisioning_status': 'ERROR'}
2021-10-27 19:34:04.135 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT
[..]
2021-10-27 19:34:04.173 25 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (a1796677-5426-4945-aa7c-a745b0aa6e61) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
[..]
2021-10-27 19:34:04.246 25 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (3c0c9cae-fcd7-47ff-ac22-43a8a4aff3d4) transitioned into state 'REVERTED' from state 'RUNNING'


The issue is that the MapLoadbalancerToAmphora task shouldn't have set the provisioning_status to ERROR on revert; only the first task of the flow should set this status.
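
As a minimal taskflow sketch (not Octavia code, just an illustration of the revert ordering): reverts run in reverse order, so the lifecycle task that starts the flow reverts last; an intermediate task that also marks the LB ERROR does it too early and opens the window the DELETE slipped into.
~~~
# Minimal taskflow illustration of revert ordering; not Octavia code.
# Class names loosely mirror the Octavia tasks discussed above.
from taskflow import engines, task
from taskflow.patterns import linear_flow


class LBToErrorOnRevert(task.Task):       # stands in for the lifecycle task
    def execute(self):
        pass

    def revert(self, *args, **kwargs):
        print("lifecycle revert (runs LAST): mark LB ERROR")


class MapLBToAmphora(task.Task):          # stands in for the intermediate task
    def execute(self):
        print("no spare amphora available")

    def revert(self, *args, **kwargs):
        print("intermediate revert (runs FIRST): must NOT mark LB ERROR here")


class BootAmphora(task.Task):
    def execute(self):
        raise RuntimeError("nova quota exceeded")   # triggers the revert


flow = linear_flow.Flow("create-lb").add(
    LBToErrorOnRevert(), MapLBToAmphora(), BootAmphora())

try:
    engines.run(flow)
except RuntimeError:
    # Reverts ran in reverse order: BootAmphora (no-op), MapLBToAmphora,
    # then LBToErrorOnRevert -- only that last revert should touch the status.
    pass
~~~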

Note that the MapLoadbalancerToAmphora task was removed in Xena, but previous releases are probably affected by this issue.

Comment 26 Gregory Thiemonge 2021-10-29 15:26:21 UTC
Please note that the proposed patch fixes all possible similar behaviors on master. Branches from stable/train to stable/wallaby will require an additional patch for code that was removed during the Xena release cycle.

Comment 28 Nikolai Ilushko 2022-08-25 10:41:20 UTC
Verified by running the following commands: 
# Puddle version:
(overcloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed
17.0 -p RHOS-17.0-RHEL-9-20220823.n.2


# Create a network and subnet for the load balancer
~~~
[stack@undercloud-0 ~]$ . overcloudrc

(overcloud) [stack@undercloud-0 ~]$ openstack network create my_net

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2022-08-25T07:17:24Z                 |
| description               |                                      |
| dns_domain                |                                      |
| id                        | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1442                                 |
| name                      | my_net                               |
| port_security_enabled     | True                                 |
| project_id                | b47e07429b8e47e8bd239086528e651e     |
| provider:network_type     | geneve                               |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 15649                                |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2022-08-25T07:17:25Z                 |
+---------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ openstack subnet create my_subnet --network my_net --subnet-range 127.16.0.0/24 --dns-nameserver 10.0.0.1 --no-dhcp

+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 127.16.0.2-127.16.0.254              |
| cidr                 | 127.16.0.0/24                        |
| created_at           | 2022-08-25T07:22:24Z                 |
| description          |                                      |
| dns_nameservers      | 10.0.0.1                             |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | False                                |
| gateway_ip           | 127.16.0.1                           |
| host_routes          |                                      |
| id                   | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | my_subnet                            |
| network_id           | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| prefix_length        | None                                 |
| project_id           | b47e07429b8e47e8bd239086528e651e     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        | None                                 |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2022-08-25T07:22:24Z                 |
+----------------------+--------------------------------------+
~~~

# Create the load balancer
~~~
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer create --name lb1 --vip-subnet-id my_subnet

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2022-08-25T07:59:08                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | b47e07429b8e47e8bd239086528e651e     |
| provider            | amphora                              |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 127.16.0.80                          |
| vip_network_id      | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id         | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags                |                                      |
+---------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2022-08-25T07:59:08                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | ONLINE                               |
| pools               |                                      |
| project_id          | b47e07429b8e47e8bd239086528e651e     |
| provider            | amphora                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2022-08-25T08:00:20                  |
| vip_address         | 127.16.0.80                          |
| vip_network_id      | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id         | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags                |                                      |
+---------------------+--------------------------------------+
~~~

# Deleting the load balancer
~~~
openstack loadbalancer delete lb1

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2022-08-25T07:59:08                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | ONLINE                               |
| pools               |                                      |
| project_id          | b47e07429b8e47e8bd239086528e651e     |
| provider            | amphora                              |
| provisioning_status | PENDING_DELETE                       |
| updated_at          | 2022-08-25T08:01:06                  |
| vip_address         | 127.16.0.80                          |
| vip_network_id      | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id         | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags                |                                      |
+---------------------+--------------------------------------+

A little later:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
Unable to locate lb1 in load balancers
~~~

# In worker.log file:
~~~
2022-08-25 08:01:06.073 20 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '58b92dbb-6243-4b5e-ac30-4fe7e6b85956'...
~~~

~~~
2022-08-25 08:01:14.809 20 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 execute /usr/lib/python3.9/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1121
~~~


# Checking status of the load balancer in the database

## ssh into the controller
~~~
[stack@undercloud-0 ~]$ ssh heat-admin@CONTROLLER-IP

[root@controller-2 octavia]# podman exec -it clustercheck /bin/bash

bash-5.1$ mariadb octavia
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 210265
Server version: 10.5.16-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [octavia]> select * from load_balancer where id='58b92dbb-6243-4b5e-ac30-4fe7e6b85956' \G;
*************************** 1. row ***************************
         project_id: b47e07429b8e47e8bd239086528e651e
                 id: 58b92dbb-6243-4b5e-ac30-4fe7e6b85956
               name: lb1
        description: NULL
provisioning_status: DELETED
   operating_status: ONLINE
            enabled: 1
           topology: SINGLE
    server_group_id: NULL
         created_at: 2022-08-25 07:59:08
         updated_at: 2022-08-25 08:01:14
           provider: amphora
          flavor_id: NULL
  availability_zone: NULL
1 row in set (0.001 sec)
~~~

Comment 32 errata-xmlrpc 2022-09-21 12:17:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:6543

