Bug 2001120
| Summary: | Octavia fails to delete Load Balancer, and sqlalchemy fails to mark Load Balancer to ERROR | |||
|---|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Robin Cernin <rcernin> | |
| Component: | openstack-octavia | Assignee: | Gregory Thiemonge <gthiemon> | |
| Status: | CLOSED ERRATA | QA Contact: | Nikolai Ilushko <nilushko> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | medium | |||
| Version: | 13.0 (Queens) | CC: | gthiemon, ihrachys, ldavidde, lpeer, majopela, mdemaced, mdulko, nilushko, scohen | |
| Target Milestone: | ga | Keywords: | Triaged | |
| Target Release: | 17.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | openstack-octavia-8.0.1-0.20211203161902.e4a0136.el8ost | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 2040691 (view as bug list) | Environment: | ||
| Last Closed: | 2022-09-21 12:17:06 UTC | Type: | Bug | |
| Bug Blocks: | 2040691, 2040697 | |||
|
Description
Robin Cernin
2021-09-03 20:44:39 UTC
Hi Robin, do you have more logs for this issue (full Octavia logs, or even sosreports)? Was the LB in ERROR before the DELETE request?

Yes, it was in ERROR state before the DELETE. I will reproduce the issue and attach the logs.

I have finally hit the issue where Kuryr had to retry deleting an LB that was in ERROR state:
[root@osp13-controller0 ~]# grep Deleting /var/log/containers/octavia/worker.log| cut -d "]" -f2-| sort | uniq -c | grep -v "^ 1"
2 Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
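The shell pipeline above (strip the timestamp/pid prefix, then count repeats) can also be expressed directly in Python; the sample log lines below are shortened stand-ins for the real worker.log entries:

```python
# Count "Deleting" log messages that occur more than once, ignoring everything
# up to the first ']' (mirrors `cut -d "]" -f2- | sort | uniq -c`).
from collections import Counter

log_lines = [
    "2021-09-06 21:12:57.178 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded'...",
    "2021-09-06 21:17:58.317 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded'...",
    "2021-09-06 21:12:40.032 26 INFO octavia.controller.queue.v1.endpoints [-] Creating load balancer '1649aded'...",
]

messages = [line.split("]", 1)[1] for line in log_lines if "Deleting" in line]
duplicates = {msg: n for msg, n in Counter(messages).items() if n > 1}

# A delete message delivered more than once indicates a retried delete.
assert len(duplicates) == 1
```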
Attaching the logs, with SQLAlchemy DEBUG enabled as well as Octavia DEBUG.
$ grep "Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'" worker.log
2021-09-06 21:12:57.178 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:17:58.317 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
$ grep 1649aded-36f4-43dd-bf16-8c61f2712096 worker.log
2021-09-06 21:12:40.032 26 INFO octavia.controller.queue.v1.endpoints [-] Creating load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:12:42.158 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Get load balancer from DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:393
2021-09-06 21:12:44.680 26 DEBUG octavia.controller.worker.v1.tasks.network_tasks [-] Setup SG for loadbalancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py:370
2021-09-06 21:12:46.006 26 DEBUG octavia.controller.worker.v1.tasks.network_tasks [-] Getting subnet for LB: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py:385
2021-09-06 21:12:47.787 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Allocating an Amphora for load balancer with id 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:514
2021-09-06 21:12:47.806 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] No Amphora available for load balancer with id 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:540
|__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': u'59913b5d-1ca0-40c1-a74b-478dc52e1cb1'}
|__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_group_id': None, 'flavor': {}, 'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': None}
|__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': u'edaeff5d-fdd2-4c5b-8b94-76ba1ac81886'}
|__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': <octavia.common.data_models.Vip object at 0x7fcd81452350>, 'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': <octavia.common.data_models.LoadBalancer object at 0x7fcd69d447d0>}
|__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': <octavia.common.data_models.LoadBalancer object at 0x7fcd68280550>}
|__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'1649aded-36f4-43dd-bf16-8c61f2712096'}, 'provides': None}
2021-09-06 21:12:56.476 26 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer 1649aded-36f4-43dd-bf16-8c61f2712096 in the database.
2021-09-06 21:12:57.178 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:13:03.551 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
2021-09-06 21:17:58.317 26 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '1649aded-36f4-43dd-bf16-8c61f2712096'...
2021-09-06 21:18:10.154 26 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 1649aded-36f4-43dd-bf16-8c61f2712096 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
I reproduced it in my env on OSP13:
One LB is still in ERROR in the DB after a successful DELETE call:
MariaDB [octavia]> select * from load_balancer where id = '9ee84739-d42e-4ee3-9d24-7712fc027fa0'\G
*************************** 1. row ***************************
project_id: 77f8afea8d4e46f99574157e5b12b396
id: 9ee84739-d42e-4ee3-9d24-7712fc027fa0
name: lb-138
description: NULL
provisioning_status: ERROR
operating_status: OFFLINE
enabled: 1
topology: SINGLE
server_group_id: NULL
created_at: 2021-10-27 19:33:24
updated_at: 2021-10-27 19:34:04
provider: amphora
flavor_id: NULL
1 row in set (0.00 sec)
2021-10-27 19:33:27.066 25 INFO octavia.controller.queue.v1.endpoints [-] Creating load balancer '9ee84739-d42e-4ee3-9d24-7712fc027fa0'...
[..]
2021-10-27 19:33:27.525 25 DEBUG octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (3c0c9cae-fcd7-47ff-ac22-43a8a4aff3d4) transitioned into state 'RUNNING' from state 'PENDING' _flow_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:145
Load balancer creation fails because of a quota issue in Nova.
The MapLoadbalancerToAmphora task is reverted, and in its revert function the provisioning status of the load balancer is set to ERROR:
2021-10-27 19:33:48.431 25 WARNING octavia.controller.worker.v1.tasks.database_tasks [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] Reverting Amphora allocation for the load balancer 9ee84739-d42e-4ee3-9d24-7712fc027fa0 in the database.
[..]
2021-10-27 19:33:48.444 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:33:48.446 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:33:48.449 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 33, 48, 446711), 'provisioning_status': 'ERROR'}
2021-10-27 19:33:48.455 25 INFO sqlalchemy.engine.base.Engine [req-efa7075b-6502-40cf-be03-579ce79f4406 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT
Now that the provisioning status is no longer a PENDING_* state, the load balancer can be deleted:
In the api:
2021-10-27 19:33:58.869 21 INFO octavia.api.v2.controllers.load_balancer [req-70c482a0-b85f-41e4-bdd8-2c7f8468413b - 77f8afea8d4e46f99574157e5b12b396 - default default] Sending delete Load Balancer 9ee84739-d42e-4ee3-9d24-7712fc027fa0 to provider amphora
In the worker:
2021-10-27 19:33:58.916 25 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '9ee84739-d42e-4ee3-9d24-7712fc027fa0'...
[..]
2021-10-27 19:34:02.698 25 DEBUG octavia.controller.worker.v1.tasks.database_tasks [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] Mark DELETED in DB for load balancer id: 9ee84739-d42e-4ee3-9d24-7712fc027fa0 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1125
[..]
2021-10-27 19:34:02.705 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:34:02.707 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:34:02.708 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 34, 2, 707414), 'provisioning_status': 'DELETED'}
2021-10-27 19:34:02.716 25 INFO sqlalchemy.engine.base.Engine [req-4e5bb49e-fae6-427d-9fa7-a04160795dd3 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT
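The precondition that let this DELETE through (mutating operations are rejected while the load balancer is in a transient PENDING_* state, but ERROR is accepted) can be sketched as follows; the function name and the state set are illustrative, not Octavia's actual API code:

```python
# Hypothetical sketch of the immutability check: a load balancer in a
# transient PENDING_* state rejects operations, while terminal states
# such as ERROR or ACTIVE allow a delete.
PENDING_STATES = {"PENDING_CREATE", "PENDING_UPDATE", "PENDING_DELETE"}

def delete_allowed(provisioning_status: str) -> bool:
    return provisioning_status not in PENDING_STATES

assert delete_allowed("ERROR")              # ERROR is terminal: delete accepted
assert not delete_allowed("PENDING_CREATE")  # transient: delete rejected
```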
But the octavia-create-loadbalancer-flow has not finished yet; it is still running on another controller.
The LoadBalancerIDToErrorOnRevertTask is reverted and sets the provisioning_status to ERROR:
2021-10-27 19:34:03.028 25 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (a1796677-5426-4945-aa7c-a745b0aa6e61) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194
2021-10-27 19:34:04.130 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] BEGIN (implicit)
2021-10-27 19:34:04.132 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] UPDATE load_balancer SET updated_at=%(updated_at)s, provisioning_status=%(provisioning_status)s WHERE load_balancer.id = %(id_1)s
2021-10-27 19:34:04.133 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] {u'id_1': u'9ee84739-d42e-4ee3-9d24-7712fc027fa0', 'updated_at': datetime.datetime(2021, 10, 27, 19, 34, 4, 132545), 'provisioning_status': 'ERROR'}
2021-10-27 19:34:04.135 25 INFO sqlalchemy.engine.base.Engine [req-f547a555-8595-4672-8331-d59602206133 - 77f8afea8d4e46f99574157e5b12b396 - - -] COMMIT
[..]
2021-10-27 19:34:04.173 25 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (a1796677-5426-4945-aa7c-a745b0aa6e61) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
[..]
2021-10-27 19:34:04.246 25 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (3c0c9cae-fcd7-47ff-ac22-43a8a4aff3d4) transitioned into state 'REVERTED' from state 'RUNNING'
The issue is that the MapLoadbalancerToAmphora task shouldn't have set the provisioning_status to ERROR on revert; only the first task of the flow should set this status.
Note that the MapLoadbalancerToAmphora task was removed in Xena, but earlier releases are probably affected by this issue.
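The analysis above can be condensed into a runnable sketch; everything below is an illustrative stand-in, not Octavia's actual tasks or API code. The buggy revert exposes an ERROR status partway through the revert, opening a window for a DELETE that the flow's final revert then clobbers; the fix defers the ERROR transition to the very end of the revert:

```python
# Hypothetical simulation of the race; names are illustrative.
db = {}  # stands in for the load_balancer table: id -> provisioning_status

def api_delete(lb_id):
    """The API rejects a delete while the LB is in a PENDING_* state."""
    if db[lb_id].startswith("PENDING_"):
        raise RuntimeError("load balancer is immutable")
    db[lb_id] = "DELETED"

def buggy_create_revert(lb_id):
    # An intermediate task (MapLoadbalancerToAmphora-style) sets ERROR
    # mid-revert...
    db[lb_id] = "ERROR"
    yield  # ...window in which the API accepts a DELETE...
    # ...then the flow's first task reverts last and sets ERROR again,
    # clobbering whatever happened in between.
    db[lb_id] = "ERROR"

def fixed_create_revert(lb_id):
    # Fixed: only the first task of the flow sets ERROR, once the whole
    # revert has unwound.
    yield
    db[lb_id] = "ERROR"

# Buggy ordering: a delete slips into the revert window and is overwritten.
db["lb"] = "PENDING_CREATE"
revert = buggy_create_revert("lb")
next(revert)            # revert starts: status is now ERROR
api_delete("lb")        # accepted, status becomes DELETED
try:
    next(revert)        # late revert overwrites DELETED with ERROR
except StopIteration:
    pass
assert db["lb"] == "ERROR"  # deleted LB is stuck in ERROR

# Fixed ordering: the LB stays PENDING_CREATE until the revert completes,
# so the concurrent delete is rejected; afterwards it succeeds cleanly.
db["lb"] = "PENDING_CREATE"
revert = fixed_create_revert("lb")
next(revert)
try:
    api_delete("lb")
except RuntimeError:
    pass                # delete rejected mid-revert
try:
    next(revert)
except StopIteration:
    pass
api_delete("lb")
assert db["lb"] == "DELETED"
```

With the fix, the API's PENDING_* immutability check keeps the delete out until the create flow has fully unwound, so a DELETED status can no longer be overwritten by a straggling revert.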
Please note that the proposed patch fixes all possible similar behaviors on master, but branches from stable/train to stable/wallaby will require another patch for code that was removed during the Xena release cycle.

Verified by running the following commands:
# Puddle version:
(overcloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed
17.0 -p RHOS-17.0-RHEL-9-20220823.n.2
# Create a network and subnet for the load balancer
~~~
[stack@undercloud-0 ~]$ . overcloudrc
(overcloud) [stack@undercloud-0 ~]$ openstack network create my_net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-08-25T07:17:24Z |
| description | |
| dns_domain | |
| id | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1442 |
| name | my_net |
| port_security_enabled | True |
| project_id | b47e07429b8e47e8bd239086528e651e |
| provider:network_type | geneve |
| provider:physical_network | None |
| provider:segmentation_id | 15649 |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2022-08-25T07:17:25Z |
+---------------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack subnet create my_subnet --network my_net --subnet-range 127.16.0.0/24 --dns-nameserver 10.0.0.1 --no-dhcp
+----------------------+--------------------------------------+
| Field | Value |
+----------------------+--------------------------------------+
| allocation_pools | 127.16.0.2-127.16.0.254 |
| cidr | 127.16.0.0/24 |
| created_at | 2022-08-25T07:22:24Z |
| description | |
| dns_nameservers | 10.0.0.1 |
| dns_publish_fixed_ip | None |
| enable_dhcp | False |
| gateway_ip | 127.16.0.1 |
| host_routes | |
| id | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | my_subnet |
| network_id | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| prefix_length | None |
| project_id | b47e07429b8e47e8bd239086528e651e |
| revision_number | 0 |
| segment_id | None |
| service_types | None |
| subnetpool_id | None |
| tags | |
| updated_at | 2022-08-25T07:22:24Z |
+----------------------+--------------------------------------+
~~~
# Create the load balancer
~~~
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer create --name lb1 --vip-subnet-id my_subnet
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | None |
| created_at | 2022-08-25T07:59:08 |
| description | |
| flavor_id | None |
| id | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners | |
| name | lb1 |
| operating_status | OFFLINE |
| pools | |
| project_id | b47e07429b8e47e8bd239086528e651e |
| provider | amphora |
| provisioning_status | PENDING_CREATE |
| updated_at | None |
| vip_address | 127.16.0.80 |
| vip_network_id | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id | None |
| vip_subnet_id | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags | |
+---------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | None |
| created_at | 2022-08-25T07:59:08 |
| description | |
| flavor_id | None |
| id | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners | |
| name | lb1 |
| operating_status | ONLINE |
| pools | |
| project_id | b47e07429b8e47e8bd239086528e651e |
| provider | amphora |
| provisioning_status | ACTIVE |
| updated_at | 2022-08-25T08:00:20 |
| vip_address | 127.16.0.80 |
| vip_network_id | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id | None |
| vip_subnet_id | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags | |
+---------------------+--------------------------------------+
~~~
# Delete the load balancer
~~~
openstack loadbalancer delete lb1
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | None |
| created_at | 2022-08-25T07:59:08 |
| description | |
| flavor_id | None |
| id | 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 |
| listeners | |
| name | lb1 |
| operating_status | ONLINE |
| pools | |
| project_id | b47e07429b8e47e8bd239086528e651e |
| provider | amphora |
| provisioning_status | PENDING_DELETE |
| updated_at | 2022-08-25T08:01:06 |
| vip_address | 127.16.0.80 |
| vip_network_id | 9eb8000b-ff2b-438f-bbda-6a233ab2f987 |
| vip_port_id | 750ca90c-0d94-4bbc-9fc0-aa568eb4d2c3 |
| vip_qos_policy_id | None |
| vip_subnet_id | 4c1c15c2-ec80-449d-87aa-e125257b7428 |
| tags | |
+---------------------+--------------------------------------+
A little later:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
Unable to locate lb1 in load balancers
~~~
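Waiting for the load balancer to disappear, as done by hand above ("A little later"), can also be scripted; this is a hedged sketch with a stubbed status source injected as a callable, not the real OpenStack client API:

```python
import time

def wait_for_deletion(get_status, timeout=60.0, interval=1.0):
    """Poll get_status() until it reports the LB gone (returns None).

    get_status is any callable returning the current provisioning status,
    or None once the load balancer no longer exists.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() is None:
            return True
        time.sleep(interval)
    return False

# Stubbed status source: PENDING_DELETE twice, then the LB is gone.
statuses = iter(["PENDING_DELETE", "PENDING_DELETE", None])
assert wait_for_deletion(lambda: next(statuses), timeout=5, interval=0)
```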
# In worker.log file:
~~~
2022-08-25 08:01:06.073 20 INFO octavia.controller.queue.v1.endpoints [-] Deleting load balancer '58b92dbb-6243-4b5e-ac30-4fe7e6b85956'...
~~~
~~~
2022-08-25 08:01:14.809 20 DEBUG octavia.controller.worker.v1.tasks.database_tasks [-] Mark DELETED in DB for load balancer id: 58b92dbb-6243-4b5e-ac30-4fe7e6b85956 execute /usr/lib/python3.9/site-packages/octavia/controller/worker/v1/tasks/database_tasks.py:1121
~~~
# Checking status of the load balancer in the database
## ssh into the controller
~~~
[stack@undercloud-0 ~]$ ssh heat-admin@CONTROLLER-IP
[root@controller-2 octavia]# podman exec -it clustercheck /bin/bash
bash-5.1$ mariadb octavia
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 210265
Server version: 10.5.16-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [octavia]> select * from load_balancer where id='58b92dbb-6243-4b5e-ac30-4fe7e6b85956' \G;
*************************** 1. row ***************************
project_id: b47e07429b8e47e8bd239086528e651e
id: 58b92dbb-6243-4b5e-ac30-4fe7e6b85956
name: lb1
description: NULL
provisioning_status: DELETED
operating_status: ONLINE
enabled: 1
topology: SINGLE
server_group_id: NULL
created_at: 2022-08-25 07:59:08
updated_at: 2022-08-25 08:01:14
provider: amphora
flavor_id: NULL
availability_zone: NULL
1 row in set (0.001 sec)
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:6543