Bug 1937392 - Load Balancer not reachable from some Subnets
Summary: Load Balancer not reachable from some Subnets
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z7
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: ffernand
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks: 1966644 1972271
 
Reported: 2021-03-10 14:43 UTC by Maysa Macedo
Modified: 2022-03-30 14:29 UTC
CC: 21 users

Fixed In Version: python-networking-ovn-7.3.1-1.20210624183315.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1966644 1972271
Environment:
Last Closed: 2021-12-09 20:18:11 UTC
Target Upstream Version:
Embargoed:
mdemaced: needinfo-


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1931639 0 None None None 2021-06-15 01:17:52 UTC
OpenStack gerrit 796169 0 None MERGED Ensure that load balancer is added to logical switch 2022-06-22 14:09:57 UTC
Red Hat Issue Tracker OSP-547 0 None None None 2021-11-18 11:29:56 UTC
Red Hat Product Errata RHBA-2021:3762 0 None None None 2021-12-09 20:18:44 UTC

Description Maysa Macedo 2021-03-10 14:43:03 UTC
Created attachment 1762335 [details]
Copy of OVN db

Description of problem:

During an OCP installation on top of OSP with the ovn-octavia provider configured, a Load Balancer is created for the DNS Service with both TCP and UDP listeners. We noticed that Pods (sub-ports) created in the openshift-console Namespace are not able to reach the DNS load balancer over UDP and consequently cannot resolve DNS. When new Namespaces are created, the Pods in those new Namespaces are able to reach the DNS load balancer over UDP normally.

Note that Kuryr creates one Network and Subnet per Namespace and connects it to the router.

Here are the details of the issue:

(shiftstack) [stack@undercloud-0 ~]$ oc get po -n openshift-console
NAME                         READY   STATUS             RESTARTS   AGE
console-5f8f9fbfff-kfddb     0/1     CrashLoopBackOff   447        43h
console-5f8f9fbfff-lgw24     0/1     Running            458        44h
console-65df7896fc-pq7ds     0/1     CrashLoopBackOff   457        44h
demo-6cb99dfd4d-d7mng        1/1     Running            0          38h
dnsutils                     1/1     Running            38         38h
downloads-85c899575f-9t49j   1/1     Running            0          44h
downloads-85c899575f-b4p2z   1/1     Running            0          44h
(shiftstack) [stack@undercloud-0 ~]$ oc logs -f console-5f8f9fbfff-kfddb -n openshift-console
W0310 12:39:36.087517       1 main.go:211] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!                                                                      
I0310 12:39:36.087889       1 main.go:288] cookies are secure!
E0310 12:39:41.127946       1 auth.go:235] error contacting auth provider (retrying in 10s): Get "https://kubernetes.default.svc/.well-known/oauth-authorization-server": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0310 12:39:56.129001       1 auth.go:235] error contacting auth provider (retrying in 10s): Get "https://kubernetes.default.svc/.well-known/oauth-authorization-server": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0310 12:40:11.130221       1 auth.go:235] error contacting auth provider (retrying in 10s): Get "https://kubernetes.default.svc/.well-known/oauth-authorization-server": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0310 12:40:26.132323       1 auth.go:235] error contacting auth provider (retrying in 10s): Get "https://kubernetes.default.svc/.well-known/oauth-authorization-server": context deadline exceeded (Client.Timeout exceeded while 

(shiftstack) [stack@undercloud-0 ~]$ oc rsh -n openshift-console console-5f8f9fbfff-lgw24
sh-4.4$ dig +tcp @172.30.0.10 kubernetes.default.svc

; <<>> DiG 9.11.20-RedHat-9.11.20-5.el8 <<>> +tcp @172.30.0.10 kubernetes.default.svc
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 33234
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 65d129c52685ce2c3f28ff8d6048bf2959720cc7b024c626 (good)
;; QUESTION SECTION:
;kubernetes.default.svc.                IN      A

;; AUTHORITY SECTION:
.                       30      IN      SOA     a.root-servers.net. nstld.verisign-grs.com. 2021031000 1800 900 604800 86400                                                                 

;; Query time: 7 msec
;; SERVER: 172.30.0.10#53(172.30.0.10)
;; WHEN: Wed Mar 10 12:44:25 UTC 2021
;; MSG SIZE  rcvd: 154

sh-4.4$ dig @172.30.0.10 kubernetes.default.svc                                                                                                                                              

; <<>> DiG 9.11.20-RedHat-9.11.20-5.el8 <<>> @172.30.0.10 kubernetes.default.svc
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached


(shiftstack) [stack@undercloud-0 ~]$ oc rsh -n openshift-console demo-6cb99dfd4d-d7mng
/home/kuryr $ nc -vzu 172.30.0.10 53
172.30.0.10 (172.30.0.10:53) open

Version-Release number of selected component (if applicable):
OCP:     Image:    quay.io/openshift-release-dev/ocp-release@sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70                                                              
    URL:      https://access.redhat.com/errata/RHSA-2020:5633
    Version:  4.7.0
OSP:
Red Hat OpenStack Platform release 16.1.3 GA (Train)

OVN:
()[root@controller-0 /]# rpm -q ovn2.13
ovn2.13-20.09.0-17.el8fdp.x86_64

Additional info:

When the Service and Load Balancer are recreated, UDP DNS resolution works fine again.

Comment 1 Maysa Macedo 2021-03-10 14:46:39 UTC
Created attachment 1762336 [details]
Must gather with kuryr details (networks, loadbalancers, ports)

Comment 2 Maysa Macedo 2021-03-10 15:10:30 UTC
These three pods (ports with IPs 10.128.121.34, 10.128.120.3, 10.128.120.245) are the ones calling the DNS LB (172.30.0.10 on port 53 with UDP). Note that TCP connections from those ports to 172.30.0.10:53 work fine.

openshift-console    console-5f8f9fbfff-kfddb        0/1     Running            446        43h   10.128.121.34    ostest-vk8h8-master-1 
openshift-console    console-5f8f9fbfff-lgw24        0/1     CrashLoopBackOff   457        44h   10.128.120.3     ostest-vk8h8-master-0
openshift-console    console-65df7896fc-pq7ds        0/1     Running            457        44h   10.128.120.245   ostest-vk8h8-master-0

Comment 3 Daniel Alvarez Sanchez 2021-03-10 15:14:08 UTC
IIUC, 172.30.0.10 is the VIP of an OVN LB (TCP and UDP); from some pods it works and from some others it won't, but even from some of the pods where it works, it only works over TCP? E.g.:



Working:

- From console-5f8f9fbfff-lgw24 TCP
- From demo-6cb99dfd4d-d7mng UDP


Non working:

- From console-5f8f9fbfff-lgw24 UDP


Is this right?
Can you share which subnets those pods are on? Do you see a pattern? I.e., "all TCP connections from this subnet work and all these others won't".

Also, Luis had some interesting traces that could be useful to the core OVN team. Is it OK to attach them?


Last question: any chance you can try with ovn2.13-20.12.0-24.el8fdp?

Comment 5 Maysa Macedo 2021-03-10 15:39:34 UTC
Details about the openshift-console Namespace and the router it is linked to:

netId: b81136c3-2805-4d71-ad86-ce62b0b0e0dd
routerId: 67c4273d-b591-48bf-97d6-1d9309183268
subnetCIDR: 10.128.120.0/23
subnetId: 707f24f9-e4cd-46b5-8506-4e9044f0caa4

None of the ports created in the openshift-console namespace/subnet are able to connect to the LB 172.30.0.10:53 with UDP, but with TCP it works.

We can try with ovn2.13-20.12.0-24.el8fdp; we just need to check how to update it. Note that this UDP issue is not something we always hit.

Comment 8 ffernand 2021-03-24 14:57:44 UTC
With Numan's help I was able to reproduce the bug in a virtual cluster that had the attached db loaded.
The UDP packets in question are not getting into the load balancer datapath because the load balancer
was not added to the logical switch; it was only added to the logical router. So the packet enters the
router pipeline, and since the router pipeline doesn't know the MAC of 172.30..., ovn-controller generates
an ARP request for it.

In this particular case, this was the missing part:

# LB=$(ovn-nbctl --bare --columns=_uuid find load_balancer protocol="udp"); echo $LB
f166c378-d6cf-4490-81ff-634ea00b7ce6

# ovn-nbctl show | grep 10.128.120.162 -B3
    port 98e11dc2-34c0-4aca-81ca-8cfccd569027                 <----- LSP
        parent: a03fcdb2-12a3-4e29-a751-19b0a2381a77
        tag: 238
        addresses: ["fa:16:3e:8d:e7:b6 10.128.120.162"]
# ovn-nbctl --bare --columns=external_ids list logical_switch_port 98e11dc2-34c0-4aca-81ca-8cfccd569027 | grep -oP "network_name=([^ ]+)"
network_name=neutron-b81136c3-2805-4d71-ad86-ce62b0b0e0dd   <---- LS

# LS='neutron-b81136c3-2805-4d71-ad86-ce62b0b0e0dd'
# ovn-nbctl  --bare --columns load_balancer list  logical_switch $LS | grep -oP $LB | grep -c .
0 <== not there! :/  BUG!!!! DANGER DANGER DANGER!!! :)

#  ovn-nbctl ls-lb-add $LS $LB  ;  #  add LB to LS
# ovn-nbctl  --bare --columns load_balancer list  logical_switch $LS | grep -oP $LB
f166c378-d6cf-4490-81ff-634ea00b7ce6 <== yay!!!


# Checking LS2
[root@ffovnh ~]# ovn-nbctl show | grep 10.128.115.160 -B3
    port a50380a8-06f7-41e5-b500-8816fbac3a2c
        parent: de01ce75-dc3e-4279-9df5-227e7b0bddc3                 <----- LSP2
        tag: 3186
        addresses: ["fa:16:3e:f4:03:9d 10.128.115.160"]
# ovn-nbctl --bare --columns=external_ids list logical_switch_port de01ce75-dc3e-4279-9df5-227e7b0bddc3 | grep -oP "network_name=([^ ]+)"
network_name=neutron-d9e34c38-5a48-41dc-b433-226fe8d262a7
# LS2='neutron-d9e34c38-5a48-41dc-b433-226fe8d262a7'
# ovn-nbctl  --bare --columns load_balancer list  logical_switch $LS2 | grep -oP $LB   
f166c378-d6cf-4490-81ff-634ea00b7ce6   <== good, the LB was already on that LS


This is possibly a bug in Kuryr or in the "octavia ovn" driver. I will update the fields
in the BZ to properly reflect that.
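
For reference, the same check can be scripted with ovsdbapp, the Python library the driver itself uses. A minimal sketch, assuming the standard NB DB socket path and reusing the UDP LB UUID found above (both are illustrative assumptions, not part of this BZ's reproducer):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_northbound import impl_idl

OVN_NB = 'unix:/run/ovn/ovnnb_db.sock'            # assumption: adjust per deployment
LB_UUID = 'f166c378-d6cf-4490-81ff-634ea00b7ce6'  # the UDP LB found above

idl = connection.OvsdbIdl.from_server(OVN_NB, 'OVN_Northbound')
nb = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl, timeout=10))

# Flag every logical switch that does not reference the LB; only the
# switches attached to the LB's router are actually expected to carry it.
for ls in nb.ls_list().execute(check_error=True):
    lbs = {str(lb.uuid) for lb in nb.ls_lb_list(ls.name).execute(check_error=True)}
    if LB_UUID not in lbs:
        print('LB missing on %s' % ls.name)
        # Equivalent of the manual "ovn-nbctl ls-lb-add" fix above:
        # nb.ls_lb_add(ls.name, LB_UUID, may_exist=True).execute(check_error=True)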

Comment 9 ffernand 2021-03-24 15:01:55 UTC
We need to check where/why the load balancer is not getting added to the logical switch, thus causing
packets to end up in the (wrong) logical router datapath.

Comment 10 Maysa Macedo 2021-03-24 15:40:09 UTC
Really nice catch Flavio and Numan! If anything extra is needed let me know.

Comment 24 ffernand 2021-05-21 10:37:30 UTC
Hi Maysa,

Thank you for letting us access your setup and put the extra logging in place.
Here is what we think is happening: as you know, there is a routine called
LsLbAddCommand, which adds the load balancer to a Neutron network (aka a logical switch in OVN).

The list of logical switches that get LsLbAddCommand is currently obtained by looking at the
logical router, in an ovn_octavia_provider function called _find_ls_for_lr.

For some reason, at the time _find_ls_for_lr is called the logical switch is simply
not listed, and the code proceeds without ever adding it to the load balancer. See the
attached log for more details (timestamp 2021-05-20 19:26:56.581).

However, later on we see another call for the same logical router where the wanted network is
part of the router (log at timestamp 2021-05-20 19:32:26.521), but unfortunately that is already
too late and the call to LsLbAddCommand never happens.

Question for you: do you know why the network is not part of the router sooner? If you have any
control over that, I think you can work around the issue by making sure the LS is in the router
before making the LB active.

If that is not something you can control, we will need to improve the code to call
LsLbAddCommand when there are changes on the logical router. It seems the code was just not
designed to handle that right now.
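
To make that flow concrete, here is a heavily simplified, illustrative sketch of the association step (not the shipped code; the real helper is _find_ls_for_lr in ovn_octavia_provider/helper.py, see the links in comment 31, and the external_ids key used here is an assumption drawn from that comment):

def find_ls_for_lr(router):
    # `router` stands for an OVN NB Logical_Router row; each entry in
    # router.ports is a Logical_Router_Port carrying Neutron metadata
    # in its external_ids. The network name maps 1:1 to a logical
    # switch name.
    switches = []
    for port in router.ports:
        name = port.external_ids.get('neutron:network_name')
        if name:
            switches.append(name)
    return switches

# The driver then issues one LsLbAddCommand per returned switch. Any
# network attached to the router *after* this runs is never revisited,
# which is exactly the gap seen at timestamp 2021-05-20 19:26:56.581.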

----


2021-05-20 19:26:56.160 19 DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: {'pools': [{'id': 'e48c8699-1535-43af-95b9-e637c41bf96b', 'provisioning_status': 'ACTIVE', 'operating_status': 'OFFLINE'}], 'members': [{'id': 'fef1a7b0-4e0a-46c5-945e-a0f13c33ac1f', 'provisioning_status': 'DELETED'}], 'loadbalancers': [{'id': '31318347-285f-48ac-a83b-6dcf5d9ee8f0', 'provisioning_status': 'ACTIVE'}], 'listeners': [{'id': '78ff134d-830d-43e2-89d0-ffeb8a935fa3', 'provisioning_status': 'ACTIVE'}]} _update_status_to_octavia /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:333



LB:   name=     ovn_uuid=256ed81a-bcce-4c4d-a0c1-38da18c930ce
LS:   name=neutron-4f893a30-7280-4e54-b20d-c54b37c1bca7

2021-05-20 19:26:56.581 16 DEBUG ovn_octavia_provider.helper [-] XXX router: neutron-54bbb387-e37f-4371-a0fc-dc8f0c9d323e  ls: ['neutron-c438b2df-e7a0-4de1-89ef-f1b9e2abfba4', 'neutron-39ebaffd-1260-45ee-a1bc-c1b9b9c73c19', 'neutron-7643109a-fea1-4e51-ae36-fd016b18c9ca', 'neutron-4ba5835a-cef7-47c4-937f-7216e00e8561', 'neutron-7be86f13-b6ee-478f-998a-60b4cf0c82ec', 'neutron-54f58dc0-ade1-4c50-94ff-88194a6d59b0', 'neutron-877b6c1d-7a44-4c4d-b8d1-0c6b37e4a5e5', 'neutron-82b086d5-97ae-43b2-a0cd-89ca652df38f', 'neutron-1cf72310-073d-4826-ac6a-d60a06c46d4a', 'neutron-e0307495-e1d4-42f7-be1b-c1a7a94588ab', 'neutron-8fee427b-c299-4d82-9c31-c85ab34e977e', 'neutron-cae1b0dd-6c3c-4ba9-9627-3410621ee1ed', 'neutron-59615366-797a-469f-ac09-ab434f62a566', 'neutron-1341ffce-61c6-4514-b368-f88546366954', 'neutron-eba9deda-e802-4a31-8f2d-bdb05aef786e', 'neutron-4b70147f-1aac-4a0c-9f55-4f3eb417ec00', 'neutron-dabbf0ee-a223-4521-a587-065d124a74b5', 'neutron-8b8b6ddb-98f2-476a-8e2d-2bd4057f5809', 'neutron-07d88327-c0f9-43b2-9909-dfeab1ed1a41', 'neutron-2eefd764-abd2-4c20-add1-21a1464a8174', 'neutron-9c1c5406-56f6-4250-a02a-3f9807355fe4', 'neutron-cd56eb53-0cc2-44fc-b9ea-f9cad2e91949', 'neutron-121a7e38-5500-4d9e-b418-c78913076dad', 'neutron-a7ff0208-7b25-428c-83bd-c9e379c0470a', 'neutron-ed01acf7-7ef0-4792-a05c-e9e542136697', 'neutron-a0f7f15e-2271-4c24-9210-abcf8566b852', 'neutron-ff870adc-264c-4f43-aec1-e7248259338d', 'neutron-aac9a80a-7632-4432-a395-0b578116a4bc', 'neutron-fcc5c488-108b-42da-b3a2-e2233b088185', 'neutron-20ae9904-624a-4bb7-adc7-df9fb8a5dc17', 'neutron-41678a81-2124-4d53-acdf-85aa1d4a629d', 'neutron-058eb5b2-6a17-4fa0-8d85-5ac7538e1cb8', 'neutron-c7f58fe2-ecaa-44d5-b4bb-7643751f497d', 'neutron-2868b948-1570-4eb1-839c-05cc5fdcf547', 'neutron-1a3062c8-1494-4d0c-8d1f-1dc76b5c1143', 'neutron-20fad58a-6119-4374-a9b7-4435c4bb2d70', 'neutron-c13e2d3c-6137-48ce-ba32-89759042db97', 'neutron-20766782-aac4-43f8-94f7-a670cf9aef54', 'neutron-26d2b3f5-7251-41d8-8d96-a6a03bbf9e19', 'neutron-bcc3fbf5-e55f-4da4-94f1-72e016db5050', 'neutron-f536c671-d9b1-4cfd-9540-ac71a11477c6', 'neutron-99932a0c-06fd-465f-b880-865934ae4990', 'neutron-e376f0ed-9bf9-4db9-9597-3b8f7e66e528', 'neutron-29adfd6e-f655-4dd1-8f34-1702ef00b081', 'neutron-a98597a0-af68-456d-b009-14e32d167303', 'neutron-b7684f84-a14b-44ff-9bbc-8daf8308fd51', 'neutron-afc9be42-0bd8-4e94-be3c-78ddb3d0d721', 'neutron-cad6f6d0-187f-499c-aa52-41ee85bb90b8', 'neutron-1a30f927-1830-490a-9bda-d43fb5da21f1', 'neutron-a7039537-5af4-4237-98d2-577508803127', 'neutron-47bf1fd9-9ec3-4d2c-b5bc-b70771adade5', 'neutron-57b7e69e-c6ee-40f0-b14e-f61771356655', 'neutron-dbeb176a-d30a-4b0b-8531-ff613b41d2c4', 'neutron-67856fd9-85fd-47f9-8297-84e64e29b6d2', 'neutron-9deebc4c-b440-415d-bdfc-b1796fbf2cb9', 'neutron-c8da2f97-dd23-49ab-9372-c70e5b8a4eaa'] _find_ls_for_lr /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:631

2021-05-20 19:26:56.582 16 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): LsLbAddCommand(switch=8f489ed1-5fd5-4118-bb2e-aed9d5e83f46, lb=256ed81a-bcce-4c4d-a0c1-38da18c930ce, may_exist=True) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
2021-05-20 19:26:56.582 16 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(table=Load_Balancer, record=256ed81a-bcce-4c4d-a0c1-38da18c930ce, col_values=(('external_ids', {'ls_refs': '{"neutron-1a30f927-1830-490a-9bda-d43fb5da21f1": 1}'}),)) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
2021-05-20 19:26:56.583 16 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): LrLbAddCommand(router=cbc9ca2a-845e-42c7-8567-773c178c308d, lb=256ed81a-bcce-4c4d-a0c1-38da18c930ce, may_exist=True) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
2021-05-20 19:26:56.583 16 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=3): LsLbAddCommand(switch=neutron-c438b2df-e7a0-4de1-89ef-f1b9e2abfba4, lb=256ed81a-bcce-4c4d-a0c1-38da18c930ce, may_exist=True) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89

...

Comment 26 Maysa Macedo 2021-05-21 11:39:28 UTC
Hello Flavio,

Thanks for analyzing this issue.

The Network is not part of the Router sooner because that is how the OpenShift installation works. As new OpenShift Namespaces get created while the installation progresses, new
Neutron Networks are added to the Router; consequently, we would expect existing Load Balancers to also be reachable from those new Networks, as all of them are connected through the same Router.

We have no control over adding the Network to the Router sooner, because some OpenShift Namespaces are created only at the end of the OpenShift installation process.

Comment 28 Maysa Macedo 2021-05-21 11:44:16 UTC
Was this mechanism always present in the ovn driver?

Comment 29 Daniel Alvarez Sanchez 2021-05-21 13:04:37 UTC
(In reply to ffernand from comment #24)

> If that is not something you can control, we will need to improve the code
> to call LsLbAddCommand when there are changes on the logical router. It
> seems the code was just not designed to handle that right now.
> 

Would a one-time event waiting for the LS to be added to the router help here? Seems like something the driver should handle?

Comment 31 ffernand 2021-05-24 21:58:09 UTC
(In reply to Daniel Alvarez Sanchez from comment #29)
> (In reply to ffernand from comment #24)
> 
> > If that is not something you can control, we will need to improve the code
> > to call LsLbAddCommand when there are changes on the logical router. It
> > seems the code was just not designed to handle that right now.
> 
> Would a one-time event waiting for the LS to be added to the router help
> here? Seems like something the driver should handle?

Yes, I think the driver needs to handle these cases.

The _find_ls_for_lr function [1] relies on "neutron:subnet_ids" being present in the external_ids of the lrp.
Interestingly, it uses that to look up the network from Neutron, which seems like a lot of work, since
the external_ids also contain "neutron:network_name".

Also, the event that percolates into _find_ls_for_lr is triggered only by ROW_CREATE or ROW_DELETE [2] of the
lrp (not the ls). What could be happening is that an UPDATE took place afterwards; in this case two updates,
based on the revision number ("neutron:revision_number"="3"). Had the external_ids needed that update,
_find_ls_for_lr would never have acted upon it.

I will add some more instrumentation and ask Maysa for another round.

Maysa> Was this mechanism always present in the ovn driver?

Yes, this code has not changed for a very long time.

[1]: https://github.com/openstack/ovn-octavia-provider/blob/49dbf521d6573a7a71060303512afd3397fdfda7/ovn_octavia_provider/helper.py#L598
[2]: https://github.com/openstack/ovn-octavia-provider/blob/49dbf521d6573a7a71060303512afd3397fdfda7/ovn_octavia_provider/event.py#L41-L44
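
For illustration, a hedged sketch of that direction (not necessarily the merged fix): subscribe the Logical_Router_Port event to ROW_UPDATE as well, so a later update to the lrp still re-triggers the LB/LS association. The handler names mirror the linked event.py [2]; treat the exact wiring as an assumption.

from ovsdbapp.backend.ovs_idl import event as row_event

class LogicalRouterPortEvent(row_event.RowEvent):
    def __init__(self, driver):
        table = 'Logical_Router_Port'
        # Also watch ROW_UPDATE, not just CREATE/DELETE as in [2].
        events = (self.ROW_CREATE, self.ROW_UPDATE, self.ROW_DELETE)
        super().__init__(events, table, None)
        self.event_name = 'LogicalRouterPortEvent'
        self.driver = driver

    def run(self, event, row, old):
        if event == self.ROW_DELETE:
            self.driver.lb_delete_lrp_assoc_handler(row)
        else:
            # CREATE and UPDATE both re-evaluate the LB <-> LS
            # association, covering the case where the network is
            # attached to the router after the LB was created.
            self.driver.lb_create_lrp_assoc_handler(row)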

Comment 65 Itzik Brown 2021-08-02 14:31:36 UTC
OpenShift on OpenStack
OCP 4.8.3
OSP 16.1.7
Installed and destroyed the OpenShift cluster a couple of times, verified that there are no DOWN trunk subports, and saw no messages in the Kuryr controller logs regarding DOWN ports.

Comment 77 errata-xmlrpc 2021-12-09 20:18:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.7 (Train) bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3762

