Bug 1643900 - Instances with floating IPs residing on two different subnets with their own routers cannot access each other using FIPs on a RHOSP with OVN
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: z1
Target Release: 14.0 (Rocky)
Assignee: Daniel Alvarez Sanchez
QA Contact: Roman Safronov
Duplicates: 1663265 (view as bug list)
Depends On: 1642830
Blocks: 1643902 1643905
 
Reported: 2018-10-29 12:11 UTC by Daniel Alvarez Sanchez
Modified: 2019-10-28 15:07 UTC
CC List: 10 users

Fixed In Version: python-networking-ovn-5.0.2-0.20181217155313.726820a.el7ost
Doc Type: No Doc Update
Clone Of: 1642830
Clones: 1643902 (view as bug list)
Last Closed: 2019-03-18 12:57:08 UTC




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 613584 0 'None' MERGED Clean MAC_Binding entries when (dis)associating a FIP 2020-12-02 18:31:39 UTC
OpenStack gerrit 619268 0 'None' MERGED Clean MAC_Binding entries when (dis)associating a FIP 2020-12-02 18:31:39 UTC
Red Hat Product Errata RHBA-2019:0585 0 None None None 2019-03-18 12:57:19 UTC

Description Daniel Alvarez Sanchez 2018-10-29 12:11:29 UTC
+++ This bug was initially created as a clone of Bug #1642830 +++

Description of problem:
Instances with floating IPs residing on two different subnets with their own routers cannot access each other using FIPs on a RHOSP deployment with OVN DVR.


Version-Release number of selected component (if applicable):
RHOSP 13 with all packages and containers from the RH CDN, except for openstack-neutron-server-ovn, which was hot-fixed with hotfix-bz1608951.
The environment has three controllers running in VMs and a single bare-metal compute node.

How reproducible:
Deploy OCP with FIPs for the OCP nodes and Kuryr on RHOSP with OVN HA DVR. Deployment fails while trying to register the nodes with the RH CDN, because some nodes cannot resolve domain names via the DNS server.

To summarize what happens:

- Two tenant networks with their own subnets, named dns_subnet and ocp_subnet.
- Both subnets are connected to routers in their respective networks.
- Both routers are connected to a public, external network.
- A single instance is running on dns_subnet.
- 4 other instances are running on ocp_subnet.
- All five instances have floating IPs and are reachable from an external bastion host.
- One of the 4 servers can reach the instance running on dns_subnet.
- The remaining 3 servers cannot reach the instance on dns_subnet.
- All 4 servers can ping each other's floating IPs as well as the bastion host's FIP.
- If the floating IPs are removed from the three failing hosts, those servers can also start reaching the instance on dns_subnet.


--- Additional comment from Daniel Alvarez Sanchez on 2018-10-25 09:42:47 EDT ---

I was working on the same issue on a different environment (@ltomasbo's). What I saw is that two FIPs belonging to VMs in the same subnet had different MAC addresses. Not sure why but this was obviously not correct.

I deleted all the Mac_Binding entries from the SB database and they got repopulated correctly afterwards:

During the issue:

_uuid               : 003cb3e0-0c5b-4888-8d32-35631615c4bc
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.18"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:7e:ad:6d"

_uuid               : 3340e032-406e-44ae-84c7-9a493d6ca5da
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.20"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:38:22:6d"



This shouldn't be the case, as the MAC address shouldn't differ, so I cleared all entries from the Mac_Binding table:

for i in $(ovn-sbctl list mac_bind | grep _uuid | awk {'print $3'}); do sudo ovn-sbctl destroy mac_binding $i; done


And then both source FIPs have the same MAC address:


_uuid               : 799dfd19-30e2-47b8-9691-3fbd32f33d81
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.18"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:7e:ad:6d"

_uuid               : 2f8b6937-e00a-4588-a145-ce12e58b53d7
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.20"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:7e:ad:6d"


---
Port_Binding

_uuid               : 638de6e4-6bcd-46ad-9f08-92551f0efa96
chassis             : []
datapath            : 4b4884a6-d0dc-4530-9f14-438ae6a43f8f
external_ids        : {"neutron:cidrs"="172.24.5.4/24", "neutron:device_id"="b8b47069-7c10-4921-a595-cfe84d970b8c", "neutron:device_owner"="network:router_gateway", "neutron:network_name"="neutron-00c64843-74e4-4ab5-9a54-1a6f882f9a87", "neutron:port_name"="", "neutron:project_id"="", "neutron:revision_number"="4", "neutron:security_group_ids"=""}
gateway_chassis     : []
logical_port        : "1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c"
mac                 : [router]
nat_addresses       : ["fa:16:3e:7e:ad:6d 172.24.5.4 172.24.5.5 172.24.5.18 172.24.5.20 172.24.5.4 172.24.5.4 is_chassis_resident(\"cr-lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c\")"]
options             : {peer="lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c"}
parent_port         : []
tag                 : []
tunnel_key          : 5
type                : patch
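For reference, a nat_addresses entry like the one in the record above packs a MAC, a list of NAT addresses, and an is_chassis_resident() match into a single string. A minimal, hypothetical parser (the helper name is illustrative, not part of any OVN tooling):

```python
import re

# Hypothetical sketch: split an OVN Port_Binding nat_addresses entry
# (format as seen in the record above) into its MAC, the NAT addresses,
# and the port named in the is_chassis_resident() clause.

def parse_nat_addresses(entry):
    m = re.match(
        r'^([0-9a-f]{2}(?::[0-9a-f]{2}){5})\s+'   # MAC address
        r'((?:\d{1,3}(?:\.\d{1,3}){3}\s*)+?)\s*'  # one or more IPv4 addresses
        r'(?:is_chassis_resident\("([^"]+)"\))?$',
        entry)
    if m is None:
        raise ValueError("unrecognized nat_addresses entry: %r" % entry)
    mac, ips, resident_port = m.groups()
    return mac, ips.split(), resident_port

# Entry copied verbatim from the Port_Binding record above.
entry = ('fa:16:3e:7e:ad:6d 172.24.5.4 172.24.5.5 172.24.5.18 '
         '172.24.5.20 172.24.5.4 172.24.5.4 '
         'is_chassis_resident("cr-lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c")')
mac, ips, port = parse_nat_addresses(entry)
```

Note that the FIPs 172.24.5.18 and 172.24.5.20 discussed above both appear under the single gateway MAC fa:16:3e:7e:ad:6d.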


_uuid               : 39526ce5-06a8-4403-8dc1-5243461ad41e
chassis             : 17fca138-e1a4-4aa6-ae37-091d64a71ede
datapath            : c5030e1f-8362-4556-8975-b913519566d2
external_ids        : {}
gateway_chassis     : [78126282-4e07-40e7-8373-d5d948bc80d8]
logical_port        : "cr-lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c"
mac                 : ["fa:16:3e:7e:ad:6d 172.24.5.4/24"]
nat_addresses       : []
options             : {distributed-port="lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c"}
parent_port         : []
tag                 : []
tunnel_key          : 2
type                : chassisredirect



After this, traffic worked fine.

Can you please try the same and post the results back here:

1. Dump Mac_Binding table contents through ovn-sbctl list Mac_Binding
2. Flush it through: 
for i in $(ovn-sbctl list mac_bind | grep _uuid | awk {'print $3'}); do sudo ovn-sbctl destroy mac_binding $i; done
3. Dump Mac_Binding table contents through ovn-sbctl list Mac_Binding
4. Check if traffic works now

If traffic works, please attach before/after dumps of Mac_Binding table and, if possible, try to spot where the wrong MAC address could come from.
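The flush in step 2 can also be done from a small script; this hypothetical sketch only demonstrates extracting the _uuid values from an abridged listing taken from this report, with the destructive ovn-sbctl call left commented out:

```python
# Sketch of the flush in step 2: collect the _uuid of every Mac_Binding
# row from "ovn-sbctl list Mac_Binding" output, then destroy each row.
# The sample listing is abridged from this report; the actual ovn-sbctl
# invocation is commented out so the parsing can be shown standalone.

def extract_uuids(listing):
    return [line.split(":", 1)[1].strip()
            for line in listing.splitlines()
            if line.startswith("_uuid")]

sample = """\
_uuid               : 003cb3e0-0c5b-4888-8d32-35631615c4bc
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.18"

_uuid               : 3340e032-406e-44ae-84c7-9a493d6ca5da
datapath            : 07a76c72-6896-464a-8683-3df145d02434
ip                  : "172.24.5.20"
"""

for uuid in extract_uuids(sample):
    # subprocess.run(["ovn-sbctl", "destroy", "mac_binding", uuid], check=True)
    pass
```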

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-25 10:04:48 EDT ---

[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ grep 172.24.5.4 * | grep -i Claiming
grep: archive: Is a directory
ovn-controller.log:2018-10-19T14:51:13.830Z|00041|binding|INFO|cr-lrp-3dc992c6-bdc1-4a21-bdba-22c886af14ff: Claiming fa:16:3e:2e:58:86 172.24.5.4/24
ovn-controller.log:2018-10-19T15:01:05.042Z|00063|binding|INFO|cr-lrp-1aad13ee-ac8f-4f57-9e4a-def53743c2a1: Claiming fa:16:3e:c3:94:19 172.24.5.4/24
ovn-controller.log:2018-10-21T15:14:00.002Z|00130|binding|INFO|cr-lrp-b9e7fba1-7f1c-4df5-84e9-c54f17cec81e: Claiming fa:16:3e:38:22:6d 172.24.5.4/24
ovn-controller.log:2018-10-23T10:11:24.984Z|00100|binding|INFO|cr-lrp-1ba11619-b8ef-4d0c-a9d5-a63f0ef17e7c: Claiming fa:16:3e:7e:ad:6d 172.24.5.4/24
ovn-controller.log:2018-10-25T13:45:42.546Z|00123|binding|INFO|cr-lrp-198e5576-b654-4605-80c0-b9cf6d21ea2b: Claiming fa:16:3e:5e:86:02 172.24.5.4/24


Looks like the FIP retained the MAC address that the cr-lrp used to have at some point ^ (see the cr-lrp entry on 2018-10-23).

ip                  : "172.24.5.20"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:7e:ad:6d"

Then on 2018-10-25 another cr-lrp was bound to that chassis but the MAC_Binding entry of that port was not updated.

It looks to me like an ovn-controller bug: the entry in the MAC_Binding table is not updated properly.
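The suspected staleness can be expressed as a small check; this is only an illustration, with values copied from the log excerpts above:

```python
# Illustration of the inconsistency described above: the latest MAC
# claimed for 172.24.5.4/24 by a cr-lrp (per the ovn-controller.log
# excerpts) no longer matches the MAC a MAC_Binding row still
# advertises for FIP 172.24.5.20. All values are from this report.

claims = [  # (timestamp, claimed MAC) for 172.24.5.4/24, in log order
    ("2018-10-23T10:11:24Z", "fa:16:3e:7e:ad:6d"),
    ("2018-10-25T13:45:42Z", "fa:16:3e:5e:86:02"),
]
current_mac = max(claims)[1]  # ISO timestamps sort lexically; latest claim wins

binding = {"ip": "172.24.5.20", "mac": "fa:16:3e:7e:ad:6d"}
binding_is_stale = binding["mac"] != current_mac
```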

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-25 10:53:57 EDT ---

I confirmed that clearing the MAC_Binding table heals the situation:


[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ ovn-sbctl list mac_bind |grep lrp-82af833f-f78b-4f45-9fc8-719db0f9e619  -C1
ip                  : "172.24.5.21"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.6"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:14:48:20"
--
ip                  : "172.24.5.4"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.29"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.11"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.1"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "4a:4b:15:7d:59:4d"
--
ip                  : "172.24.5.9"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:14:48:20"




external_ip         : "172.24.5.21"
external_mac        : []
logical_ip          : "192.168.99.5"
logical_port        : []
type                : dnat_and_snat
--
external_ip         : "172.24.5.4"
external_mac        : []
logical_ip          : "192.168.99.13"
logical_port        : []
type                : dnat_and_snat
--
external_ip         : "172.24.5.9"
external_mac        : []
logical_ip          : "192.168.99.14"
logical_port        : []
type                : dnat_and_snat
--
external_ip         : "172.24.5.8"
external_mac        : []
logical_ip          : "192.168.23.5"
logical_port        : []
type                : dnat_and_snat



[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ for i in 21 4 9;  do sudo ovn-sbctl list mac_bind |grep 172.24.5.$i -A2; done
ip                  : "172.24.5.21"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"

ip                  : "172.24.5.4"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"

ip                  : "172.24.5.9"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:14:48:20"    <- only one working



Old entry:
[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ grep fa:16:3e:5e:86:02 -r /opt/stack/logs/ovn-controller.log
2018-10-25T13:45:42.546Z|00123|binding|INFO|cr-lrp-198e5576-b654-4605-80c0-b9cf6d21ea2b: Claiming fa:16:3e:5e:86:02 172.24.5.4/24



[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ grep fa:16:3e:14:48:20 /opt/stack/logs/ovn-controller.log
2018-10-25T14:03:42.114Z|00126|binding|INFO|cr-lrp-f09b2186-1cb2-4e50-99a5-587f680db8ad: Claiming fa:16:3e:14:48:20 172.24.5.6/24



Port_Binding:

_uuid               : dae11bdb-47d3-471e-8826-9aefb8572700
chassis             : []
datapath            : 4b4884a6-d0dc-4530-9f14-438ae6a43f8f
external_ids        : {"neutron:cidrs"="172.24.5.6/24", "neutron:device_id"="aab4b639-66c7-4eaf-8285-72634c69b46a", "neutron:device_owner"="network:router_gateway", "neutron:network_name"="neutron-00c64843-74e4-4ab5-9a54-1a6f882f9a87", "neutron:port_name"="", "neutron:project_id"="", "neutron:revision_number"="4", "neutron:security_group_ids"=""}
gateway_chassis     : []
logical_port        : "f09b2186-1cb2-4e50-99a5-587f680db8ad"
mac                 : [router]
nat_addresses       : ["fa:16:3e:14:48:20 172.24.5.4 172.24.5.9 172.24.5.6 172.24.5.6 172.24.5.6 172.24.5.21 is_chassis_resident(\"cr-lrp-f09b2186-1cb2-4e50-99a5-587f680db8ad\")"]
options             : {peer="lrp-f09b2186-1cb2-4e50-99a5-587f680db8ad"}
parent_port         : []
tag                 : []
tunnel_key          : 5
type                : patch




[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ sudo ovn-sbctl list mac_bind |grep fa:16:3e:5e:86:02 -B2
ip                  : "172.24.5.21"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.4"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.11"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"
--
ip                  : "172.24.5.29"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"



    nat 06cfd4d8-d02e-4b88-8cde-4bfb11dd61ea
        external ip: "172.24.5.4"
        logical ip: "192.168.99.13"
        type: "dnat_and_snat"
    nat 0abe3ae4-be74-4450-bd46-4aeb934a44ce
        external ip: "172.24.5.9"
        logical ip: "192.168.99.14"
        type: "dnat_and_snat"
    nat cb41a0e0-bbbb-404a-a130-9ad889b21ba9
        external ip: "172.24.5.21"
        logical ip: "192.168.99.5"
        type: "dnat_and_snat"

router 74cc2395-4f3f-416a-a1b2-6218489507d6 (neutron-aab4b639-66c7-4eaf-8285-72634c69b46a) (aka openshift-ansible-openshift.example.com-router)
    port lrp-f09b2186-1cb2-4e50-99a5-587f680db8ad
        mac: "fa:16:3e:14:48:20"
        networks: ["172.24.5.6/24"]



    port lrp-f09b2186-1cb2-4e50-99a5-587f680db8ad
        mac: "fa:16:3e:14:48:20"
        networks: ["172.24.5.6/24"]
        gateway chassis: [466a501a-64f2-4812-a181-ed20485977f9]


Cleaned the mac_binding table and it healed:

[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ grep 172.24.5.21 /tmp/mac_binding.before -A2
ip                  : "172.24.5.21"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:5e:86:02"

[stack@master-65c4620f651202f7540607ad6957ba8a3487e0davm-0 logs(keystone_demo)]$ grep 172.24.5.21 /tmp/mac_binding.after -A2
ip                  : "172.24.5.21"
logical_port        : "lrp-82af833f-f78b-4f45-9fc8-719db0f9e619"
mac                 : "fa:16:3e:14:48:20"



I will dig a little further into this tomorrow, but it's clearly an ovn-controller issue, as networking-ovn doesn't touch this table.

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-26 03:05:58 EDT ---

I checked on a DVR setup and the symptoms are the same.
Ping does not work from VM1 with FIP 10.46.22.172 to VM2 with FIP 10.46.22.163.
ICMP reply packets are not arriving at the compute node because the NAT actions contain the wrong MAC address.

[cloud-user@ansible-host-0 ~]$ ping 10.46.22.163 -c5
PING 10.46.22.163 (10.46.22.163) 56(84) bytes of data.

--- 10.46.22.163 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms




()[root@controller-0 dalvarez]# ovn-sbctl list mac_binding |grep "10.46.22.172" -C3

_uuid               : 79af67e1-1e38-4668-96bc-023222af20e2
datapath            : 6de89959-9c59-4bd8-98d4-2fc01c4ab8d0
ip                  : "10.46.22.172"
logical_port        : "lrp-8b7c7e8d-97d3-43df-9256-f2a11bc8fa2f"
mac                 : "fa:16:3e:d6:b0:63"

--

_uuid               : b78d8dfc-6b7f-4fc5-9ca1-60f376b86497
datapath            : 8194fff2-1cad-4872-867a-7abd794c8db8
ip                  : "10.46.22.172"
logical_port        : "lrp-a0a3e9b7-10ef-4abb-88e3-679c5cad37ce"
mac                 : "fa:16:3e:66:e0:3f"


The MAC address should be the same for both, as this is DVR, and it should be set to the MAC address of the neutron FIP port:

(overcloud) [stack@undercloud-0 ~]$ openstack port list|grep 10.46.22.172
| 752d8b3b-0003-4c07-a602-414a420caf05 |                                                                               | fa:16:3e:d6:b0:63 | ip_address='10.46.22.172', subnet_id='576bf696-9e04-456b-8330-2de0c3a621bd'   | N/A    |
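That rule can be stated as a tiny consistency check (hypothetical helper; the rows and the port MAC are copied from the output above):

```python
# Sketch of the check described above: on a DVR deployment, every
# MAC_Binding row for a floating IP should carry the MAC of its
# neutron FIP port. Rows and the port MAC are copied from the
# ovn-sbctl / openstack output above; the helper name is illustrative.

def inconsistent_bindings(rows, fip_ip, fip_port_mac):
    """Return MAC_Binding rows for fip_ip whose MAC disagrees with the port."""
    return [r for r in rows if r["ip"] == fip_ip and r["mac"] != fip_port_mac]

rows = [
    {"ip": "10.46.22.172", "mac": "fa:16:3e:d6:b0:63",
     "logical_port": "lrp-8b7c7e8d-97d3-43df-9256-f2a11bc8fa2f"},
    {"ip": "10.46.22.172", "mac": "fa:16:3e:66:e0:3f",
     "logical_port": "lrp-a0a3e9b7-10ef-4abb-88e3-679c5cad37ce"},
]
bad = inconsistent_bindings(rows, "10.46.22.172", "fa:16:3e:d6:b0:63")
# 'bad' holds exactly the row whose MAC_Binding entry needs destroying.
```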


Now I destroy the MAC_Binding entry:

()[root@controller-0 dalvarez]# ovn-sbctl destroy mac_binding b78d8dfc-6b7f-4fc5-9ca1-60


And ovn-controller has recreated it properly:

()[root@controller-0 dalvarez]# ovn-sbctl list mac_binding |grep "10.46.22.172" -C3

_uuid               : 79af67e1-1e38-4668-96bc-023222af20e2
datapath            : 6de89959-9c59-4bd8-98d4-2fc01c4ab8d0
ip                  : "10.46.22.172"
logical_port        : "lrp-8b7c7e8d-97d3-43df-9256-f2a11bc8fa2f"
mac                 : "fa:16:3e:d6:b0:63"

--

_uuid               : d8bc10eb-b80b-4592-b87a-9d670b69ffa0
datapath            : 8194fff2-1cad-4872-867a-7abd794c8db8
ip                  : "10.46.22.172"
logical_port        : "lrp-a0a3e9b7-10ef-4abb-88e3-679c5cad37ce"
mac                 : "fa:16:3e:d6:b0:63"



Ping now works:


[cloud-user@ansible-host-0 ~]$ ping 10.46.22.163 -c5
PING 10.46.22.163 (10.46.22.163) 56(84) bytes of data.
64 bytes from 10.46.22.163: icmp_seq=1 ttl=62 time=2.42 ms
64 bytes from 10.46.22.163: icmp_seq=2 ttl=62 time=1.36 ms
64 bytes from 10.46.22.163: icmp_seq=3 ttl=62 time=0.629 ms
64 bytes from 10.46.22.163: icmp_seq=4 ttl=62 time=0.631 ms
64 bytes from 10.46.22.163: icmp_seq=5 ttl=62 time=0.544 ms

--- Additional comment from Mohammed Salih on 2018-10-26 05:10:08 EDT ---

Sorry Daniel for the late reply. Is there anything I should do in the environment? In fact, you may access the environment (titan21 and titan22) if that speeds up your troubleshooting.

Cheers
M Salih

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-26 05:43:28 EDT ---

I have reported the bug upstream at: https://mail.openvswitch.org/pipermail/ovs-discuss/2018-October/047604.html

I am still trying to confirm if there's something else apart from this as Luis told me that this may happen as well on fresh deployments (in this case, it's another bug as this one happens only when reusing FIPs).

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-26 09:52:49 EDT ---

I have submitted a workaround to upstream master in networking-ovn while we work on the proper fix in openvswitch.

Please, @ltomasbo can you try it locally in your shiftstack setup?

--- Additional comment from Daniel Alvarez Sanchez on 2018-10-26 09:53:42 EDT ---

Workaround: https://review.openstack.org/#/c/613584/
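Going by the patch title ("Clean MAC_Binding entries when (dis)associating a FIP"), the idea can be sketched roughly as follows; the list of dicts stands in for the southbound MAC_Binding table, and the function name is illustrative, not networking-ovn's actual API:

```python
# Rough sketch of the workaround's idea: whenever a floating IP is
# associated or disassociated, drop any MAC_Binding rows still holding
# that address so ovn-controller re-learns the correct MAC. The list of
# dicts stands in for the SB MAC_Binding table; names are illustrative.

def clean_fip_mac_bindings(mac_binding_rows, fip_address):
    """Split rows into (kept, purged) around a (dis)associated FIP."""
    purged = [r for r in mac_binding_rows if r["ip"] == fip_address]
    kept = [r for r in mac_binding_rows if r["ip"] != fip_address]
    return kept, purged

# Entries mirroring the ones shown earlier in this report.
rows = [
    {"ip": "172.24.5.18", "mac": "fa:16:3e:7e:ad:6d"},
    {"ip": "172.24.5.20", "mac": "fa:16:3e:38:22:6d"},  # stale entry
]
rows, purged = clean_fip_mac_bindings(rows, "172.24.5.20")
```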

--- Additional comment from Luis Tomas Bolivar on 2018-10-29 08:03:29 EDT ---

Tested the patch (https://review.openstack.org/#/c/613584) on my env and it solves the issue.

Comment 2 Lucas Alvares Gomes 2019-01-15 13:31:07 UTC
*** Bug 1663265 has been marked as a duplicate of this bug. ***

Comment 5 Roman Safronov 2019-02-28 09:52:20 UTC
Verified on puddle: 14.0-RHEL-7/2019-02-22.2
rpm: python-networking-ovn-5.0.2-0.20181217155313.726820a.el7ost.noarch

Used the following scenario (taken from https://bugzilla.redhat.com/show_bug.cgi?id=1643905#c8 as a reproduction scenario):

1. Created internal network internal_A, subnet_A (192.168.3.0/24), and router_A. Connected router_A to the external network. Connected internal network internal_A to the router. Created a VM connected to the internal_A network. Attached a floating IP to the VM.

2. Created internal network internal_B, subnet_B (192.168.4.0/24), and router_B. Connected router_B to the external network. Connected internal network internal_B to the router. Created a VM connected to the internal_B network. Attached a floating IP to the VM.

3. Tried to ping the VMs' floating IPs from the external network and from the other VM with a floating IP. All worked.

4. Deleted one of the floating IPs, then re-created it with the same IP address using 'openstack floating ip create --floating-ip-address 10.0.0.220 nova' ('nova' is the external network name).

5. Attached the FIP to the other VM on the other internal network, connected to the other router. A new mac_binding entry in the southbound DB was created (OK). Ping and ssh to this VM via the FIP worked, and the arp table on the hypervisor was updated with the corresponding MAC for 10.0.0.220.

6. Created internal network internal_C, subnet_C (192.168.5.0/24), and router_C. Connected router_C to the external network. Connected internal network internal_C to the router. Created a VM connected to the internal_C network.

7. Deleted FIP 10.0.0.220 and re-created it (see step 4), then attached it to the VM connected to the internal_C network and router_C.

8. Connected to the VM on the internal_B network (via the internal interface in a namespace on the compute node) and tried to ping 10.0.0.220; ping worked, and the arp table on the hypervisor and the OVN mac bindings were updated properly. Also tested pings from other VMs (with and without FIPs) to different floating IPs; all succeeded.

Comment 8 errata-xmlrpc 2019-03-18 12:57:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0585

