Description of problem:

I noticed that one port had been set as virtual even though no other port had the same IP address configured in the allowed address pairs extension within the same network. I hit this issue while working on scale tests, when the subnet is heavily used.

My example is as follows:

---------------------------------------------------------
()[root@controller-0 flows-distribution]# ovn-nbctl list logical_switch_port 6c788bc2-beb1-4f12-bf93-b45433241b90
_uuid               : 6c788bc2-beb1-4f12-bf93-b45433241b90
addresses           : ["fa:16:3e:23:b5:50 10.2.3.19"]
dhcpv4_options      : 59f7a3c1-9a6f-45e4-92ec-8a3263650ab0
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="10.2.3.19/24", "neutron:device_id"="", "neutron:device_owner"="", "neutron:network_name"="neutron-b55fccdf-e79e-4967-b87c-e7380d57e678", "neutron:port_name"="s_rally_e0292957_jPyoaKst", "neutron:project_id"="6e447c71b8bf480d80e008bb72015832", "neutron:revision_number"="3", "neutron:security_group_ids"="abb6d545-2647-42ca-a832-4e0945c72249"}
ha_chassis_group    : []
name                : "ec863657-2757-4636-a434-b7a67b41c9cb"
options             : {requested-chassis="compute-1.redhat.local", virtual-ip="10.2.3.19", virtual-parents="21cf8792-2944-46cc-bc73-97c761a50f25"}
parent_name         : []
port_security       : ["fa:16:3e:23:b5:50 10.2.3.19"]
tag                 : []
tag_request         : []
type                : virtual
up                  : true
()[root@controller-0 flows-distribution]#
---------------------------------------------------------

And the port that has been identified as the parent of the previous port:

---------------------------------------------------------
()[root@controller-0 flows-distribution]# ovn-nbctl list logical_switch_port 21cf8792-2944-46cc-bc73-97c761a50f25
_uuid               : b8195549-ccb4-4c4c-b228-2e2e095cf017
addresses           : ["fa:16:3e:be:07:5c 10.2.3.191"]
dhcpv4_options      : 59f7a3c1-9a6f-45e4-92ec-8a3263650ab0
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="10.2.3.191/24", "neutron:device_id"="", "neutron:device_owner"="", "neutron:network_name"="neutron-b55fccdf-e79e-4967-b87c-e7380d57e678", "neutron:port_name"="s_rally_e0292957_K7uVoGwV", "neutron:project_id"="6e447c71b8bf480d80e008bb72015832", "neutron:revision_number"="3", "neutron:security_group_ids"="abb6d545-2647-42ca-a832-4e0945c72249"}
ha_chassis_group    : []
name                : "21cf8792-2944-46cc-bc73-97c761a50f25"
options             : {requested-chassis="compute-1.redhat.local"}
parent_name         : []
port_security       : ["fa:16:3e:be:07:5c 10.2.3.191"]
tag                 : []
tag_request         : []
type                : ""
up                  : true
---------------------------------------------------------

In this line:

https://github.com/openstack/neutron/blob/cb55643a0695ebc5b41f50f6edb1546bcc676b71/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L209

there is a broken check, "and virtual_ip in ps". It performs a substring match instead of comparing actual IP addresses. So, for example, if there is a port with IP 10.11.1.200 and another port is created with IP 10.11.1.20, the second port's fixed IP matches the Python 'in' comparison:

'10.11.1.20' in '10.11.1.200'

Version-Release number of selected component (if applicable):
13.0-RHEL-7/2020-05-28.2

How reproducible:

Steps to Reproduce:
1. neutron port-create nova --fixed-ip ip_address=10.0.0.200 --name port-1
2. neutron port-create nova --fixed-ip ip_address=10.0.0.20 --name port-2
3. ovn-nbctl list logical_switch_port <uuid_of_port-2>

Actual results:
The LSP of port-2 has type "virtual" and points to port-1 as its virtual parent.

Expected results:
The LSP type should be "".

Additional info:
This breaks connectivity for port-1, and from a probabilistic point of view it can break ports quite often.
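The substring pitfall can be demonstrated in a few lines of plain Python. This is an illustrative sketch, not Neutron's actual code; the variable names are mine, and `ps` stands in for a port_security entry of the form "<mac> <ip>" as seen in the LSP output above:

```python
# A candidate virtual IP and a port_security entry whose IP merely
# starts with the same digits (illustrative values, not from Neutron).
virtual_ip = "10.11.1.20"
ps = "fa:16:3e:be:07:5c 10.11.1.200"

# Broken check: plain substring membership. It matches because
# "10.11.1.20" is a prefix of "10.11.1.200".
broken_match = virtual_ip in ps

# Safer check: tokenize the entry and compare whole addresses only.
exact_match = virtual_ip in ps.split()

print(broken_match)  # True  -> port would wrongly be classified as virtual
print(exact_match)   # False -> the addresses are in fact different
```

The fix, in essence, is to compare complete address tokens (or parsed IP objects) rather than raw substrings.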
Verified on puddle 16.1-RHEL-8/RHOS-16.1-RHEL-8-20200610.n.0 with python3-networking-ovn-7.2.1-0.20200610130354.4dfc438.el8ost.noarch.

Created the ports according to the reproduction scenario and verified that the logical switch port type is set to "" rather than "virtual". Launched instances using these ports and confirmed that connectivity works to both ports.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3148