The FDP team is no longer accepting new bugs in Bugzilla. Please report your issues under the FDP project in Jira. Thanks.
Bug 1778164 - [OVN] Traffic not getting blocked when it should within the same Logical Switch
Summary: [OVN] Traffic not getting blocked when it should within the same Logical Switch
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.11
Version: FDP 19.G
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Dumitru Ceara
QA Contact: ying xu
URL:
Whiteboard:
Depends On:
Blocks: 1779115
 
Reported: 2019-11-29 11:53 UTC by Daniel Alvarez Sanchez
Modified: 2024-06-13 22:19 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1779115
Environment:
Last Closed: 2020-01-21 17:02:44 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker FD-332 (last updated 2024-03-25 15:38:20 UTC)
Red Hat Product Errata RHBA-2020:0190 (last updated 2020-01-21 17:02:56 UTC)

Description Daniel Alvarez Sanchez 2019-11-29 11:53:54 UTC
Two ports belonging to the same logical switch and with the ACLs described below can ping each other. However, the expected behavior is that the traffic is dropped.


()[root@controller-2 /]# ovn-nbctl list logical_switch_port 094db8e5-a604-49cb-b252-fed967764eb8
_uuid               : 094db8e5-a604-49cb-b252-fed967764eb8
addresses           : ["fa:16:3e:d1:15:fe 10.128.102.2"]
dhcpv4_options      : []
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="10.128.102.2/23", "neutron:device_id"="", "neutron:device_owner"="compute:kuryr", "neutron:network_name"="neutron-34a649a7-99a0-49c8-bcfe-07f6ffe86eff", "neutron:port_name"="", "neutron:project_id"="8064de6c4f82419ebc7c417db6d9d29d", "neutron:revision_number"="14", "neutron:security_group_ids"="3ebaa469-4f35-492c-8b91-0ffd61107503"}
ha_chassis_group    : []
name                : "5441c9be-a538-471c-a57a-1cf18b4843d2"
options             : {requested-chassis="compute-0.redhat.local"}
parent_name         : "60ee1e6f-e8b5-44cc-9edf-c9bc7223a4e4"
port_security       : ["fa:16:3e:d1:15:fe 10.128.102.2"]
tag                 : 4067
tag_request         : []
type                : ""
up                  : true




()[root@controller-2 /]# ovn-nbctl list logical_switch_port 9fad28c5-3dc7-4483-9142-9d8bf681467d
_uuid               : 9fad28c5-3dc7-4483-9142-9d8bf681467d
addresses           : ["fa:16:3e:b8:0c:c7 10.128.102.3"]
dhcpv4_options      : []
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : true
external_ids        : {"neutron:cidrs"="10.128.102.3/23", "neutron:device_id"="", "neutron:device_owner"="compute:kuryr", "neutron:network_name"="neutron-34a649a7-99a0-49c8-bcfe-07f6ffe86eff", "neutron:port_name"="", "neutron:project_id"="8064de6c4f82419ebc7c417db6d9d29d", "neutron:revision_number"="13", "neutron:security_group_ids"="3ebaa469-4f35-492c-8b91-0ffd61107503"}
ha_chassis_group    : []
name                : "2a8dd7d8-e942-4022-94c7-946d8b9333ce"
options             : {requested-chassis="compute-0.redhat.local"}
parent_name         : "60ee1e6f-e8b5-44cc-9edf-c9bc7223a4e4"
port_security       : ["fa:16:3e:b8:0c:c7 10.128.102.3"]
tag                 : 4057
tag_request         : []
type                : ""
up                  : true




()[root@controller-2 /]# ovn-nbctl list port_group f06fbd04-8075-4564-80bd-03f43e302ffa
_uuid               : f06fbd04-8075-4564-80bd-03f43e302ffa
acls                : [3a476e19-4688-4ba6-8606-c49f284d6fbe, 937c9fa0-6ee8-4f8a-918e-bf9857c6ac08]
external_ids        : {"neutron:security_group_id"="3ebaa469-4f35-492c-8b91-0ffd61107503"}
name                : "pg_3ebaa469_4f35_492c_8b91_0ffd61107503"
ports               : [094db8e5-a604-49cb-b252-fed967764eb8, 9fad28c5-3dc7-4483-9142-9d8bf681467d, bd7af4d7-2c26-498e-a05a-ecc10d8d5d4a]



()[root@controller-2 /]# ovn-nbctl find port_group name="neutron_pg_drop" | head -n3
_uuid               : 8a6e5451-42fe-4191-bad0-54845cd62bc3
acls                : [87c7ec4d-7e38-4c43-b426-0a7faf1346c4, e575bd08-df38-4e1a-a654-ad042a38e9f4]


Both ports belong to the port group 'neutron_pg_drop', which contains low-priority ACLs to drop all traffic:

()[root@controller-2 /]# ovn-nbctl find port_group name="neutron_pg_drop" | grep -c 9fad28c5
1
()[root@controller-2 /]# ovn-nbctl find port_group name="neutron_pg_drop" | grep -c 094db8e5
1
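
For reference, a cleaner way to check port-group membership than grepping the whole record would be to list just the ports column (an illustrative alternative, not from the original log):

ovn-nbctl --bare --columns=ports list port_group neutron_pg_drop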


()[root@controller-2 /]# ovn-nbctl list ACL 87c7ec4d-7e38-4c43-b426-0a7faf1346c4
_uuid               : 87c7ec4d-7e38-4c43-b426-0a7faf1346c4
action              : drop
direction           : from-lport
external_ids        : {}
log                 : false
match               : "inport == @neutron_pg_drop && ip"
meter               : []
name                : []
priority            : 1001
severity            : []


()[root@controller-2 /]# ovn-nbctl list ACL e575bd08-df38-4e1a-a654-ad042a38e9f4
_uuid               : e575bd08-df38-4e1a-a654-ad042a38e9f4
action              : drop
direction           : to-lport
external_ids        : {}
log                 : false
match               : "outport == @neutron_pg_drop && ip"
meter               : []
name                : []
priority            : 1001
severity            : []
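
For context, OVN evaluates ACLs in each direction from highest priority down, so a matching priority-1002 allow-related ACL takes precedence over these priority-1001 drops. A pair of drop ACLs like the ones above could be created roughly as follows (a sketch; Neutron actually creates them through the NB API):

ovn-nbctl --type=port-group acl-add neutron_pg_drop from-lport 1001 'inport == @neutron_pg_drop && ip' drop
ovn-nbctl --type=port-group acl-add neutron_pg_drop to-lport 1001 'outport == @neutron_pg_drop && ip' drop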


The ACLs applied to the security group's port group are the following:

()[root@controller-2 /]# ovn-nbctl list ACL 3a476e19-4688-4ba6-8606-c49f284d6fbe
_uuid               : 3a476e19-4688-4ba6-8606-c49f284d6fbe
action              : allow-related
direction           : from-lport
external_ids        : {"neutron:security_group_rule_id"="89b25a81-884d-4c80-b1d0-2d752c4c79de"}
log                 : false
match               : "inport == @pg_3ebaa469_4f35_492c_8b91_0ffd61107503 && ip4"
meter               : []
name                : []
priority            : 1002
severity            : []

()[root@controller-2 /]# ovn-nbctl list ACL 937c9fa0-6ee8-4f8a-918e-bf9857c6ac08
_uuid               : 937c9fa0-6ee8-4f8a-918e-bf9857c6ac08
action              : allow-related
direction           : to-lport
external_ids        : {"neutron:security_group_rule_id"="8f73302c-f6ba-4e01-a488-172eda5106c5"}
log                 : false
match               : "outport == @pg_3ebaa469_4f35_492c_8b91_0ffd61107503 && ip4 && ip4.src == 10.196.0.0/16"
meter               : []
name                : []
priority            : 1002
severity            : []


So, since ip4.src (10.128.102.x) is not within 10.196.0.0/16, the priority-1002 to-lport allow-related ACL should not match; the priority-1001 drop should take effect and the traffic is expected to be dropped.


Both ports are bound to the same chassis and the connection seems to be committed into conntrack.
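
One way to confirm which ACL admits or drops the packet is to run ovn-trace against the logical switch (a sketch using the port names and addresses from the records above; the switch name, TTL, and --ct state are assumptions):

ovn-trace --ct=new neutron-34a649a7-99a0-49c8-bcfe-07f6ffe86eff \
    'inport == "5441c9be-a538-471c-a57a-1cf18b4843d2" &&
     eth.src == fa:16:3e:d1:15:fe && eth.dst == fa:16:3e:b8:0c:c7 &&
     ip4.src == 10.128.102.2 && ip4.dst == 10.128.102.3 && ip.ttl == 64'

The conntrack entries themselves can be inspected on the compute node with ovs-appctl dpctl/dump-conntrack.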

Comment 3 Dumitru Ceara 2019-12-02 12:41:53 UTC
Fix sent upstream for review: https://patchwork.ozlabs.org/patch/1203132/

Comment 6 ying xu 2019-12-18 09:36:25 UTC
According to comment 3, I reproduced this bug on version ovn2.11.1-20 using the topology below:
[root@hp-dl380pg8-13 bz1778164]# ovn-nbctl show
switch 88beaab4-a051-47fe-a715-68f12728b6ab (s3)
    port s3_r1
        type: router
        addresses: ["00:de:ad:ff:01:03 172.16.103.1 2001:db8:103::1"]
        router-port: r1_s3
    port hv0_vm01_vnet1
        addresses: ["00:de:ad:00:01:01 172.16.103.21 2001:db8:103::21"]
    port hv1_vm01_vnet1
        addresses: ["00:de:ad:01:01:01 172.16.103.11 2001:db8:103::11"]
switch ad45813e-5898-4654-accb-7937521a2f7f (s2)
    port s2_r1
        type: router
        addresses: ["00:de:ad:ff:01:02 172.16.102.1 2001:db8:102::1"]
        router-port: r1_s2
    port hv0_vm00_vnet1
        addresses: ["00:de:ad:00:00:01 172.16.102.21 2001:db8:102::21"]
    port hv1_vm00_vnet1
        addresses: ["00:de:ad:01:00:01 172.16.102.11 2001:db8:102::11"]
router 63e929ad-11ab-4834-84f9-b615de34d182 (r1)
    port r1_s3
        mac: "00:de:ad:ff:01:03"
        networks: ["172.16.103.1/24", "2001:db8:103::1/64"]
    port r1_s2
        mac: "00:de:ad:ff:01:02"
        networks: ["172.16.102.1/24", "2001:db8:102::1/64"]
[root@hp-dl380pg8-13 bz1778164]# ovn-sbctl show
Chassis "hv0"
    hostname: "dell-per730-03.rhts.eng.pek2.redhat.com"
    Encap geneve
        ip: "20.0.63.26"
        options: {csum="true"}
    Port_Binding "hv0_vm00_vnet1"
    Port_Binding "hv0_vm01_vnet1"
Chassis "hv1"
    hostname: "hp-dl380pg8-13.rhts.eng.pek2.redhat.com"
    Encap geneve
        ip: "20.0.63.25"
        options: {csum="true"}
    Port_Binding "hv1_vm00_vnet1"
    Port_Binding "hv1_vm01_vnet1"
Test steps:
1. Add a port to the ACL port group and set some ACL rules.
2. Add a new port to the ACL port group; this new port should follow the ACL rules.

ovn-nbctl create Port_Group name=pg1 ports="$hv1_vm00_uuid $hv0_vm00_uuid"
ovn-nbctl create Port_Group name=pg2 ports="$hv1_vm01_uuid"
#no acl rules:
[root@localhost ~]# ping -c3 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 56(84) bytes of data.
64 bytes from 172.16.103.11: icmp_seq=1 ttl=63 time=0.923 ms
64 bytes from 172.16.103.11: icmp_seq=2 ttl=63 time=0.237 ms
64 bytes from 172.16.103.11: icmp_seq=3 ttl=63 time=0.102 ms

--- 172.16.103.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.102/0.420/0.923/0.360 ms

[root@localhost ~]# ping -c3 172.16.103.21
PING 172.16.103.21 (172.16.103.21) 56(84) bytes of data.
64 bytes from 172.16.103.21: icmp_seq=1 ttl=63 time=2.51 ms
64 bytes from 172.16.103.21: icmp_seq=2 ttl=63 time=0.503 ms
64 bytes from 172.16.103.21: icmp_seq=3 ttl=63 time=0.426 ms

--- 172.16.103.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms

#add acl rules
ovn-nbctl acl-add s2 to-lport 1001 "outport == @pg2 && ip4.src == $ip4_1_0_1" drop
ovn-nbctl acl-add s2 to-lport 1001 "outport == @pg2 && ip6.src == $ip6_1_0_1" drop
ovn-nbctl acl-add s3 to-lport 1001 "outport == @pg2 && ip4.src == $ip4_1_0_1" drop
ovn-nbctl acl-add s3 to-lport 1001 "outport == @pg2 && ip6.src == $ip6_1_0_1" drop
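
To double-check that the rules landed on both switches, the ACLs can be listed per switch (an illustrative verification step, not part of the original log):

ovn-nbctl acl-list s2
ovn-nbctl acl-list s3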

[root@localhost ~]# ping 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 56(84) bytes of data.

--- 172.16.103.11 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[root@localhost ~]# ping -c3 172.16.103.21
PING 172.16.103.21 (172.16.103.21) 56(84) bytes of data.
64 bytes from 172.16.103.21: icmp_seq=1 ttl=63 time=1.29 ms
64 bytes from 172.16.103.21: icmp_seq=2 ttl=63 time=0.368 ms
64 bytes from 172.16.103.21: icmp_seq=3 ttl=63 time=0.350 ms

--- 172.16.103.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.350/0.670/1.294/0.441 ms

#add a new port (the one attached to 172.16.103.21) to the pg2 port group; now ping to 172.16.103.21 still passes, but it should fail.
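The port attached to 172.16.103.21 is hv0_vm01_vnet1 in the topology above, so it could have been added roughly like this (a hypothetical command following the variable naming used earlier; the exact command was not captured in the log):

ovn-nbctl add Port_Group pg2 ports $hv0_vm01_uuid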
[root@localhost ~]# ping 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 56(84) bytes of data.

--- 172.16.103.11 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[root@localhost ~]# ping -c3 172.16.103.21
PING 172.16.103.21 (172.16.103.21) 56(84) bytes of data.
64 bytes from 172.16.103.21: icmp_seq=1 ttl=63 time=1.29 ms
64 bytes from 172.16.103.21: icmp_seq=2 ttl=63 time=0.368 ms
64 bytes from 172.16.103.21: icmp_seq=3 ttl=63 time=0.350 ms

--- 172.16.103.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.350/0.670/1.294/0.441 ms


Verified on version ovn2.11.1-24:
[root@localhost ~]# ping 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 56(84) bytes of data.

--- 172.16.103.11 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[root@localhost ~]# ping 172.16.103.21
PING 172.16.103.21 (172.16.103.21) 56(84) bytes of data.

--- 172.16.103.21 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

Comment 8 errata-xmlrpc 2020-01-21 17:02:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0190

