Bug 2167839

Summary: [16.2][OVN][HWOFFLOAD][CONNTRACK][TRANSPARENT VLAN] Security groups not working with transparent vlan ports
Product: Red Hat OpenStack
Component: openstack-neutron
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Miguel Angel Nieto <mnietoji>
Assignee: Miro Tomaska <mtomaska>
QA Contact: Eran Kuris <ekuris>
Docs Contact:
CC: chrisw, mtomaska, scohen
Flags: mtomaska: needinfo-
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-03-09 09:57:07 UTC
Type: Bug
Regression: ---
Embargoed:

Description Miguel Angel Nieto 2023-02-07 16:47:23 UTC
Description of problem:
Pings over transparent VLAN ports fail when security groups are applied, even though the security group allows ICMP.

I have configured 2 VMs, each with a Geneve port used as the parent port and a transparent VLAN port on top of it. These are the VMs:

(overcloud) [stack@undercloud-0 ~]$ openstack server list --all-projects
+--------------------------------------+-----------------------------------------+--------+------------------------------------------------------+----------------------------------------+--------------------+
| ID                                   | Name                                    | Status | Networks                                             | Image                                  | Flavor             |
+--------------------------------------+-----------------------------------------+--------+------------------------------------------------------+----------------------------------------+--------------------+
| eb44b0a3-62c6-4fd3-a15e-9a2e2d2dd9ab | tempest-TestNfvOffload-server-794623600 | ACTIVE | mellanox-geneve-provider=20.20.220.176, 10.10.149.80 | rhel-guest-image-8.7-1660.x86_64.qcow2 | nfv_qe_base_flavor |
| f4caec20-079f-4857-8363-654c52c7e330 | tempest-TestNfvOffload-server-847162446 | ACTIVE | mellanox-geneve-provider=20.20.220.115, 10.10.149.93 | rhel-guest-image-8.7-1660.x86_64.qcow2 | nfv_qe_base_flavor |
+--------------------------------------+-----------------------------------------+--------+------------------------------------------------------+----------------------------------------+--------------------+

Inside each VM there are the two interfaces (a sketch of how the VLAN subinterface might have been created is shown after the list):
eth0: geneve port
eth0.148@eth0: transparent vlan port
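
For reference, a minimal sketch of how such a subinterface could be created inside the guest, assuming plain iproute2 and using the VLAN ID and address visible in the output below (the actual commands used by the test framework are not shown here):

# Sketch: add a VLAN 148 subinterface on top of the Geneve-backed eth0 and address it
sudo ip link add link eth0 name eth0.148 type vlan id 148
sudo ip addr add 60.60.220.102/24 dev eth0.148
sudo ip link set eth0.148 up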

[cloud-user@tempest-testnfvoffload-server-794623600 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8942 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:0a:4b:01 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    altname ens4
    inet 20.20.220.176/24 brd 20.20.220.255 scope global dynamic noprefixroute eth0
       valid_lft 42607sec preferred_lft 42607sec
    inet6 fe80::f816:3eff:fe0a:4b01/64 scope link 
       valid_lft forever preferred_lft forever
3: eth0.148@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8938 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:0a:4b:01 brd ff:ff:ff:ff:ff:ff
    inet 60.60.220.102/24 scope global eth0.148
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe0a:4b01/64 scope link 
       valid_lft forever preferred_lft forever

[cloud-user@tempest-testnfvoffload-server-847162446 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8942 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:d6:73:2e brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    altname ens4
    inet 20.20.220.115/24 brd 20.20.220.255 scope global dynamic noprefixroute eth0
       valid_lft 42603sec preferred_lft 42603sec
    inet6 fe80::f816:3eff:fed6:732e/64 scope link 
       valid_lft forever preferred_lft forever
3: eth0.148@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8938 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:d6:73:2e brd ff:ff:ff:ff:ff:ff
    inet 60.60.220.101/24 scope global eth0.148
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fed6:732e/64 scope link 
       valid_lft forever preferred_lft forever

Pinging from one VM to the other over the transparent VLAN port fails. There is no issue with the other interface.
[cloud-user@tempest-testnfvoffload-server-847162446 ~]$ ping 60.60.220.102
PING 60.60.220.102 (60.60.220.102) 56(84) bytes of data.
From 60.60.220.101 icmp_seq=1 Destination Host Unreachable
From 60.60.220.101 icmp_seq=2 Destination Host Unreachable
From 60.60.220.101 icmp_seq=3 Destination Host Unreachable
From 60.60.220.101 icmp_seq=4 Destination Host Unreachable
From 60.60.220.101 icmp_seq=5 Destination Host Unreachable
From 60.60.220.101 icmp_seq=6 Destination Host Unreachable

The instances have a security group configured:
openstack server show eb44b0a3-62c6-4fd3-a15e-9a2e2d2dd9ab
| security_groups                     | name='tempest-TestNfvOffload-1980766414'    

This is the security group in use; ICMP (ping) is allowed by an ingress rule.
(overcloud) [stack@undercloud-0 ~]$ openstack security group show tempest-TestNfvOffload-1980766414
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                                                                                                                                            |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at      | 2023-02-07T16:16:53Z                                                                                                                                                                                             |
| description     | tempest-TestNfvOffload-1980766414 description                                                                                                                                                                    |
| id              | 84aa88a3-0ae3-4e7a-8e6f-ac3c93ec54df                                                                                                                                                                             |
| location        | cloud='', project.domain_id=, project.domain_name=, project.id='03a3a582add64c69b26379498b59ba4f', project.name=, region_name='regionOne', zone=                                                                 |
| name            | tempest-TestNfvOffload-1980766414                                                                                                                                                                                |
| project_id      | 03a3a582add64c69b26379498b59ba4f                                                                                                                                                                                 |
| revision_number | 3                                                                                                                                                                                                                |
| rules           | created_at='2023-02-07T16:16:53Z', direction='egress', ethertype='IPv6', id='2a090d46-beaf-4271-bb71-144dabfba17b', updated_at='2023-02-07T16:16:53Z'                                                            |
|                 | created_at='2023-02-07T16:16:53Z', direction='ingress', ethertype='IPv4', id='5eb7e98f-ab5f-45cf-9591-c1c4b23e4ac0', protocol='icmp', updated_at='2023-02-07T16:16:53Z'                                          |
|                 | created_at='2023-02-07T16:16:53Z', direction='ingress', ethertype='IPv4', id='9c8bdc21-13d1-4338-b5d6-840f133c44cb', port_range_max='22', port_range_min='22', protocol='tcp', updated_at='2023-02-07T16:16:53Z' |
|                 | created_at='2023-02-07T16:16:53Z', direction='egress', ethertype='IPv4', id='d2878bc2-e27f-4be4-a567-b11846d7800b', updated_at='2023-02-07T16:16:53Z'                                                            |
| tags            | []                                                                                                                                                                                                               |
| updated_at      | 2023-02-07T16:16:53Z                                                                                                                                                                                             |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

We can see that the Geneve (parent) port has the security group attached and port security enabled:
(overcloud) [stack@undercloud-0 ~]$ openstack port show 85a5b5e4-947c-4ea8-91d6-c75726dd8640 | grep "security"
| port_security_enabled   | True                                                                                                                                             |
| security_group_ids      | 84aa88a3-0ae3-4e7a-8e6f-ac3c93ec54df  
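
As a diagnostic aid, the ACLs that the OVN driver derives from this security group can be inspected in the OVN northbound database. A minimal sketch, assuming ovn-nbctl can reach the NB DB (e.g. from the OVN DB container on a controller) and that the port group follows the usual networking-ovn naming of pg_<security group UUID with dashes replaced by underscores>:

# Sketch: show the port group and the ACLs generated from the security group rules
ovn-nbctl list Port_Group pg_84aa88a3_0ae3_4e7a_8e6f_ac3c93ec54df
ovn-nbctl acl-list pg_84aa88a3_0ae3_4e7a_8e6f_ac3c93ec54df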

I remove the security group and disable port security:
(overcloud) [stack@undercloud-0 ~]$ openstack port set --no-security-group    --disable-port-security 85a5b5e4-947c-4ea8-91d6-c75726dd8640
(overcloud) [stack@undercloud-0 ~]$ openstack port set --no-security-group    --disable-port-security b60b9fa0-5634-4f64-99df-491ffaaf7d99
(overcloud) [stack@undercloud-0 ~]$ openstack port show 85a5b5e4-947c-4ea8-91d6-c75726dd8640 | grep "security"
| port_security_enabled   | False                                                                                                                                            |
| security_group_ids      |      

Then ping starts working:
[cloud-user@tempest-testnfvoffload-server-847162446 ~]$ ping 60.60.220.102
PING 60.60.220.102 (60.60.220.102) 56(84) bytes of data.
64 bytes from 60.60.220.102: icmp_seq=1 ttl=64 time=25.1 ms
64 bytes from 60.60.220.102: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 60.60.220.102: icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from 60.60.220.102: icmp_seq=4 ttl=64 time=0.054 ms



Version-Release number of selected component (if applicable):
RHOS-16.2-RHEL-8-20221201.n.1

How reproducible:
1. Deploy an OVN hardware-offload setup with transparent VLAN enabled in the configuration (see the sketch after these steps). The configuration I used is with DVR, but I would say this is unrelated.
2. Deploy 2 VMs, each with a Geneve port and a transparent VLAN port.
3. Ping from one VM to the other over the transparent VLAN port.
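
A minimal sketch of the transparent-VLAN pieces, assuming the standard Neutron option and network attribute; the exact TripleO parameters used in this deployment are not shown here, the network name is taken from the server list above, and the subnet name is hypothetical:

# Sketch: enable VLAN transparency in neutron.conf on the controllers
#   [DEFAULT]
#   vlan_transparent = true
# then create the network with VLAN transparency and boot the VMs on it
openstack network create --transparent-vlan mellanox-geneve-provider
openstack subnet create --network mellanox-geneve-provider --subnet-range 20.20.220.0/24 mellanox-geneve-subnet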


Actual results:
Ping does not work over transparent VLAN ports even though the security group allows it.


Expected results:
Security groups should allow ping over the transparent VLAN port.


Additional info:

Comment 1 Miguel Angel Nieto 2023-02-08 09:17:22 UTC
I have tried adding the following rule, but it does not work either. I am not sure whether it is needed; for other kinds of ports it is not:
source overcloudrc;openstack security group rule create --protocol icmp --egress   c58867eb-ef72-4f39-ac4d-d3b58eec8468


I am attaching the NB and SB databases.

Comment 4 Miro Tomaska 2023-02-16 04:20:59 UTC
Hi Miguel,

Try this: enable port security and set your security groups on both VMs' Neutron ports.
Then, in addition, set --allowed-address [1] for each VM's Neutron port, i.e.:
openstack port set --allowed-address mac-address=<VM port mac>,ip-address=<VM vlan port CIDR> <port uuid>
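
For example, a minimal concrete sketch using the values from the ip a output above; the pairing of the port UUIDs (from the earlier openstack port set commands) with the VM MACs is an assumption, not confirmed by the output:

# Illustrative values only: VLAN subnet 60.60.220.0/24 and VM MACs from the guest output above
openstack port set --allowed-address mac-address=fa:16:3e:0a:4b:01,ip-address=60.60.220.0/24 85a5b5e4-947c-4ea8-91d6-c75726dd8640
openstack port set --allowed-address mac-address=fa:16:3e:d6:73:2e,ip-address=60.60.220.0/24 b60b9fa0-5634-4f64-99df-491ffaaf7d99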

Try pinging then. Let me know if that works.


[1]https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/networking_guide/config-allowed-address-pairs_rhosp-network#add-allow-addr-pairs_config-allowed-address-pairs

Comment 6 Miguel Angel Nieto 2023-03-09 09:38:05 UTC
I created this other bug for 17.1, where it is happening too:
https://bugzilla.redhat.com/show_bug.cgi?id=2176775

I tried --allowed-address (with 17.1), but it didn't work.

I will continue debugging in 17.1, as this should be supported there and not in 16.2. I am not sure whether I should keep this BZ open or close it.

Comment 7 Miguel Angel Nieto 2023-03-09 09:57:07 UTC
This will not be fixed in 16.2 but in 17.1, for which I opened a different BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2176775