Bug 1577998 - [RFE] Support for VLAN aware instances on a SR-IOV VF
Summary: [RFE] Support for VLAN aware instances on a SR-IOV VF
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: OSP Team
QA Contact: Toni Freger
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-14 15:04 UTC by Andreas Karis
Modified: 2023-09-26 03:29 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-26 03:29:37 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1693240 0 None None None 2020-07-10 13:24:20 UTC
OpenStack gerrit 665467 0 None NEW Add SR-IOV ML2 driver support for VLAN trunking. 2021-02-04 22:48:41 UTC
Red Hat Issue Tracker OSP-2899 0 None None None 2021-12-10 16:15:05 UTC
Red Hat Knowledge Base (Solution) 3031501 0 None None None 2020-07-10 13:25:48 UTC
Red Hat Knowledge Base (Solution) 5216881 0 None None None 2020-07-10 13:26:57 UTC

Description Andreas Karis 2018-05-14 15:04:50 UTC
Description of problem:
Currently, Openstack does not support VLAN aware instances on a SR-IOV VF.  We'd like to have this feature supported in a future OpenStack release.

Judging from this upstream bug, the feature is not implemented upstream yet: https://bugs.launchpad.net/neutron/+bug/1693240

See https://access.redhat.com/solutions/3031501 and https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/networking_guide/#sec-trunk-vlan - neither makes explicit mention of SR-IOV.

Comment 3 Assaf Muller 2018-05-21 13:49:04 UTC
I apologize, but the linked Launchpad RFE and this RHBZ RFE are not actionable. Please add an extensive description of what is being requested here. What is the user trying to do, and why? What problem is the user trying to solve? How would the user like to use the proposed VLAN-aware-VM API for SR-IOV?

Comment 4 Marcos Garcia 2018-09-04 14:33:33 UTC
All of the customer's comments are in the case and were also attached as a PDF in the portal. I'll relay their comments here:

---- SUMMARY -----------

>> Also, in the meantime, why do you need Neutron VLAN trunking?? Can't you just have a regular SRIOV port with VLAN as the underlying encapsulation, then add multiple NICs in the VM with vlan tags inside?

Using regular VLANs increases the number of interfaces on the virtual machine, which increases the complexity of the VNF, and the OS has a limit of 12 interfaces, which doesn't meet the needs of the service. Additionally, the application is designed to mark the 802.1p bits in the Ethernet frames for QoS. The application would need to be redesigned to mark the IP CoS bits, and we'd need to change the QoS configuration of our physical switches to support this as well. Hence the appeal of a trunk port in the absence of flat networks. We have made the decision to enable flat network support in our PODs, but restricted to only the dedicated SR-IOV ports.
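The 802.1p marking described above can be sketched on the guest side. A minimal dry-run sketch, assuming a Linux guest; the names (eth0, VLAN 5, PCP 5) are illustrative, not from the case. It prints the commands rather than running them:

```shell
# Dry-run sketch (illustrative names): on a Linux guest, the 802.1p PCP bits
# can be set per VLAN device via egress-qos-map, which maps skb priorities to
# PCP values, so the application itself would not need a redesign.
emit_vlan_qos_cmds() {
  local parent=$1 vid=$2 pcp=$3
  # Map skb priority 0 (the default) to the requested PCP value on egress.
  echo "ip link add link $parent name $parent.$vid type vlan id $vid egress-qos-map 0:$pcp"
  echo "ip link set $parent.$vid up"
}
emit_vlan_qos_cmds eth0 5 5
```

To apply, pipe the output to sh inside the guest instead of echoing.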

 

--- TIMELINE (older to newer) ----
--------------------------------------------------
I actually updated the subject to better reflect the ask. Newton introduced trunk ports on OVS ports. I would like to use a trunk port on an SR-IOV VF. Is that supported? From what I understand, it is not, but I'd like confirmation since I haven't seen it explicitly spelled out. To be clear, I do not need QinQ on SR-IOV; I wanted to create a trunk port so that the VM can send VLAN-tagged traffic that SR-IOV will simply pass along to my physical switch port that is configured as a trunk port. This would allow VMs to set the 802.1p bits (QoS) in the VLAN tag for our Layer 2 switch to honor for our QoS policy.

--------------
Flat networks are exactly what this VNF provider would like to use, but it is not something we have enabled on our PODs. I was exploring alternate solutions that would provide the same result: allow the VNF to add the VLAN header and mark the 802.1p bits so that any network switches that cannot see L3 (TOS/DSCP) QoS markings will honor the L2 QoS markings.

 

I'm going to test creating a trunk port on a VF and see how that works out. I'm also looking into adding support for flat networks on only our SR-IOV ports, but that is a longer-term solution.

 --------------
I tested and have not been able to get this to work. OpenStack lets me configure it, but the packets are not getting through. I'm still looking deeper into this, but I'm curious whether anyone at RH has actually validated that this is supported.
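One way to narrow down where the packets are being dropped (an assumption on my part, not something validated in the case): capture on the PF on the compute node and check whether the VM's tagged frames reach the wire at all. A dry-run sketch with illustrative names (ens1f0, VLANs 5-7):

```shell
# Dry-run sketch: print capture commands for each trunk VLAN on the PF.
# tcpdump's -e flag shows the link-level header, so the 802.1Q tag and its
# PCP bits are visible in the capture.
emit_capture_cmds() {
  local pf=$1; shift
  for vid in "$@"; do
    echo "tcpdump -e -nn -i $pf vlan $vid -c 10"
  done
  # The PF may also carry per-VF VLAN settings that strip or drop guest tags:
  echo "ip link show $pf"
}
emit_capture_cmds ens1f0 5 6 7
```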
------------------

(VLAN Filtering) It is not even an option:

[root@srbhoncihv51 ~]# ethtool -d ens1f0 | grep VLAN

[root@srbhoncihv51 ~]# ethtool -d ens1f0
Offset          Values
------          ------
0x0000:         02 00 00 00 3e 00 00 00 00 00 00 00 00 00 00 00
0x0010:         0c 00 00 00 0c 00 00 00 00 00 00 00 00 00 00 00
0x0020:         ff 07 00 00 00 00 00 00 01 08 ff 47 01 00 00 48
0x0030:         00 00 f9 64

 

The other thing I noted is that the VM is configured using virtio, which I don't believe should be the case with SR-IOV:

[heat-admin@srbhoncihv51 ~]$ sudo virsh domiflist instance-00000014
Interface       Type     Source          Model    MAC
-------------------------------------------------------
tapc7277fc8-98  bridge   qbrc7277fc8-98  virtio   fa:16:3e:4b:29:25
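The virtio model above suggests the port was bound as a normal software-bridge port rather than a VF. A quick way to check is to look for a hostdev-type interface in the libvirt domain XML; the XML sample below is made up to illustrate what a VF passthrough attachment looks like, it is not from the case:

```shell
# With true SR-IOV passthrough, "virsh dumpxml <instance>" contains an
# <interface type='hostdev'> element instead of a tap/bridge device with a
# virtio model. The sample XML here is illustrative only.
sample_xml="<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x2'/>
  </source>
</interface>"

has_hostdev_interface() {
  # Succeeds if the XML contains a hostdev-type interface (i.e. a VF).
  printf '%s\n' "$1" | grep -q "interface type='hostdev'"
}

if has_hostdev_interface "$sample_xml"; then
  echo "VF (hostdev) attachment"
else
  echo "software-bridged attachment"
fi
```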

 

 

I followed the following Red Hat Doc:  https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/bridge-mappings#sec-trunk-vlan

 

And here are the steps I did:

1.            Create a network and subnet

This creates a VLAN 5 network and subnet 55.55.55.0/24:

$ openstack network create --provider-network-type vlan --provider-physical-network sriov_ens1f0 --provider-segment 5 cory_5
$ openstack subnet create --network cory_5 --subnet-range 55.55.55.0/24 cory_subnet_5

 

2.            Create a parent port and attach it to the previously created network

(Does this mean this is now the native VLAN used for untagged traffic, or just the first VLAN in the trunk, so that the guest must create a VLAN-tagged interface?)

$ openstack port create --network cory_5 cory_parent_trunk_port
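One possible gap in the step above (my assumption, not confirmed in the case): for SR-IOV, the Neutron port needs --vnic-type direct. The default vnic type is "normal", which binds through the software bridge and would explain the virtio/tap device seen in virsh domiflist. A dry-run sketch reusing the names from these steps:

```shell
# Dry-run sketch: create the parent port with vnic_type=direct so Neutron
# schedules it onto an SR-IOV VF, then show the binding fields to verify.
# Field names assume a recent python-openstackclient.
emit_direct_port_cmds() {
  local net=$1 port=$2
  echo "openstack port create --network $net --vnic-type direct $port"
  echo "openstack port show $port -c binding_vnic_type -c binding_vif_type"
}
emit_direct_port_cmds cory_5 cory_parent_trunk_port
```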

 

3.            Create a trunk and attach the parent port that was just created

$ openstack network trunk create --parent-port cory_parent_trunk_port cory_parent_trunk

 

4.            Create more networks and a subnet for each VLAN to be added to the trunk

This example shows the creation of 2 more networks: VLAN 6 and VLAN 7, with subnets 66.66.66.0/24 and 77.77.77.0/24.

$ openstack network create --provider-network-type vlan --provider-physical-network sriov_ens1f0 --provider-segment 6 cory_6
$ openstack subnet create --network cory_6 --subnet-range 66.66.66.0/24 cory_subnet_6
$ openstack network create --provider-network-type vlan --provider-physical-network sriov_ens1f0 --provider-segment 7 cory_7
$ openstack subnet create --network cory_7 --subnet-range 77.77.77.0/24 cory_subnet_7

 

5.            Create the subports to be added to the parent trunk, using the MAC address of the parent port

$ openstack port show cory_parent_trunk_port | grep mac_address
| mac_address           | fa:16:3e:4b:29:25
$ openstack port create --network cory_6 --mac-address fa:16:3e:4b:29:25 cory_subport_trunk_port6
$ openstack port create --network cory_7 --mac-address fa:16:3e:4b:29:25 cory_subport_trunk_port7

 

6.            Associate the ports with the trunk and specify the VLAN ID of each

$ openstack network trunk set --subport port=cory_subport_trunk_port6,segmentation-type=vlan,segmentation-id=6 cory_parent_trunk
$ openstack network trunk set --subport port=cory_subport_trunk_port7,segmentation-type=vlan,segmentation-id=7 cory_parent_trunk

 

7.            Spin up a VM using the port ID of the parent port listed in the output of "openstack network trunk show <trunk_name>"

Look at the network trunk and find its parent port ID:

$ openstack network trunk list
$ openstack network trunk show cory_parent_trunk

Using that port ID, spin up the VM:

$ openstack server create --image RHEL-7.5 --flavor cory --nic port-id=c7277fc8-98b0-4759-88b7-72dee6d32360 cory_test_sriov_trunking
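Under the upstream (OVS) trunk model, the parent port's network is delivered untagged to the guest, and each subport VLAN must be tagged inside the guest with the subport's segmentation ID; whether the SR-IOV datapath behaves the same is exactly the open question in this RFE. A dry-run sketch of the guest-side interfaces for the setup above (eth0 is an illustrative guest NIC name):

```shell
# Dry-run sketch: untagged traffic on the guest NIC belongs to the parent
# network (cory_5); tagged subinterfaces carry the subport VLANs 6 and 7.
emit_guest_vlan_cmds() {
  local nic=$1; shift
  echo "# untagged traffic on $nic -> parent network"
  for vid in "$@"; do
    echo "ip link add link $nic name $nic.$vid type vlan id $vid"
    echo "ip link set $nic.$vid up"
  done
}
emit_guest_vlan_cmds eth0 6 7
```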

 

8.            Configure the physical switch port facing the SR-IOV interface of the VM

Determine which compute node the VM is running on:

[stack@srbhsvdevdir ~]$ openstack server show 57a7d3e1-851d-4d0a-9312-2797770cc8cd
+--------------------------------------+----------------------------------------------------------+
| Field                                | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | vepdg-internet                                           |
| OS-EXT-SRV-ATTR:host                 | srbhoncihv51.localdomain                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | srbhoncihv51.localdomain                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000014                                        |
| OS-EXT-STS:power_state               | Running                                                  |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2018-05-10T15:07:42.000000                               |
| OS-SRV-USG:terminated_at             | None                                                     |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| addresses                            | cory_5=55.55.55.8                                        |
| config_drive                         |                                                          |
| created                              | 2018-05-10T15:06:57Z                                     |
| flavor                               | cory (c93e50ad-249f-42a2-8046-ce2b7fea1f73)              |
| hostId                               | 6f7d1705bd918384885fad0421fd3ade385f7d4e0dd442d283585030 |
| id                                   | 57a7d3e1-851d-4d0a-9312-2797770cc8cd                     |
| image                                | Centos7 (9d5c24e6-50e8-444f-9d94-f74ac32ab25f)           |
| key_name                             | None                                                     |
| name                                 | cory_test_sriov_trunking                                 |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| project_id                           | a71cd390a12c47ac96fd9caa66aa6893                         |
| properties                           |                                                          |
| security_groups                      | [{u'name': u'default'}]                                  |
| status                               | ACTIVE                                                   |
| updated                              | 2018-05-10T15:08:46Z                                     |
| user_id                              | 31bf2b5ad83f4fd2b4f72d1b7a6ca2ba                         |
+--------------------------------------+----------------------------------------------------------+

 

I added these VLANs to the physical switch, set up the port as an 802.1q trunk, and permitted these 3 VLANs. I then created an L3 IRB interface on the switch and attempted to ping, which failed.

 

It would be nice to know whether someone at RH has tested this to confirm it works or not, because I can't seem to make it work.

