Bug 1843935 - [RFE] Neutron QinQ transparent VLAN support for vBNG NFV usecases
Summary: [RFE] Neutron QinQ transparent VLAN support for vBNG NFV usecases
Keywords:
Status: CLOSED DUPLICATE of bug 1846019
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: beta
Target Release: ---
Assignee: Assaf Muller
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On: 1465393 1846018 1846019 1898082
Blocks:
 
Reported: 2020-06-04 13:01 UTC by Sadique Puthen
Modified: 2024-06-13 22:41 UTC
CC List: 18 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1465393
Environment:
Last Closed: 2021-02-10 10:23:31 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-7128 (last updated 2022-08-10 15:06:48 UTC)

Comment 1 gregsmith 2020-06-17 14:43:26 UTC
Glad to see people working on this! Please let me know if there are any questions or you need more info. The basic test would be to send a DHCP discover from outside of OpenStack and have it seen on a physical port of a compute node. Make sure that discover broadcast comes in on a QinQ (aka nested VLANs) VLAN, and make sure that QinQ VLAN is configured on the compute node. Then check that a guest VM on the node receives the discover with the QinQ VLAN intact. Hope that makes sense. Reach out any time with questions or comments. Greg S
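A minimal sketch of the double-tagged DHCP discover described above, built with scapy; the interface name, VLAN IDs, and client MAC are placeholder assumptions, not values taken from this bug:

# Sketch only: craft a QinQ (double-tagged) DHCP discover so it can be
# observed on the compute node physical port and, ideally, inside the guest
# VM with both VLAN tags intact. Interface, VLAN IDs, and MAC are placeholders.
# Sending raw frames requires root privileges.
from scapy.all import Ether, Dot1Q, IP, UDP, BOOTP, DHCP, sendp

IFACE = "eth1"                     # port facing the OpenStack provider network
OUTER_VLAN = 100                   # service tag (S-VLAN)
INNER_VLAN = 200                   # customer tag (C-VLAN)
CLIENT_MAC = "52:54:00:12:34:56"

discover = (
    Ether(src=CLIENT_MAC, dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=OUTER_VLAN)       # outer tag (strict 802.1ad would use EtherType 0x88a8)
    / Dot1Q(vlan=INNER_VLAN)       # inner tag
    / IP(src="0.0.0.0", dst="255.255.255.255")
    / UDP(sport=68, dport=67)
    / BOOTP(chaddr=bytes.fromhex(CLIENT_MAC.replace(":", "")), xid=0x1234)
    / DHCP(options=[("message-type", "discover"), "end"])
)

sendp(discover, iface=IFACE, verbose=False)

A packet capture on the compute node physical port, and then inside the guest VM, should show the discover arriving with both 802.1Q headers still present.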

Comment 4 Anita Tragler 2020-07-08 19:03:43 UTC
Need for the transparent VLAN feature:
- VLAN-aware VMs do not work in this use case, since the cloud provider (OpenStack admin) is not aware of the VLAN IDs that the VM application needs or uses.
- SR-IOV VFs do not work when there is a large number of VLAN IDs, since they are limited to the number of VFs.
- PCI passthrough with the PF in VF-trusted mode is an alternative, but this ties up the whole NIC; a separate NIC port is needed for each VM application.

The ML2/OVS plugin uses VLAN tags to identify destination VMs, which makes it difficult to support transparent VLANs directly to a VM when the tags are unknown.

The ML2/OVN plugin identifies the destination VM using OVN logical metadata, which is a combination of the network ID and the logical source and destination port IDs. This makes OVN a better candidate for adding transparent VLAN support.
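Neutron's API already has a "vlan-transparent" extension (network attribute vlan_transparent); a minimal sketch of requesting such a network through openstacksdk is below, assuming the extension is enabled and using placeholder cloud, network, and CIDR values:

# Sketch only: create a VLAN-transparent network, the kind of network this
# RFE needs ML2/OVN to honour end to end. Cloud entry, names, and CIDR are
# illustrative placeholders; the request is rejected where the
# 'vlan-transparent' API extension is not enabled.
import openstack

conn = openstack.connect(cloud="mycloud")          # assumed clouds.yaml entry

network = conn.network.create_network(
    name="vbng-transparent-net",
    is_vlan_transparent=True,                      # Neutron 'vlan_transparent' attribute
)

subnet = conn.network.create_subnet(
    name="vbng-transparent-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="192.0.2.0/24",
    is_dhcp_enabled=False,                         # DHCP comes from the external BNG side
)

print(network.id, network.is_vlan_transparent)

Guest ports on such a network are expected to see customer VLAN tags pass through untouched, which is what the vBNG DHCP test described in comment 1 exercises.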

Comment 5 Anita Tragler 2020-07-08 19:09:29 UTC
(In reply to gregsmith from comment #1)
> Glad to see people working on this! Please let me know if there are any
> questions or you need more info. The basic test would be to send a DHCP
> discover from outside of OpenStack and have it seen on a physical port of a
> compute node. Make sure that discover broadcast comes in on a QinQ (aka
> nested VLANs) VLAN, and make sure that QinQ VLAN is configured on the compute
> node. Then check that a guest VM on the node receives the discover with the
> QinQ VLAN intact. Hope that makes sense. Reach out any time with questions
> or comments. Greg S

Thanks, Greg, for the input; I'm sure our engineering/QE team will verify this.
Does Juniper still need this support for vBNG?

Comment 6 gregsmith 2020-07-08 19:59:04 UTC
Hi Anita, yes, this is still relevant for Juniper vBNG. Thanks for following up! Greg S

Comment 9 Jakub Libosvar 2021-02-10 10:23:31 UTC

*** This bug has been marked as a duplicate of bug 1846019 ***

