Bug 1843935

Summary: [RFE] Neutron QinQ transparent VLAN support for vBNG NFV usecases
Product: Red Hat OpenStack
Reporter: Sadique Puthen <sputhenp>
Component: python-networking-ovn
Assignee: Assaf Muller <amuller>
Status: CLOSED DUPLICATE
QA Contact: Eran Kuris <ekuris>
Severity: high
Priority: unspecified
Version: 16.2 (Train)
CC: amuller, apevec, atragler, chrisw, dhill, egarver, fbaudin, gregsmith, ihrachys, jlibosva, lhh, majopela, nchandek, nwolf, scohen, sputhenp, srevivo, tfreger
Target Milestone: beta
Keywords: FutureFeature
Hardware: Unspecified
OS: Unspecified
Doc Type: Enhancement
Clone Of: 1465393
Last Closed: 2021-02-10 10:23:31 UTC
Bug Depends On: 1465393, 1846018, 1846019, 1898082

Comment 1 gregsmith 2020-06-17 14:43:26 UTC
Glad to see people working on this! Please let me know if there are any questions or you need more info. The basic test would be to send a DHCP Discover from outside the OpenStack deployment and have it seen on a physical port of a compute node. Make sure that Discover broadcast comes in on a QinQ (aka nested VLANs) VLAN and make sure that QinQ VLAN is configured on the compute node. Then check that a guest VM on the node receives the Discover with the QinQ VLAN intact. Hope that makes sense. Reach out any time with questions or comments. Greg S
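
[Editor's sketch of the test above: a QinQ-tagged DHCP Discover can be crafted with scapy. The interface name, VLAN IDs, and MAC address below are hypothetical placeholders, not values from this bug.]

    #!/usr/bin/env python3
    # Emit a DHCP Discover inside a QinQ frame (802.1ad outer tag,
    # 802.1Q inner tag) toward the compute node's physical port.
    from scapy.all import Ether, Dot1Q, IP, UDP, BOOTP, DHCP, sendp

    IFACE = "eth0"                     # port facing the compute node (placeholder)
    OUTER_VLAN = 100                   # service/outer tag (placeholder)
    INNER_VLAN = 200                   # customer/inner tag (placeholder)
    CLIENT_MAC = "52:54:00:12:34:56"   # placeholder guest MAC

    frame = (
        Ether(src=CLIENT_MAC, dst="ff:ff:ff:ff:ff:ff", type=0x88a8)  # 802.1ad outer
        / Dot1Q(vlan=OUTER_VLAN, type=0x8100)                        # 802.1Q inner follows
        / Dot1Q(vlan=INNER_VLAN)
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(CLIENT_MAC.replace(":", "")))
        / DHCP(options=[("message-type", "discover"), "end"])
    )

    sendp(frame, iface=IFACE)
    # Pass criterion per comment 1: a capture inside the guest shows the
    # Discover with both VLAN tags still present.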

Comment 4 Anita Tragler 2020-07-08 19:03:43 UTC
Need for the transparent VLAN feature:
- VLAN-aware VMs do not work in this use case, since the cloud provider (OpenStack admin) is not aware of the VLAN IDs that the VM application needs or uses.
- SR-IOV VFs do not work when there is a large number of VLAN IDs, since the approach is limited by the number of VFs.
- PCI passthrough of the PF in VF-trusted mode is an alternative, but it ties up the entire NIC; a separate NIC port is needed for each VM application.

The ML2/OVS plugin uses VLAN tags to identify destination VMs, which makes it difficult to deliver transparent VLANs to a VM when the tags are unknown in advance.

The ML2/OVN plugin instead uses OVN logical metadata to identify the destination VM: a combination of the network ID and the logical source and destination port IDs. This makes OVN a better candidate for adding transparent VLAN support.
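
[Editor's note: on the Neutron API side, this capability surfaces through the vlan-transparent extension. A minimal sketch with openstacksdk, assuming that extension is enabled on the deployment; the cloud and network names are placeholders.]

    import openstack

    # Placeholder clouds.yaml entry; not taken from this bug.
    conn = openstack.connect(cloud="overcloud")

    # Request a network that passes guest VLAN tags through untouched.
    # openstacksdk exposes Neutron's vlan_transparent flag as is_vlan_transparent.
    net = conn.network.create_network(
        name="vbng-transparent-net",  # placeholder name
        is_vlan_transparent=True,
    )
    print(net.id, net.is_vlan_transparent)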

Comment 5 Anita Tragler 2020-07-08 19:09:29 UTC
(In reply to gregsmith from comment #1)
> Glad to see people working on this! Please let me know if there are any
> questions or you need more info. The basic test would be to send a DHCP
> Discover from outside the OpenStack deployment and have it seen on a
> physical port of a compute node. Make sure that Discover broadcast comes
> in on a QinQ (aka nested VLANs) VLAN and make sure that QinQ VLAN is
> configured on the compute node. Then check that a guest VM on the node
> receives the Discover with the QinQ VLAN intact. Hope that makes sense.
> Reach out any time with questions or comments. Greg S

Thanks, Greg, for the input; I'm sure our engineering/QE team will verify this.
Does Juniper still need this support for vBNG?

Comment 6 gregsmith 2020-07-08 19:59:04 UTC
Hi Anita, yes, this is still relevant for Juniper vBNG. Thanks for following up! Greg S

Comment 9 Jakub Libosvar 2021-02-10 10:23:31 UTC

*** This bug has been marked as a duplicate of bug 1846019 ***