Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1891290

Summary: Allowed address pairs don't work with DVR when instances are placed on different compute nodes and are in different subnets
Product: Red Hat OpenStack Reporter: Anas <andeshmu>
Component: openstack-neutron Assignee: Assaf Muller <amuller>
Status: CLOSED DUPLICATE QA Contact: Eran Kuris <ekuris>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 13.0 (Queens) CC: amuller, bcafarel, chrisw, scohen
Target Milestone: ---   
Target Release: ---   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-11-02 15:01:15 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Anas 2020-10-25 09:02:44 UTC
Description of problem:

When adding an allowed address pair in a DVR setup, an issue was observed when the instances are placed on different compute nodes. The minimal reproduction requires three instances: two on the same subnet and one on a different subnet, with all ports connected to the same router.

Test topology:

VM1 192.168.0.3/29 ----> connected to a router with gateway 192.168.0.1
VM2 and VM3: 192.168.0.22 and 192.168.0.21 (/29) ----> connected to a router with gateway 192.168.0.17

VIP/AAP IP 192.168.0.19

Steps to reproduce:
1. Create two networks with one subnet each and connect them to a router.
2. Spawn three instances on three different (DVR-enabled) compute nodes: VM2 and VM3 on the same subnet, and VM1 on the other subnet.
3. Create an allowed address pair (192.168.0.19, in the same subnet as VM2 and VM3) on the ports of VM2 and VM3.
4. Connect all the ports to the same router.
5. Start a ping from VM1, in the other subnet, to the virtual IP (192.168.0.19). The ping to the VIP fails, even though the fixed IPs of VM2 and VM3 in the same subnet remain reachable.
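The steps above can be sketched with the OpenStack CLI. The subnet ranges, gateways, and VIP come from the report; the resource names, image/flavor names, availability-zone targets, and port IDs are assumptions for illustration:

```shell
# Two networks, one /29 subnet each (addresses from the report)
openstack network create net1
openstack subnet create sub1 --network net1 \
    --subnet-range 192.168.0.0/29 --gateway 192.168.0.1
openstack network create net2
openstack subnet create sub2 --network net2 \
    --subnet-range 192.168.0.16/29 --gateway 192.168.0.17

# One distributed router attached to both subnets
openstack router create r1 --distributed
openstack router add subnet r1 sub1
openstack router add subnet r1 sub2

# VM1 on net1; VM2 and VM3 on net2, each on a different compute node
# (image, flavor, and host names are assumptions)
openstack server create vm1 --image cirros --flavor m1.tiny \
    --network net1 --availability-zone nova:compute-0
openstack server create vm2 --image cirros --flavor m1.tiny \
    --network net2 --availability-zone nova:compute-1
openstack server create vm3 --image cirros --flavor m1.tiny \
    --network net2 --availability-zone nova:compute-2

# Allow the VIP on the ports of VM2 and VM3
openstack port set <vm2-port-id> --allowed-address ip-address=192.168.0.19
openstack port set <vm3-port-id> --allowed-address ip-address=192.168.0.19

# From inside VM1:
#   ping 192.168.0.19   -> fails (cross-subnet VIP, DVR)
#   ping 192.168.0.21   -> succeeds (fixed IP)
```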


This scenario works when all the instances are on the same host; it also works when the AAP is associated with a floating IP.
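The floating-IP variant mentioned above would look roughly like this; the dedicated VIP port, the network names, and the external network name "public" are assumptions:

```shell
# Create a dedicated port that owns the VIP address, then attach a
# floating IP to it; traffic to the FIP then reaches the VIP.
openstack port create vip-port --network net2 \
    --fixed-ip ip-address=192.168.0.19
openstack floating ip create public --port vip-port
```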

Actual Results:

From VM1, the VIP (192.168.0.19), which is on a different subnet, is not reachable. The instances' fixed IPs are reachable.

Expected results:

From VM1, the VIP should be reachable.

Additional info:

Found a similar upstream bug:

https://bugs.launchpad.net/neutron/+bug/1773999