Bug 1583541 - [SRIOV] No Connectivity between SR-IOV instance and non-SRIOV instance on different networks
Summary: [SRIOV] No Connectivity between SR-IOV instance and non-SRIOV instance on different networks
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: opendaylight
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 13.0 (Queens)
Assignee: Victor Pickard
QA Contact: Itzik Brown
URL:
Whiteboard: SRIOV
Depends On:
Blocks: 1528947
 
Reported: 2018-05-29 08:25 UTC by Itzik Brown
Modified: 2018-10-25 05:24 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
SRIOV based Compute instances have no connectivity to OVS Compute instances if they are on different networks. The workaround is to use an external router that is connected to both VLAN provider networks.
Clone Of:
Environment:
N/A
Last Closed: 2018-06-18 18:25:27 UTC
Target Upstream Version:


Attachments
controller sosreport (49.83 KB, text/plain)
2018-05-29 08:26 UTC, Itzik Brown
compute sosreport (27.25 KB, text/plain)
2018-05-29 08:27 UTC, Itzik Brown
ODL debug logs with annotations (193.25 KB, text/plain)
2018-06-05 13:52 UTC, Victor Pickard

Description Itzik Brown 2018-05-29 08:25:12 UTC
Description of problem:
The scenario:
Two VLAN networks.
The two networks are connected through a router.
One instance with direct port (SR-IOV) on one network.
Second instance with OVS port on the second network.
There is connectivity to the SR-IOV instance from the DHCP namespace, but not from/to the other instance.

Version-Release number of selected component (if applicable):
opendaylight-8.0.0-10.el7ost.noarch

How reproducible:


Steps to Reproduce:
1. As described in the scenario above.

Actual results:


Expected results:


Additional info:

Comment 1 Itzik Brown 2018-05-29 08:26:13 UTC
Created attachment 1445237 [details]
controller sosreport

Comment 2 Itzik Brown 2018-05-29 08:27:13 UTC
Created attachment 1445239 [details]
compute sosreport

Comment 5 Victor Pickard 2018-06-02 12:33:07 UTC
The SRIOV VM cannot ping the default gateway, which is the router interface, because the ARP entry for the router i/f is never resolved.

Even after adding a static ARP entry, ping to the gateway still failed.

The summary is that at least 2 flows are missing in the ODL controller for the SRIOV VM port:
  1. An entry in the ARP Responder table (81).
  2. An entry in the lport dispatcher table (17) to send the pkt to the L3 GW MAC table (19).

I have details of the analysis below.

Itzik,
Has this test case worked in the past?

I'll have to continue debugging/investigating to figure out how to fix this. Not aware of a workaround at this point. So, will update the doc flags accordingly.


overcloud) [stack@panther17 ~]$ openstack subnet list

+--------------------------------------+---------+--------------------------------------+------------------+
| ID                                   | Name    | Network                              | Subnet           |
+--------------------------------------+---------+--------------------------------------+------------------+
| 74061fd6-11c7-48ce-961e-d5b81810db2e |         | 01248f63-c3fc-4e62-9170-1dcd39d85148 | 192.168.101.0/24 |
| e3aa5292-4001-4301-b994-1f981acaa32e | subnet1 | 4fff4041-9e75-4537-9736-f1905624562d | 192.168.99.0/24  |
| fbb5060c-da46-4cd6-9c64-820ba02b5d99 |         | a9090e01-486d-4e4b-81c1-083449b348ac | 10.0.0.0/24      |
+--------------------------------------+---------+--------------------------------------+------------------+



[root@vmsriov1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.101.1   0.0.0.0         UG    100    0        0 eth0
169.254.169.254 192.168.101.2   255.255.255.255 UGH   100    0        0 eth0
192.168.101.0   0.0.0.0         255.255.255.0   U     100    0        0 eth0


[root@vmsriov1 ~]# ip n ls
192.168.101.1 dev eth0  INCOMPLETE
192.168.101.2 dev eth0 lladdr fa:16:3e:d6:ac:e1 STALE



(overcloud) [stack@panther17 ~]$ openstack port list
+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------+--------+
| ID                                   | Name       | MAC Address       | Fixed IP Addresses                                                           | Status |
+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------+--------+
| 168a3bf1-17c2-4f8e-820d-f5a6c5933be7 |            | fa:16:3e:46:44:27 | ip_address='192.168.99.5', subnet_id='e3aa5292-4001-4301-b994-1f981acaa32e'  | ACTIVE |
| 19c72f43-9ee4-4270-8905-5c70bbf69ff1 |            | fa:16:3e:6e:8f:9c | ip_address='192.168.99.7', subnet_id='e3aa5292-4001-4301-b994-1f981acaa32e'  | ACTIVE |
| 4550cbc2-b72f-449b-8933-580ebe347c4f |            | fa:16:3e:77:6f:15 | ip_address='192.168.101.1', subnet_id='74061fd6-11c7-48ce-961e-d5b81810db2e' | DOWN   |

I looked on the Controller node, and did not see the SRIOV VM port in the ARP Responder table (81). So, no ARP response is ever sent from the Controller.

Table 0 hit on controller when pinging the router i/f from the sriov vm... lport=18 (0x12)
==========================================================
cookie=0x8000000, duration=2477.427s, table=0, n_packets=851, n_bytes=77770, priority=10,in_port=2,dl_vlan=592 actions=pop_vlan,write_metadata:0x120000000001/0xffffff0000000001,goto_table:17


table 17
========
cookie=0x8040000, duration=2507.980s, table=17, n_packets=874, n_bytes=79242, priority=10,metadata=0x120000000000/0xffffff0000000000 actions=load:0x12->NXM_NX_REG1[0..19],load:0x138c->NXM_NX_REG7[0..15],write_metadata:0xa00012138c000000/0xfffffffffffffffe,goto_table:43

table 43
========
cookie=0x822002d, duration=13733.745s, table=43, n_packets=223, n_bytes=14030, priority=100,arp,arp_op=1 actions=group:5000

group 5000
==========
group_id=5000,type=all,bucket=actions=CONTROLLER:65535,bucket=actions=resubmit(,48),bucket=actions=resubmit(,81)
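For readers unfamiliar with OpenFlow group semantics: type=all means every bucket receives its own copy of the packet, so an ARP request hitting group 5000 is simultaneously punted to the controller, resubmitted to table 48, and resubmitted to table 81 (the ARP responder). A minimal sketch of that fan-out in plain Python (the function and labels are invented for illustration, this is not OVS code):

```python
# Sketch of OpenFlow "type=all" group semantics: each bucket gets
# its own copy of the packet and applies its own actions.
def apply_all_group(pkt, buckets):
    # dict(pkt) hands every bucket an independent copy, as the switch does
    return [bucket(dict(pkt)) for bucket in buckets]

# Buckets modeled after group 5000 above (labels are illustrative):
buckets = [
    lambda p: ("CONTROLLER", p),  # bucket=actions=CONTROLLER:65535 (punt)
    lambda p: ("table_48", p),    # bucket=actions=resubmit(,48)
    lambda p: ("table_81", p),    # bucket=actions=resubmit(,81) -> ARP responder
]

copies = apply_all_group({"arp_op": 1}, buckets)  # three independent copies
```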

table 81 hit, pkt is dropped
==============================
cookie=0x8220000, duration=13837.961s, table=81, n_packets=294, n_bytes=18574, priority=0 actions=drop


table 81 full
=============
cookie=0xc8ca640d, duration=3380.079s, table=81, n_packets=0, n_bytes=0, priority=100,arp,metadata=0xd138b000000/0xfffffffff000000,arp_tpa=192.168.99.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:3f:11:26->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e3f1126->NXM_NX_ARP_SHA[],load:0xc0a86301->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0xd00->NXM_NX_REG6[],resubmit(,220)

 cookie=0xc8ca6611, duration=2667.651s, table=81, n_packets=0, n_bytes=0, priority=100,arp,metadata=0x11138c000000/0xfffffffff000000,arp_tpa=192.168.101.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:77:6f:15->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e776f15->NXM_NX_ARP_SHA[],load:0xc0a86501->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0x1100->NXM_NX_REG6[],resubmit(,220)

 cookie=0x8220000, duration=13878.369s, table=81, n_packets=324, n_bytes=20494, priority=0 actions=drop
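The two priority=100 flows above are ODL's ARP responder entries: they rewrite the ARP request in place into a reply for the subnet gateway. A rough Python illustration of those move/set_field/load actions (the function and dict layout are invented for illustration, this is not ODL code):

```python
def arp_respond(req, gw_mac, gw_ip):
    """Rewrite an ARP request into a gateway ARP reply, mirroring the
    table-81 responder actions above (field names are illustrative)."""
    return {
        "eth_dst": req["eth_src"],   # move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[]
        "eth_src": gw_mac,           # set_field:<gw mac>->eth_src
        "arp_op": 2,                 # load:0x2->NXM_OF_ARP_OP[] (reply)
        "arp_tha": req["arp_sha"],   # move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[]
        "arp_tpa": req["arp_spa"],   # move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[]
        "arp_sha": gw_mac,           # load:<gw mac>->NXM_NX_ARP_SHA[]
        "arp_spa": gw_ip,            # load:0xc0a86501->NXM_OF_ARP_SPA[] (192.168.101.1)
    }
```

No responder entry matches the SRIOV VM port's metadata, so its ARP requests fall through to the priority=0 drop flow.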



I added a static arp entry on the sriov vm. 

[root@vmsriov1 ~]# ip n ls
192.168.101.1 dev eth0 lladdr fa:16:3e:77:6f:15 PERMANENT
192.168.101.2 dev eth0 lladdr fa:16:3e:d6:ac:e1 STALE
[root@vmsriov1 ~]# ping 192.168.101.1



Now, I can see ICMP arrive at the controller, but it is not going to table 19. There seems to be a missing flow in table 17 that would send the ICMP pkt to the L3 GW MAC Table (19).


[heat-admin@controller-0 ~]$ sudo ovs-ofctl dump-flows br-int -OOpenflow13 |grep table=17
 cookie=0x8000001, duration=150952.905s, table=17, n_packets=1264, n_bytes=80924, priority=10,metadata=0x80000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000080000030d40/0xfffffffffffffffe,goto_table:19
 cookie=0x8040000, duration=150952.905s, table=17, n_packets=1264, n_bytes=80924, priority=10,metadata=0x9000080000000000/0xffffff0000000000 actions=load:0x8->NXM_NX_REG1[0..19],load:0x138a->NXM_NX_REG7[0..15],write_metadata:0xa00008138a000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8040000, duration=150938.400s, table=17, n_packets=3602, n_bytes=328778, priority=10,metadata=0xe0000000000/0xffffff0000000000 actions=load:0xe->NXM_NX_REG1[0..19],load:0x138b->NXM_NX_REG7[0..15],write_metadata:0xa0000e138b000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8040000, duration=150938.027s, table=17, n_packets=3501, n_bytes=307153, priority=10,metadata=0x90000d0000000000/0xffffff0000000000 actions=load:0xd->NXM_NX_REG1[0..19],load:0x138b->NXM_NX_REG7[0..15],write_metadata:0xa0000d138b000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8000001, duration=150938.027s, table=17, n_packets=3501, n_bytes=307153, priority=10,metadata=0xd0000000000/0xffffff0000000000 actions=load:0x186a3->NXM_NX_REG3[0..24],write_metadata:0x90000d0000030d46/0xfffffffffffffffe,goto_table:19
 cookie=0x8040000, duration=150239.023s, table=17, n_packets=73348, n_bytes=7449548, priority=10,metadata=0x120000000000/0xffffff0000000000 actions=load:0x12->NXM_NX_REG1[0..19],load:0x138c->NXM_NX_REG7[0..15],write_metadata:0xa00012138c000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8040000, duration=150225.610s, table=17, n_packets=2031, n_bytes=181090, priority=10,metadata=0x9000110000000000/0xffffff0000000000 actions=load:0x11->NXM_NX_REG1[0..19],load:0x138c->NXM_NX_REG7[0..15],write_metadata:0xa00011138c000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8000001, duration=150225.610s, table=17, n_packets=2031, n_bytes=181090, priority=10,metadata=0x110000000000/0xffffff0000000000 actions=load:0x186a3->NXM_NX_REG3[0..24],write_metadata:0x9000110000030d46/0xfffffffffffffffe,goto_table:19
 cookie=0x8000000, duration=161436.332s, table=17, n_packets=0, n_bytes=0, priority=0,metadata=0x8000000000000000/0xf000000000000000 actions=write_metadata:0x9000000000000000/0xf000000000000000,goto_table:80


sriov vm port
=============
 cookie=0x8040000, duration=150239.023s, table=17, n_packets=73348, n_bytes=7449548, priority=10,metadata=0x120000000000/0xffffff0000000000 actions=load:0x12->NXM_NX_REG1[0..19],load:0x138c->NXM_NX_REG7[0..15],write_metadata:0xa00012138c000000/0xfffffffffffffffe,goto_table:43

There should be a flow something like this, but for lport 18 (0x12)
============================================================
 cookie=0x8000001, duration=150952.905s, table=17, n_packets=1264, n_bytes=80924, priority=10,metadata=0x80000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000080000030d40/0xfffffffffffffffe,goto_table:19
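For reference, the lport tag these table-17 flows match on is carried in the upper bits of the OpenFlow metadata; assuming the genius layout where the 20-bit lport tag occupies bits 40-59, it can be decoded like this (illustrative helper, not ODL code):

```python
LPORT_SHIFT = 40      # assumed position of the lport tag in OF metadata
LPORT_MASK = 0xFFFFF  # 20-bit lport tag

def lport_from_metadata(metadata):
    """Extract the lport tag from an OpenFlow metadata value."""
    return (metadata >> LPORT_SHIFT) & LPORT_MASK

# SRIOV VM flow above:  metadata=0x120000000000 -> lport 18 (0x12)
# Working OVS VM flow:  metadata=0x170000000000 -> lport 23 (0x17)
print(lport_from_metadata(0x120000000000), lport_from_metadata(0x170000000000))
```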

Comment 6 Victor Pickard 2018-06-05 13:51:47 UTC
I've analyzed the ODL code a bit more, and also received feedback from the netvirt community (thanks Daya!) confirming my observations.

SRIOV uses direct ports, meaning that the VM attached to the SR-IOV enabled interface is not connected to OVS.

For normal OVS ports, ODL creates an interface (vpnInterface) to represent the port. When that interface goes operationally up, the interface (VM port) is added to the router-interfaces-map, ArpResponder rules are programmed, and so on.


ODL does not create interfaces (vpnInterfaces) and associated bindings for SRIOV VMs, as the design requires openflow and OVS connectivity, which is not available for direct ports.  You can see the check for port type in NeutronPortChangeListener::handleNeutronPortCreated(), here:

https://github.com/opendaylight/netvirt/blob/master/neutronvpn/impl/src/main/java/org/opendaylight/netvirt/neutronvpn/NeutronPortChangeListener.java#L565
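The net effect of that check can be mirrored in a few lines; this is an illustrative Python rendering of the gate, not the netvirt Java code, and the names below are invented for illustration:

```python
VNIC_TYPE_NORMAL = "normal"  # OVS-attached port
VNIC_TYPE_DIRECT = "direct"  # SR-IOV passthrough port

def gets_odl_interface(vnic_type):
    """Only 'normal' (OVS) ports get a vpnInterface and its associated
    service bindings (ARP responder entry, lport dispatcher flow, ...)."""
    return vnic_type == VNIC_TYPE_NORMAL

# A 'direct' SR-IOV port is skipped entirely, so tables 17/81 are never
# populated for it -- the missing-flow symptom analyzed in comment 5.
```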

The recommended solution for this configuration is to use L2 Gateway (with hwvtep) to connect the SRIOV VMs to neutron networks.

I'm including the email response from the netvirt dev mailing list:

"odl cannot provide binding for the sr-iov vm’s connected to a vlan, without openflow/ovs being available. This has to be done by the sriov mechanism driver (unless we are talking about the mellanox smartnic h/w and patches). It will not have any interfaces, or service bindings available to provide the service. Hence, u need a hwvtep for odl to learn about all these macs, and do the stitching between these macs leant on vlans and map them to neutron networks (multi-segment).

Thanks,
daya
"



For reference, here is some output for a normal OVS port (VM port), where the subnet has been added to the router. The lport for the VM for this output is 23 (0x17).


openstack router create ext-rtr
openstack router add subnet ext-rtr vx-subnet

openstack server create vmvx1 --config-drive=true  --user-data /opt/tools/centos.yaml --flavor centos-5G --image centos7-vic --nic net-id=$(neutron net-list | grep -w vx-net | awk '{print $2}') --availability-zone=nova:compute1 --key-name admin_key --security-group goPacketGo


[stack@compute1 devstack]$ ../tools/flows.sh |grep table=0
 cookie=0x8000001, duration=3743.784s, table=0, n_packets=1213, n_bytes=115221, priority=5,in_port=1 actions=write_metadata:0x10000000001/0xfffff0000000001,goto_table:36
 cookie=0x8000001, duration=3743.276s, table=0, n_packets=1178, n_bytes=112904, priority=5,in_port=2 actions=write_metadata:0x60000000001/0xfffff0000000001,goto_table:36
 cookie=0x8000000, duration=13.892s, table=0, n_packets=0, n_bytes=0, priority=4,in_port=10,vlan_tci=0x0000/0x1fff actions=write_metadata:0x170000000000/0xffffff0000000001,goto_table:17
[stack@compute1 devstack]$


[stack@compute1 devstack]$ ../tools/flows.sh |grep table=17
 cookie=0x6900000, duration=18.474s, table=17, n_packets=0, n_bytes=0, priority=10,metadata=0x170000000000/0xffffff0000000000 actions=write_metadata:0x9000170000000000/0xfffffffffffffffe,goto_table:210
 cookie=0x8000001, duration=17.613s, table=17, n_packets=0, n_bytes=0, priority=10,metadata=0x9000170000000000/0xffffff0000000000 actions=load:0x186a8->NXM_NX_REG3[0..24],write_metadata:0xa000170000030d50/0xfffffffffffffffe,goto_table:19
 cookie=0x8040000, duration=17.613s, table=17, n_packets=0, n_bytes=0, priority=10,metadata=0xa000170000000000/0xffffff0000000000 actions=load:0x17->NXM_NX_REG1[0..19],load:0x138a->NXM_NX_REG7[0..15],write_metadata:0xb00017138a000000/0xfffffffffffffffe,goto_table:43
 cookie=0x8000000, duration=3748.466s, table=17, n_packets=0, n_bytes=0, priority=0,metadata=0x9000000000000000/0xf000000000000000 actions=write_metadata:0xa000000000000000/0xf000000000000000,goto_table:80



[stack@compute1 devstack]$ ../tools/flows.sh |grep table=19
 cookie=0x8220015, duration=3750.371s, table=19, n_packets=23, n_bytes=966, priority=100,arp,arp_op=1 actions=resubmit(,17)
 cookie=0x8220016, duration=3750.370s, table=19, n_packets=2, n_bytes=84, priority=100,arp,arp_op=2 actions=resubmit(,17)
 cookie=0x8000009, duration=20.466s, table=19, n_packets=0, n_bytes=0, priority=20,metadata=0x30d50/0xfffffe,dl_dst=fa:16:3e:22:c0:2f actions=goto_table:21
 cookie=0x1080000, duration=3750.371s, table=19, n_packets=204, n_bytes=18486, priority=0 actions=resubmit(,17)
[stack@compute1 devstack]$


karaf@root()> elanInterface:show
ElanInstance/Tag                    ElanInterface/Tag         OpState         AdminState
----------------------------------------------------------------------------------------------
3ba90f64-a47d-4845-9256-b06cebdf9b61/5002 b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9/23 UP              ENABLED
3ba90f64-a47d-4845-9256-b06cebdf9b61/5002 d3f5267d-d14b-46e0-827c-716b1573c58d/21 UP              ENABLED

bash-4.2$ openstack port list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                         | Status |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9 |      | fa:16:3e:fd:31:40 | ip_address='10.100.5.10', subnet_id='239fa651-ef5d-462b-8d06-bb5c11375d19' | ACTIVE |



[stack@compute1 devstack]$ curl -s -u admin:admin -X GET http://${CIP}:8181/restconf/config/neutron:neutron/routers | python -m json.tool
{
    "routers": {
        "router": [
            {
                "admin-state-up": true,
                "distributed": false,
                "name": "ext-rtr",
                "project-id": "ba5cc0d910bd4a088f5d0c3ac818d3f2",
                "revision-number": 0,
                "status": "ACTIVE",
                "tenant-id": "ba5cc0d9-10bd-4a08-8f5d-0c3ac818d3f2",
                "uuid": "32f46447-1f81-4257-ba4c-fbfbc5050c59"
            }
        ]
    }
}

[stack@compute1 devstack]$ curl -s -u admin:admin -X GET http://${CIP}:8181/restconf/config/neutronvpn:router-interfaces-map | python -m json.tool
{
    "router-interfaces-map": {
        "router-interfaces": [
            {
                "interfaces": [
                    {
                        "interface-id": "d99a6d5b-d1ac-4a3f-a1f5-d5bb0a81e2c0"
                    },
                    {
                        "interface-id": "b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9"
                    },
                    {
                        "interface-id": "d3f5267d-d14b-46e0-827c-716b1573c58d"
                    }
                ],
                "router-id": "32f46447-1f81-4257-ba4c-fbfbc5050c59"
            }
        ]
    }
}

I've also attached debug log createVm.log, which has some analysis notes (for future reference...)

Port added to router interface
*******************************

2018-06-04T23:46:33,277 | TRACE | org.opendaylight.yang.gen.v1.urn.opendaylight.netvirt.neutronvpn.rev150602.router.interfaces.map.router.interfaces.Interfaces_AsyncDataTreeChangeListenerBase-DataTreeChangeHandler-0 | NatRouterInterfaceListener       | 376 - org.opendaylight.netvirt.natservice-impl - 0.7.0.SNAPSHOT | add : Add event - key: InterfacesKey{_interfaceId=b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9}, value: Interfaces{getInterfaceId=b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9, augmentations={}}



Here is where the ARP Responder flow is installed (addArpResponderFlow()). Note: none of this will work for direct ports, as it is currently done only for OVS ports.

2018-06-04T23:46:35,580 | TRACE | ForkJoinPool-1-worker-2 | ArpResponderHandler              | 386 - org.opendaylight.netvirt.vpnmanager-impl - 0.7.0.SNAPSHOT | Creating the ARP Responder flow for VPN Interface b026b4f6-6160-4b9f-bbc3-7a7c7d429ca9

Comment 7 Victor Pickard 2018-06-05 13:52:54 UTC
Created attachment 1447863 [details]
ODL debug logs with annotations

ODL debug logs with some notes.

Comment 8 Ariel Adam 2018-06-06 06:49:07 UTC
We need a formal approval from Nir not to fix this.

Comment 9 Ariel Adam 2018-06-11 05:44:10 UTC
Given that this is an RFE, moving it to OSP14 for now.

Comment 12 Itzik Brown 2018-06-18 14:06:59 UTC
Update:
If the VM with SR-IOV and the one without SR-IOV are on the same network, it works. If they are on different networks connected to a (Neutron) router, it doesn't.

