Bug 1299352 - OVS-DPDK - ERROR neutron.agent.common.ovs_lib when NIC is bound using a DPDK-compatible driver
Summary: OVS-DPDK - ERROR neutron.agent.common.ovs_lib when nic 's bind to using DPDK...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ga
Target Release: 8.0 (Liberty)
Assignee: Terry Wilson
QA Contact: Ofer Blaut
URL:
Whiteboard:
Depends On:
Blocks: 1266070
 
Reported: 2016-01-18 08:01 UTC by Eran Kuris
Modified: 2016-04-26 15:42 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-26 07:23:33 UTC
Target Upstream Version:


Attachments

Description Eran Kuris 2016-01-18 08:01:56 UTC
Description of problem:
On a VLAN distributed setup (1 controller and 1 compute), when I set up OVS-DPDK and bind a NIC using a DPDK-compatible driver, the openvswitch-neutron-agent starts with this error:
2016-01-10 12:19:05.102 4062 ERROR neutron.agent.common.ovs_lib [req-0af2d26b-1865-450d-bd7f-1acdebc4d106 - - - - -] Unable to execute ['ovs-ofctl', 'add-flows', 'br-vlan', '-']. Exception:
Command: ['ovs-ofctl', 'add-flows', 'br-vlan', '-']
Exit code: 1
Stdin: hard_timeout=0,idle_timeout=0,priority=0,table=0,cookie=0,actions=normal
Stdout:
Stderr: ovs-ofctl: br-vlan is not a bridge or a socket

2016-01-10 12:19:15.121 4062 ERROR neutron.agent.ovsdb.impl_vsctl [req-0af2d26b-1865-450d-bd7f-1acdebc4d106 - - - - -] Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--may-exist', 'add-port', 'br-int', 'int-br-vlan', '--', 'set', 'Interface', 'int-br-vlan', 'type=patch', 'options:peer=nonexistent-peer'].
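The "br-vlan is not a bridge or a socket" error usually means ovs-vswitchd is not running, or the bridge was never registered with it. A minimal diagnostic sketch (assuming br-vlan is meant to be a DPDK-capable bridge, i.e. datapath_type=netdev):

```shell
# Is ovs-vswitchd actually up? (ovs-ofctl talks to it, not to ovsdb)
systemctl status openvswitch

# Does the bridge exist in the OVSDB at all?
ovs-vsctl list-br

# For OVS-DPDK the bridge must use the userspace datapath
ovs-vsctl get Bridge br-vlan datapath_type   # expected: netdev
```

If `list-br` shows br-vlan but ovs-ofctl still fails, the bridge exists in the database but ovs-vswitchd has not instantiated it, which points back to the vswitchd startup failure described in comment 3.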



Version-Release number of selected component (if applicable):
# rpm -qa |grep neutron
openstack-neutron-common-7.0.1-2.el7ost.noarch
openstack-neutron-7.0.1-2.el7ost.noarch
python-neutronclient-3.1.0-1.el7ost.noarch
python-neutron-7.0.1-2.el7ost.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
[root@puma10 ~]# rpm -qa |grep dpdk
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
dpdk-2.1.0-5.el7.x86_64
dpdk-tools-2.1.0-5.el7.x86_64
[root@puma10 ~]# rpm -qa |grep openvswitch
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
python-openvswitch-2.4.0-1.el7.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
[root@puma10 ~]# rpm -qa |grep packstack
openstack-packstack-puppet-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch
openstack-packstack-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch

How reproducible:
always


Steps to Reproduce:
1. Install with Packstack: 1 controller, 1 compute, with VLAN as the tenant network type
2. https://wiki.test.redhat.com/jhsiao/osp-dpdk/steps-after-packstack-config
3.
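The internal wiki page above is the authoritative list of post-Packstack steps; as a hedged sketch, the OVS 2.4-era DPDK configuration typically amounts to (port name dpdk0 and bridge br-vlan are assumptions from this report):

```shell
# Switch the physical bridge to the userspace (DPDK) datapath
ovs-vsctl set Bridge br-vlan datapath_type=netdev

# Attach the DPDK-bound NIC as a dpdk-type port
ovs-vsctl add-port br-vlan dpdk0 -- set Interface dpdk0 type=dpdk
```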

Actual results:


Expected results:


Additional info:

Comment 2 Assaf Muller 2016-01-18 13:34:15 UTC
Assigned to Terry for root cause analysis.

Comment 3 Terry Wilson 2016-01-25 21:36:19 UTC
From an email thread about this issue:

1) ovs-vswitchd wasn't running. Trying to start it manually resulted
in: http://pastebin.test.redhat.com/341785
2) If you switch from vfio-pci to uio_pci_generic, it starts up fine
3) The nic that is bound to dpdk is a 1Gb nic, not one of the 10Gb
nics in the machine. 1Gb nic support in general is pretty spotty and
not something we would support. The 10Gb nics in the system are Emulex
OneConnect nics, which I'm not sure there are poll mode drivers for.
Someone else on the list might know? If not, it would be good to get
you some supported nics.
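Switching the bind from vfio-pci to uio_pci_generic (step 2 above) can be sketched as follows. This assumes the bind script shipped in the dpdk-tools 2.1 package (its name varies across DPDK releases: dpdk_nic_bind.py vs. the later dpdk-devbind.py), and 0000:04:00.0 is a hypothetical PCI address; substitute the real one from `--status`:

```shell
# Load the alternative userspace I/O driver
modprobe uio_pci_generic

# Show current driver bindings and find the NIC's PCI address
dpdk_nic_bind.py --status

# Rebind the NIC from vfio-pci to uio_pci_generic
dpdk_nic_bind.py --bind=uio_pci_generic 0000:04:00.0
```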

A separate bz (https://bugzilla.redhat.com/show_bug.cgi?id=1300378) was opened for the case where vfio-pci wasn't working for this NIC, but I think the test machines now have new supported 10Gb NICs, so this issue can be closed now.

Comment 4 Eran Kuris 2016-01-26 07:23:33 UTC
Yes, you can close it. Now that the setup uses 10Gb NICs, this bug is no longer relevant. We now have other issues with booting VMs ...

