
Bug 1299352

Summary: OVS-DPDK - ERROR neutron.agent.common.ovs_lib when a NIC is bound using a DPDK-compatible driver
Product: Red Hat OpenStack
Component: openstack-neutron
Version: 8.0 (Liberty)
Target Release: 8.0 (Liberty)
Target Milestone: ga
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Status: CLOSED NOTABUG
Type: Bug
Doc Type: Bug Fix
Last Closed: 2016-01-26 07:23:33 UTC
Reporter: Eran Kuris <ekuris>
Assignee: Terry Wilson <twilson>
QA Contact: Ofer Blaut <oblaut>
CC: amuller, chrisw, ekuris, nyechiel, yeylon
Bug Blocks: 1266070

Description Eran Kuris 2016-01-18 08:01:56 UTC
Description of problem:
On a distributed VLAN setup (1 controller and 1 compute), when I configure OVS-DPDK and bind a NIC using a DPDK-compatible driver, neutron-openvswitch-agent starts with errors:
2016-01-10 12:19:05.102 4062 ERROR neutron.agent.common.ovs_lib [req-0af2d26b-1865-450d-bd7f-1acdebc4d106 - - - - -] Unable to execute ['ovs-ofctl', 'add-flows', 'br-vlan', '-']. Exception:
Command: ['ovs-ofctl', 'add-flows', 'br-vlan', '-']
Exit code: 1
Stdin: hard_timeout=0,idle_timeout=0,priority=0,table=0,cookie=0,actions=normal
Stdout:
Stderr: ovs-ofctl: br-vlan is not a bridge or a socket

2016-01-10 12:19:15.121 4062 ERROR neutron.agent.ovsdb.impl_vsctl [req-0af2d26b-1865-450d-bd7f-1acdebc4d106 - - - - -] Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--may-exist', 'add-port', 'br-int', 'int-br-vlan', '--', 'set', 'Interface', 'int-br-vlan', 'type=patch', 'options:peer=nonexistent-peer'].
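
The "is not a bridge or a socket" error usually means ovs-vswitchd is not running or is not managing br-vlan. A minimal diagnostic sketch, assuming only the standard OVS command-line tools on the node (the bridge name br-vlan is taken from the log above; the service name may differ for the DPDK build of Open vSwitch):

Check that ovs-vswitchd is running:
# systemctl status openvswitch
List the bridges ovsdb knows about and confirm br-vlan is among them:
# ovs-vsctl list-br
For OVS-DPDK the bridge must use the userspace datapath:
# ovs-vsctl get Bridge br-vlan datapath_type
(expected output: netdev)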



Version-Release number of selected component (if applicable):
# rpm -qa |grep neutron
openstack-neutron-common-7.0.1-2.el7ost.noarch
openstack-neutron-7.0.1-2.el7ost.noarch
python-neutronclient-3.1.0-1.el7ost.noarch
python-neutron-7.0.1-2.el7ost.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
[root@puma10 ~]# rpm -qa |grep dpdk
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
dpdk-2.1.0-5.el7.x86_64
dpdk-tools-2.1.0-5.el7.x86_64
[root@puma10 ~]# rpm -qa |grep openvswitch
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
python-openvswitch-2.4.0-1.el7.noarch
openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
[root@puma10 ~]# rpm -qa |grep packstack
openstack-packstack-puppet-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch
openstack-packstack-7.0.0-0.8.dev1661.gaf13b7e.el7ost.noarch

How reproducible:
always


Steps to Reproduce:
1. Install with Packstack: 1 controller and 1 compute, with VLAN as the tenant network type.
2. Apply the post-Packstack OVS-DPDK configuration from https://wiki.test.redhat.com/jhsiao/osp-dpdk/steps-after-packstack-config (a rough sketch of those steps follows this list).
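
The wiki page above is internal, so the following is only a rough sketch of the kind of post-Packstack OVS-DPDK configuration that step refers to, based on the upstream OVS 2.4 DPDK install notes; the PCI address 0000:04:00.0 and the port name dpdk0 are placeholders:

Bind the data NIC to a DPDK-compatible driver (the bind script is named dpdk_nic_bind.py in DPDK 2.1):
# modprobe vfio-pci
# dpdk_nic_bind.py --bind=vfio-pci 0000:04:00.0
Recreate the data bridge with the userspace datapath and attach the DPDK port:
# ovs-vsctl add-br br-vlan -- set Bridge br-vlan datapath_type=netdev
# ovs-vsctl add-port br-vlan dpdk0 -- set Interface dpdk0 type=dpdk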

Actual results:
neutron-openvswitch-agent starts and logs the ovs-ofctl / ovs-vsctl errors shown in the description.

Expected results:
The agent starts and programs the bridges without errors.

Additional info:

Comment 2 Assaf Muller 2016-01-18 13:34:15 UTC
Assigned to Terry for root cause analysis.

Comment 3 Terry Wilson 2016-01-25 21:36:19 UTC
From an email thread about this issue:

1) ovs-vswitchd wasn't running. Trying to start it manually resulted
in: http://pastebin.test.redhat.com/341785
2) If you switch from vfio-pci to uio_pci_generic, it starts up fine
(a rebind sketch follows this list).
3) The NIC that is bound to DPDK is a 1Gb NIC, not one of the 10Gb
NICs in the machine. 1Gb NIC support in general is pretty spotty and
not something we would support. The 10Gb NICs in the system are Emulex
OneConnect NICs, and I'm not sure there are poll mode drivers for
them. Someone else on the list might know? If not, it would be good
to get you some supported NICs.
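
A hedged sketch of the driver rebind described in point 2, assuming the DPDK 2.1 bind script name (dpdk_nic_bind.py) and a placeholder PCI address of 0000:04:00.0:

Check the current driver bindings:
# dpdk_nic_bind.py --status
Load uio_pci_generic and move the NIC from vfio-pci to it:
# modprobe uio_pci_generic
# dpdk_nic_bind.py --bind=uio_pci_generic 0000:04:00.0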

A separate bz (https://bugzilla.redhat.com/show_bug.cgi?id=1300378) was opened for the case where vfio-pci wasn't working with this NIC, but the test machines now have new, supported 10Gb NICs, so I think this issue can be closed now.

Comment 4 Eran Kuris 2016-01-26 07:23:33 UTC
Yes, you can close it. Now that the setup uses 10Gb NICs, this bug is no longer relevant. We now have other issues with booting VMs ...