Bug 1878026 - [ice] Add ice dpdk interface to openvswitch does not work
Summary: [ice] Add ice dpdk interface to openvswitch does not work
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: RHEL 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Fouad Hallal
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-11 05:58 UTC by Hekai Wang
Modified: 2020-09-23 10:30 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-23 10:30:15 UTC
Target Upstream Version:
Embargoed:


Attachments
ovs vswitchd log (1.99 MB, text/plain)
2020-09-11 05:58 UTC, Hekai Wang

Description Hekai Wang 2020-09-11 05:58:58 UTC
Created attachment 1714519 [details]
ovs vswitchd log

Description of problem:
After binding the ice (E810-C) interfaces to vfio-pci, adding them to an Open vSwitch bridge as DPDK ports fails.

Version-Release number of selected component (if applicable):
[root@wsfd-advnetlab10 vsperf]# uname -a
Linux wsfd-advnetlab10.anl.lab.eng.bos.redhat.com 4.18.0-234.el8.x86_64 #1 SMP Thu Aug 20 10:25:32 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@wsfd-advnetlab10 vsperf]# modinfo vfio-pci
filename:       /lib/modules/4.18.0-234.el8.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz
description:    VFIO PCI - User Level meta-driver
author:         Alex Williamson <alex.williamson>
license:        GPL v2
version:        0.2
rhelversion:    8.3
srcversion:     94B5D9EA52B93C802B23AE1
depends:        vfio,irqbypass,vfio_virqfd
intree:         Y
name:           vfio_pci
vermagic:       4.18.0-234.el8.x86_64 SMP mod_unload modversions 
sig_id:         PKCS#7
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        3C:0A:6C:47:A4:66:53:A8:51:A1:C1:37:77:A1:21:8D:A8:5D:B3:33
sig_hashalgo:   sha256
signature:      A1:4A:38:93:95:CD:04:35:7D:C2:95:03:0A:F4:3D:57:46:A5:BC:23:
		51:14:70:DB:E0:A3:96:FB:90:4A:C0:DA:8A:CF:B9:E4:A5:E2:8C:9C:
		08:3E:10:F0:F6:85:16:1F:17:36:CC:83:5B:61:F0:8D:BF:7F:8A:EB:
		DB:90:8A:28:9C:B0:6B:1E:A2:01:A0:DE:4D:12:B1:A3:C4:EC:A9:E9:
		34:8B:D8:B5:42:03:CC:0A:07:81:FE:17:C9:1C:11:FE:0E:BD:75:6F:
		BB:AA:87:B1:A0:E5:1A:AA:20:01:4F:F3:B8:B4:BC:70:B7:E9:FA:3C:
		8F:4A:7B:5B:68:4B:C3:80:1E:66:66:7F:98:C5:B4:22:EC:CD:13:75:
		FF:64:48:68:F3:36:59:C3:C1:62:3A:4C:D4:D5:86:13:C0:85:96:58:
		3F:37:BF:95:C9:F1:B4:1C:02:D6:1D:9F:5C:6A:43:00:43:9F:BF:D9:
		7A:28:CF:AE:5F:A6:9B:E2:3C:48:A4:23:6E:55:C0:71:78:A3:60:00:
		2C:21:4A:4C:E4:01:43:C6:2D:D8:28:50:60:D8:D1:26:F6:7C:2C:A7:
		55:88:55:B3:E1:0F:C5:E2:5D:C9:FB:97:06:6F:2F:B5:14:2C:B4:0F:
		9A:C9:AD:5F:B8:39:23:D7:A5:AF:A0:E3:D4:20:DD:29:1A:36:2A:3C:
		87:27:3A:37:7B:84:56:49:17:89:0B:62:A1:87:2D:A3:7A:03:D6:4B:
		33:98:24:A5:D4:6E:CB:F6:F0:4C:50:36:F2:13:AE:00:41:F7:92:88:
		F3:F9:69:99:CB:2A:59:F5:05:3A:5B:1F:8B:A0:48:F4:05:54:01:97:
		DB:AD:20:87:BC:85:EB:55:B5:00:43:5F:12:1E:35:C3:D4:BA:E2:F3:
		C8:16:B6:E8:D9:89:E7:61:EE:B3:63:E1:2B:E8:4E:BB:3D:FD:95:93:
		EC:0D:F0:E1:CF:C8:CE:08:64:B0:28:FC:F4:78:F3:CF:37:3C:79:B5:
		DF:2D:68:85
parm:           ids:Initial PCI IDs to add to the vfio driver, format is "vendor:device[:subvendor[:subdevice[:class[:class_mask]]]]" and multiple comma separated entries can be specified (string)
parm:           nointxmask:Disable support for PCI 2.3 style INTx masking.  If this resolves problems for specific devices, report lspci -vvvxxx to linux-pci.org so the device can be fixed automatically via the broken_intx_masking flag. (bool)
parm:           disable_idle_d3:Disable using the PCI D3 low power state for idle, unused devices (bool)


[root@wsfd-advnetlab10 vsperf]# ethtool -i ens1f1
driver: ice
version: 0.8.2-k
firmware-version: 1.40 0x80003ab8 1.2735.0
expansion-rom-version: 
bus-info: 0000:3b:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
[root@wsfd-advnetlab10 vsperf]# lspci | grep Eth
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
19:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
19:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
3b:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
5e:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
5e:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]



[root@wsfd-advnetlab10 vsperf]# rpm -qa | grep dpdk
dpdk-tools-19.11-4.el8fdb.1.x86_64
dpdk-19.11-4.el8fdb.1.x86_64

In addition, I have already tested the other DPDK versions listed below; neither of them works.

[root@wsfd-advnetlab10 vsperf]# rpm -qa | grep dpdk
dpdk-19.11-5.el8_2.x86_64
dpdk-tools-19.11-5.el8_2.x86_64

[root@wsfd-advnetlab10 vsperf]# rpm -qa | grep dpdk
dpdk-19.11-3.el8.x86_64
dpdk-tools-19.11-3.el8.x86_64

##################################################################

[root@wsfd-advnetlab10 vsperf]# rpm -qa | grep openvswitch
openvswitch2.13-2.13.0-57.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-23.el8fdp.noarch
[root@wsfd-advnetlab10 vsperf]# 


[root@wsfd-advnetlab10 vsperf]# lspci -s 3b:00.0 -vv
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
	Subsystem: Intel Corporation Ethernet Network Adapter E810-C-Q2
	Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 112
	NUMA node: 0
	Region 0: Memory at ae000000 (64-bit, prefetchable) [size=32M]
	Region 3: Memory at b2010000 (64-bit, prefetchable) [size=64K]
	Expansion ROM at ab000000 [disabled] [size=1M]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D3 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] MSI-X: Enable- Count=1024 Masked-
		Vector table: BAR=3 offset=00000000
		PBA: BAR=3 offset=00008000
	Capabilities: [a0] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr- NonFatalErr+ FatalErr+ UnsupReq+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (downgraded), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range AB, TimeoutDis+, NROPrPrP-, LTR-
			 10BitTagComp+, 10BitTagReq-, OBFF Not Supported, ExtFmt+, EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-, TPHComp-, ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled
			 AtomicOpsCtl: ReqEn-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
			 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
	Capabilities: [e0] Vital Product Data
		Product Name: Intel(r) Ethernet Network Adapter E810-CQDA2
		Read-only fields:
			[V1] Vendor specific: Intel(r) Ethernet Network Adapter E810-CQDA2
			[PN] Part number: K91258-004
			[SN] Serial number: 40A6B718FCD8
			[V2] Vendor specific: 2020
			[RV] Reserved: checksum good, 0 byte(s) reserved
		End
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES- TLP+ FCP+ CmpltTO+ CmpltAbrt+ UnxCmplt- RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn+ ECRCChkCap+ ECRCChkEn+
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [150 v1] Device Serial Number d8-fc-18-ff-ff-b7-a6-40
	Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number: 000
		IOVCtl:	Enable- Migration- Interrupt- MSE- ARIHierarchy+
		IOVSta:	Migration-
		Initial VFs: 128, Total VFs: 128, Number of VFs: 0, Function Dependency Link: 00
		VF offset: 8, stride: 1, Device ID: 1889
		Supported Page Size: 00000553, System Page Size: 00000001
		Region 0: Memory at 00000000b1000000 (64-bit, prefetchable)
		Region 3: Memory at 00000000b2220000 (64-bit, prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [1a0 v1] Transaction Processing Hints
		Device specific mode supported
		No steering table available
	Capabilities: [1b0 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [1d0 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn-, PerformEqu-
		LaneErrStat: 0
	Capabilities: [200 v1] Data Link Feature <?>
	Capabilities: [210 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [250 v1] Lane Margining at the Receiver <?>
	Kernel driver in use: vfio-pci
	Kernel modules: ice

[root@wsfd-advnetlab10 vsperf]# 




How reproducible:
Always

Steps to Reproduce:

modprobe -r vfio-pci
modprobe -r vfio
modprobe vfio-pci
modprobe vfio
/usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:3b:00.0
/usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:3b:00.1
/usr/share/dpdk/usertools/dpdk-devbind.py --status
        
modprobe openvswitch
systemctl stop openvswitch
sleep 3
systemctl start openvswitch
sleep 3
ovs-vsctl --if-exists del-br ovsbr0
sleep 5
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100000100000
ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-iommu-support=true
systemctl restart openvswitch
sleep 3
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:3b:00.0 mtu_request=64
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:3b:00.1 mtu_request=64
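
For reference, a minimal sketch of how the failure can be inspected right after the add-port calls above, assuming the default ovs-vswitchd log location on RHEL (/var/log/openvswitch/ovs-vswitchd.log) and the dpdk0/dpdk1 interface names used above:

# Check the error string OVS records for a port that failed to attach
ovs-vsctl get Interface dpdk0 error
ovs-vsctl get Interface dpdk1 error

# Look for the DPDK probe messages for the E810-C ports in the vswitchd log
# (the full log is attached to this bug)
grep -i -e dpdk -e '3b:00' /var/log/openvswitch/ovs-vswitchd.log | tail -n 50

# Confirm hugepages are actually available for dpdk-socket-mem=4096,4096
grep -i hugepages /proc/meminfo

# Note: pmd-cpu-mask=0x100000100000 sets bits 20 and 44, i.e. the PMD
# threads are pinned to CPUs 20 and 44.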


Actual results:
The ovs-vsctl add-port commands for dpdk0/dpdk1 fail and the DPDK ports are not added to the bridge (see the attached ovs-vswitchd log).

Expected results:
Both E810-C ports are added to ovsbr0 as DPDK ports.

Additional info:

Comment 2 Christian Trautman 2020-09-15 12:20:46 UTC
Hekai,

Production Open vSwitch and DPDK kits will not work with the ice driver.

Please use the beta Open vSwitch kit Timothy provided and retest.

Thanks.

http://download-node-02.eng.bos.redhat.com/brewroot/packages/openvswitch2.13/2.13.0/41.el8fdb/

Comment 4 Zhiqiang Fang 2020-09-16 03:18:12 UTC
Tested with openvswitch2.13-2.13.0-41.el8fdb; the issue is not seen.

# rpm -qa | grep openv
openvswitch2.13-2.13.0-41.el8fdb.x86_64
openvswitch-selinux-extra-policy-1.0-23.el8fdp.noarch

Logs:

:: [ 22:29:04 ] :: [  BEGIN   ] :: Running 'ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=class=eth,mac=40:a6:b7:18:fc:d8 mtu_request=64'
:: [ 22:29:05 ] :: [   PASS   ] :: Command 'ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=class=eth,mac=40:a6:b7:18:fc:d8 mtu_request=64' (Expected 0, got 0)
:: [ 22:29:05 ] :: [  BEGIN   ] :: Running 'ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=class=eth,mac=40:a6:b7:18:fc:d9 mtu_request=64'
:: [ 22:29:05 ] :: [   PASS   ] :: Command 'ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=class=eth,mac=40:a6:b7:18:fc:d9 mtu_request=64' (Expected 0, got 0)


#ovs-vsctl show
340a5fa2-f9d1-4c74-bb49-4841ba275388
    Bridge ovsbr0
        datapath_type: netdev
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="class=eth,mac=40:a6:b7:18:fc:d9"}
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost1"}
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="class=eth,mac=40:a6:b7:18:fc:d8"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost0"}
        Port ovsbr0
            Interface ovsbr0
                type: internal
    ovs_version: "2.13.0"


So OVS version 2.13.0-41.el8fdb works as expected.
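
For comparison, the dpdk-devargs form also differs between the failing run in the original report (probe by PCI address) and this passing run (probe by class=eth plus MAC); the MAC 40:a6:b7:18:fc:d8 matches the device serial number shown in the lspci output for 0000:3b:00.0, so both runs target the same E810-C adapter. Since the OVS build also differs (57.el8fdp vs 41.el8fdb), the devargs form alone is not necessarily the deciding factor:

# Original report (failed): devargs by PCI address
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:3b:00.0 mtu_request=64

# This test (passed): devargs by device class and MAC address
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=class=eth,mac=40:a6:b7:18:fc:d8 mtu_request=64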

