Bug 1522700 - [Extras] Update DPDK to 17.11
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: dpdk
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Timothy Redaelli
QA Contact: Jean-Tsung Hsiao
Keywords: Extras, Rebase
Duplicates: 1455140 1497384 1517210
Depends On: 1518884 1335825
Blocks: 1413149 1449793 1461182 1490967 1500889
Reported: 2017-12-06 04:45 EST by Timothy Redaelli
Modified: 2018-04-10 19:59 EDT
CC List: 12 users

Fixed In Version: dpdk-17.11-7.el7
Last Closed: 2018-04-10 19:59:23 EDT
Type: Bug


External Trackers:
Red Hat Product Errata RHEA-2018:1065 (last updated 2018-04-10 19:59 EDT)

Description Timothy Redaelli 2017-12-06 04:45:13 EST

    
Comment 7 Jean-Tsung Hsiao 2017-12-21 13:15:30 EST
The package has been tested and passed the following tests:

# PvP 64-byte zero-loss testing between testpmd and a Xena traffic generator --- the vhost-user guest runs DPDK 17.11-4.

# P2P 64-byte zero-loss testing between testpmd on the host and TRex --- the host runs DPDK 17.11-4.

# Netperf testing between namespaces on the test driver, using testpmd on the SUT as loopback.
Comment 8 Marcelo Ricardo Leitner 2018-02-16 10:58:54 EST
*** Bug 1455140 has been marked as a duplicate of this bug. ***
Comment 9 Marcelo Ricardo Leitner 2018-02-16 11:04:39 EST
*** Bug 1517210 has been marked as a duplicate of this bug. ***
Comment 10 Flavio Leitner 2018-02-16 11:07:08 EST
*** Bug 1497384 has been marked as a duplicate of this bug. ***
Comment 11 Christian Trautman 2018-02-28 20:38:47 EST
Tested http://download-node-02.eng.bos.redhat.com/brewroot/packages/dpdk/17.11/7.el7/x86_64/dpdk-17.11-7.el7.x86_64.rpm

Ran SR-IOV testing and standard PvP testing with OVS-DPDK 2.9 from FDP.

Ran testing with 16.11-2 versus 17.11 and found no degradation with Intel X520 cards at 64- or 1500-byte packet sizes.
Comment 13 Jean-Tsung Hsiao 2018-03-01 07:06:05 EST
The package has been tested and passed the following tests:

1. PvP 64-byte zero-loss testing between testpmd and Xena over ixgbe using 2Q/4PMD --- 9.86 Mpps.

2. PvP 64-byte zero-loss testing between testpmd and TRex over 40Gb i40e (XL710) using 2Q/4PMD --- 5.90 Mpps.

Related packages:

The host is using OVS 2.9.0-1 from FDP, and the guest is using DPDK 17.11-7.

Both host and guest are running under kernel-851.
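For reference, a testpmd command line of the shape used for these 2Q/4PMD runs can be sketched as below; the core list, socket memory, and PCI address are illustrative placeholders, not values taken from this report (DPDK 17.11 still spells the PCI whitelist option `-w`):

```shell
# Illustrative placeholders only: cores, socket memory, and PCI address
# are assumptions, not taken from the report. 2 RX/TX queues per port
# and 4 forwarding cores correspond to the "2Q/4PMD" setup above.
TESTPMD_CMD="testpmd -l 0,2,4,6,8 -n 4 --socket-mem 1024,0 \
-w 0000:03:00.0 -- -i --nb-cores=4 --rxq=2 --txq=2 --forward-mode=mac"
echo "$TESTPMD_CMD"
```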
Comment 15 Jean-Tsung Hsiao 2018-03-01 22:03:58 EST
(In reply to Jean-Tsung Hsiao from comment #13)
> The package has been tested and passed with the following tests:
> 
> 1. PvP 64 bytes zero loss testing between testpmd and Xena over ixgbe using
> 2Q/4PMD --- 9.86 Mpps.
For 1Q/2PMD, the rate is 5.02 Mpps.

> 
> 2. PvP 64 bytes zero loss testing between testpmd and Trex over 40Gb
> i40e(XL710) using 2Q/4PMD --- 5.90 Mpps.
> 

Also ran P2P 64-byte zero-loss testing between testpmd on the host and TRex over 40Gb XL710 --- 36.10 Mpps.


> Related packages:
> 
> Host is using OVS 2.9.0-1 fdP, and guest is using DPDK 17.11-7.
> 
> Both host and guest are running under kernel-851.
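As a sanity check on the Mpps figures above (my arithmetic, not part of the report): a 64-byte frame occupies 84 bytes on the wire once the 7-byte preamble, 1-byte start-of-frame delimiter, and 12-byte inter-frame gap are counted, so the theoretical zero-loss ceilings are 14.88 Mpps on 10GbE and 59.52 Mpps on 40GbE, making 36.10 Mpps roughly 61% of 40GbE line rate:

```shell
# Theoretical 64-byte zero-loss line rate in Mpps for a given link
# speed in Gb/s. Wire footprint per frame:
# 64B frame + 7B preamble + 1B SFD + 12B inter-frame gap = 84B.
line_rate_mpps() {
    awk -v gbps="$1" 'BEGIN { printf "%.2f\n", gbps * 1e9 / ((64 + 20) * 8) / 1e6 }'
}
line_rate_mpps 10   # 14.88
line_rate_mpps 40   # 59.52
```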
Comment 16 Pei Zhang 2018-03-01 22:54:00 EST
Update:

From the Virt QE side, all DPDK testing has finished with PASS results.

Versions:
kernel-3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
tuned-2.9.0-1.el7.noarch
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64
microcode-20180108.tgz

Network cards: 10-Gigabit X540-AT2

Intel Meltdown & Spectre fixes were applied to both the host and guest.
microcode: revision 0x3b, date = 2017-11-17

Values of related options:
# cat /sys/kernel/debug/x86/pti_enabled
1
# cat /sys/kernel/debug/x86/ibpb_enabled
1
# cat /sys/kernel/debug/x86/ibrs_enabled
0
# cat /sys/kernel/debug/x86/retp_enabled
1
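The four checks above can be folded into a small helper (a sketch; these debugfs knobs are specific to kernels carrying the early-2018 mitigation backports, and /sys/kernel/debug is normally readable only by root, so the helper reports missing knobs instead of failing):

```shell
# Summarize the Meltdown/Spectre mitigation toggles under a given
# debugfs directory (default: /sys/kernel/debug/x86). Knobs that are
# absent or unreadable are reported as "unavailable".
check_mitigations() {
    base="${1:-/sys/kernel/debug/x86}"
    for knob in pti_enabled ibpb_enabled ibrs_enabled retp_enabled; do
        f="$base/$knob"
        if [ -r "$f" ]; then
            printf '%s=%s\n' "$knob" "$(cat "$f")"
        else
            printf '%s=unavailable\n' "$knob"
        fi
    done
}
check_mitigations
```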

Testing Scenarios:
(1) PVP performance testing -- PASS
(Note: dpdk's testpmd acts as the role of OpenvSwitch in host)

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 9.49Mpps
1_Queue/0.002%_Loss/64Byte_packet throughput: 17.06Mpps

(2) PVP live migration testing -- PASS

All 10 ping-pong migrations worked well, as expected:
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      119     13254        15    9352010.0
 1       1Mpps      123     13453        15    9546293.0
 2       1Mpps      131     12844        15    7015119.0
 3       1Mpps      119     12575        14    4898332.0
 4       1Mpps      124     13021        15    4759253.0
 5       1Mpps      125     13461        16    8348222.0
 6       1Mpps      122     12638        14    6433116.0
 7       1Mpps      121     12581        14    5951345.0
 8       1Mpps      128     13078        15    5130945.0
 9       1Mpps      119     13181        13    6856561.0


(3) Guest with device assignment -- PASS

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 20.64Mpps

(4) Guest with ovs+dpdk+vhost-user -- PASS

Other versions:
openvswitch-2.9.0-3.el7fdp.x86_64

The throughput results look good, as expected:
2_Queues/0_Loss/64Byte_packet throughput: 21.30Mpps
Comment 21 errata-xmlrpc 2018-04-10 19:59:23 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1065
