Bug 1522700 - [Extras] Update DPDK to 17.11
Summary: [Extras] Update DPDK to 17.11
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: dpdk
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Timothy Redaelli
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Duplicates: 1455140 1497384 1517210
Depends On: 1335825 1518884
Blocks: 1413149 1449793 1461182 1490967 1500889
 
Reported: 2017-12-06 09:45 UTC by Timothy Redaelli
Modified: 2018-04-10 23:59 UTC
CC List: 12 users

Fixed In Version: dpdk-17.11-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 23:59:23 UTC
Target Upstream Version:
Embargoed:


Links
Red Hat Product Errata RHEA-2018:1065 (Last Updated: 2018-04-10 23:59:47 UTC)

Description Timothy Redaelli 2017-12-06 09:45:13 UTC

Comment 7 Jean-Tsung Hsiao 2017-12-21 18:15:30 UTC
The package has been tested and passed with the following tests:

# PvP 64-byte zero-loss testing between testpmd and Xena as the traffic generator --- vhost-user is installed with DPDK 17.11-4.

# P2P 64-byte zero-loss testing between testpmd/host and Trex --- the host is loaded with DPDK 17.11-4.

# Netperf testing between namespaces on the test driver, using testpmd/SUT as the loopback.
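
(For reference, the guest-side testpmd loopback in this kind of PvP run is typically launched roughly as below; the core list, memory size, and queue counts are illustrative assumptions, not the exact values used in these tests.)

# testpmd -l 0,1,2 -n 4 --socket-mem 1024 -- -i --nb-cores=2 --rxq=1 --txq=1 --forward-mode=io
testpmd> start    (forwards frames between the two virtio ports until stopped)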

Comment 8 Marcelo Ricardo Leitner 2018-02-16 15:58:54 UTC
*** Bug 1455140 has been marked as a duplicate of this bug. ***

Comment 9 Marcelo Ricardo Leitner 2018-02-16 16:04:39 UTC
*** Bug 1517210 has been marked as a duplicate of this bug. ***

Comment 10 Flavio Leitner 2018-02-16 16:07:08 UTC
*** Bug 1497384 has been marked as a duplicate of this bug. ***

Comment 11 Christian Trautman 2018-03-01 01:38:47 UTC
Tested http://download-node-02.eng.bos.redhat.com/brewroot/packages/dpdk/17.11/7.el7/x86_64/dpdk-17.11-7.el7.x86_64.rpm

Ran SR-IOV testing and standard PvP testing with OVS-DPDK 2.9 from FDP.

Ran testing against 16.11-2 versus 17.11 and found no degradation with Intel X520 cards for 64-byte or 1500-byte packet performance.

Comment 13 Jean-Tsung Hsiao 2018-03-01 12:06:05 UTC
The package has been tested and passed with the following tests:

1. PvP 64 bytes zero loss testing between testpmd and Xena over ixgbe using 2Q/4PMD --- 9.86 Mpps.

2. PvP 64 bytes zero loss testing between testpmd and Trex over 40Gb i40e(XL710) using 2Q/4PMD --- 5.90 Mpps.

Related packages:

Host is using OVS 2.9.0-1 FDP, and guest is using DPDK 17.11-7.

Both host and guest are running under kernel-851.
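
(For context, the 2Q/4PMD setup on the OVS-DPDK side is usually configured along the lines below; the PMD CPU mask and port name are illustrative assumptions, not values recorded in this bug.)

# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c    (cores 2-5, i.e. 4 PMD threads)
# ovs-vsctl set Interface dpdk0 options:n_rxq=2                  (2 RX queues on the physical DPDK port)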

Comment 15 Jean-Tsung Hsiao 2018-03-02 03:03:58 UTC
(In reply to Jean-Tsung Hsiao from comment #13)
> The package has been tested and passed with the following tests:
> 
> 1. PvP 64 bytes zero loss testing between testpmd and Xena over ixgbe using
> 2Q/4PMD --- 9.86 Mpps.
For 1Q/2PMD the Mpps rate is 5.02

> 
> 2. PvP 64 bytes zero loss testing between testpmd and Trex over 40Gb
> i40e(XL710) using 2Q/4PMD --- 5.90 Mpps.
> 

Also, ran P2P 64-byte zero-loss testing between testpmd/host and Trex over 40Gb XL710 --- 36.10 Mpps.


> Related packages:
> 
> Host is using OVS 2.9.0-1 fdP, and guest is using DPDK 17.11-7.
> 
> Both host and guest are running under kernel-851.

Comment 16 Pei Zhang 2018-03-02 03:54:00 UTC
Update:

From the Virt QE side, all testing with DPDK has finished and passed.

Versions:
kernel-3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
tuned-2.9.0-1.el7.noarch
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64
microcode-20180108.tgz

Network cards: 10-Gigabit X540-AT2

Intel Meltdown & Spectre fixes were applied to both the host and the guest.
microcode: revision 0x3b, date = 2017-11-17

Values of related options:
# cat /sys/kernel/debug/x86/pti_enabled
1
# cat /sys/kernel/debug/x86/ibpb_enabled
1
# cat /sys/kernel/debug/x86/ibrs_enabled
0
# cat /sys/kernel/debug/x86/retp_enabled
1

Testing Scenarios:
(1) PVP performance testing -- PASS
(Note: DPDK's testpmd plays the role of Open vSwitch on the host)

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 9.49Mpps
1_Queue/0.002%_Loss/64Byte_packet throughput: 17.06Mpps

(2) PVP live migration testing -- PASS

All 10 ping-pong migrations worked well, as expected:
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      119     13254        15    9352010.0
 1       1Mpps      123     13453        15    9546293.0
 2       1Mpps      131     12844        15    7015119.0
 3       1Mpps      119     12575        14    4898332.0
 4       1Mpps      124     13021        15    4759253.0
 5       1Mpps      125     13461        16    8348222.0
 6       1Mpps      122     12638        14    6433116.0
 7       1Mpps      121     12581        14    5951345.0
 8       1Mpps      128     13078        15    5130945.0
 9       1Mpps      119     13181        13    6856561.0


(3) Guest with device assignment -- PASS

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 20.64Mpps
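
(For reference, in the device-assignment scenario the assigned NIC inside the guest is normally bound to vfio-pci before testpmd is started; depending on whether a virtual IOMMU is present, vfio-pci may need its no-IOMMU mode enabled. The PCI address below is a placeholder, not the one from this setup.)

# modprobe vfio-pci
# dpdk-devbind.py --status                      (show NICs and their current drivers)
# dpdk-devbind.py --bind=vfio-pci 0000:00:05.0  (placeholder PCI address of the assigned NIC)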

(4) Guest with ovs+dpdk+vhost-user -- PASS

Other versions:
openvswitch-2.9.0-3.el7fdp.x86_64

The throughput results look good, as expected:
2_Queues/0_Loss/64Byte_packet throughput: 21.30Mpps
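
(For context, the ovs+dpdk+vhost-user datapath on OVS 2.9 is typically wired up roughly as below; the bridge name, port names, and PCI address are illustrative assumptions, and the newer dpdkvhostuserclient port type is not shown. QEMU then typically attaches the guest's virtio-net device to the vhost-user socket OVS creates under /var/run/openvswitch/.)

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
# ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser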

Comment 21 errata-xmlrpc 2018-04-10 23:59:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1065

