RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September on pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Migrated Bugzilla bugs are moved to status "CLOSED" with resolution "MIGRATED" and "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue appears under "Links", has a small "two-footprint" icon next to it, and leads to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link also appears in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 1791410 - Unable to run a dpdk workload without privileged=true [rhel-8.1.0.z]
Summary: Unable to run a dpdk workload without privileged=true [rhel-8.1.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: dpdk
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: David Marchand
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On: 1785933 1789352
Blocks: 1771572
 
Reported: 2020-01-15 18:56 UTC by Oneata Mircea Teodor
Modified: 2020-11-04 12:30 UTC
CC List: 15 users

Fixed In Version: dpdk-18.11.2-4.el8_1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1789352
Environment:
Last Closed: 2020-11-04 12:29:52 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:4898 0 None None None 2020-11-04 12:29:59 UTC

Comment 11 Jean-Tsung Hsiao 2020-08-01 20:00:06 UTC
Hi David,
Looks like it passed even though it encountered "failed to obtain UAR mmap offset" issues.
Please take a look at the following command log. I will set the status to VERIFIED if there are no issues.
Thanks!
Jean


[root@netqe8 ~]# sudo -u testuser testpmd -w 0000:03:00.0 -v -- -i
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: RTE Version: 'DPDK 19.11.0'
net_mlx5: cannot load glue library: /lib64/libmlx5.so.1: version `MLX5_1.10' not found (required by /usr/lib64/dpdk-pmds-glue/librte_pmd_mlx5_glue.so.19.08.0)
net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
EAL: Multi-process socket /tmp/dpdk/rte/mp_socket
EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1007 net_mlx4
net_mlx4: Verbs external allocator is not supported
net_mlx4: Verbs external allocator is not supported
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=267456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
net_mlx4: 0x7fd6f390f100: failed to obtain UAR mmap offset
Port 0: E4:1D:2D:79:B3:11
Configuring Port 1 (socket 0)
net_mlx4: 0x7fd6f39131c0: failed to obtain UAR mmap offset
Port 1: E4:1D:2D:79:B3:12
Checking link statuses...
Done
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Shutting down port 1...
Closing ports...
Done

Bye...
[root@netqe8 ~]# uname -r
4.18.0-147.el8.x86_64
[root@netqe8 ~]# rpm -q dpdk
dpdk-19.11-1.el8.x86_64
[root@netqe8 ~]#

Comment 12 David Marchand 2020-08-06 12:39:20 UTC
Hello Jean, 

There is an issue.
The version to test is dpdk-18.11.2-4.el8_1, not a 19.11 dpdk.
Can you recheck, please?
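The re-test below starts by confirming the installed package. As a quick guard against re-verifying the wrong build, a check along these lines could be run first (a sketch only; check_nvr and the variable names are hypothetical, not Red Hat tooling, and it assumes the standard name-version-release.arch RPM naming):

```shell
# check_nvr: compare an installed RPM string against the expected NVR,
# ignoring the trailing ".arch" suffix (e.g. ".x86_64").
check_nvr() {
    local installed="$1" expected="$2"
    # ${installed%.*} strips the shortest trailing ".something",
    # i.e. the architecture suffix, leaving name-version-release.
    [ "${installed%.*}" = "$expected" ]
}

expected="dpdk-18.11.2-4.el8_1"
installed=$(rpm -q dpdk 2>/dev/null || echo "")   # e.g. dpdk-18.11.2-4.el8_1.x86_64

if check_nvr "$installed" "$expected"; then
    echo "testing the fixed build: $installed"
else
    echo "wrong build installed: '$installed' (expected $expected)" >&2
fi
```

With the 19.11 package from comment 11 installed, the check would flag the mismatch before any testpmd run.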

Comment 13 Jean-Tsung Hsiao 2020-08-06 17:38:51 UTC
Hi David,
I have re-tested using dpdk-18.11.2-4.el8_1, and it passed.
Please check the log below.
Thanks!
Jean

[root@netqe8 ~]# sudo -u testuser testpmd -w 0000:03:00.0 -v -- -i
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: RTE Version: 'DPDK 18.11.2'
EAL: Multi-process socket /tmp/dpdk/rte/mp_socket
EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1007 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
PMD: net_mlx4: 2 port(s) detected
PMD: net_mlx4: port 1 MAC address is e4:1d:2d:79:b3:11
PMD: net_mlx4: port 2 MAC address is e4:1d:2d:79:b3:12
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=267456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: E4:1D:2D:79:B3:11
Configuring Port 1 (socket 0)
Port 1: E4:1D:2D:79:B3:12
Checking link statuses...
Done
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Shutting down port 1...
Closing ports...
Done

Bye...
[root@netqe8 ~]# rpm -q dpdk
dpdk-18.11.2-4.el8_1.x86_64
[root@netqe8 ~]# uname -r
4.18.0-147.el8.x86_64
[root@netqe8 ~]#

Comment 14 David Marchand 2020-09-03 07:28:12 UTC
Back from PTO, lgtm, thanks Jean.

Comment 22 errata-xmlrpc 2020-11-04 12:29:52 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (dpdk bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4898

