Bug 2021776 - [vDPA]Boot testpmd with vdpa hit error: vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)
Summary: [vDPA]Boot testpmd with vdpa hit error: vhost_vdpa_dma_map(): Failed to send ...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: dpdk
Version: 9.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Open vSwitch development team
QA Contact: Yanhui Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-10 07:56 UTC by Pei Zhang
Modified: 2023-03-31 05:33 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-27 08:24:26 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Links:
Red Hat Issue Tracker RHELPLAN-102344 (last updated 2021-11-10 15:32:34 UTC)

Description Pei Zhang 2021-11-10 07:56:57 UTC
Description of problem:
Booting DPDK's testpmd with a vhost-vdpa device on the host, testpmd prints the error "vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)".

Version-Release number of selected component (if applicable):
5.14.0-11.el9.x86_64
qemu-kvm-6.1.0-6.el9.x86_64
dpdk-20.11-3.el9.x86_64
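
For reference, these versions can be reconfirmed on the host with standard queries (generic commands, not part of the original report):

# uname -r
# rpm -q qemu-kvm dpdk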

How reproducible:
100%

Steps to Reproduce:
1. Create 2 vhost-vdpa devices (see also the verification commands under "Additional info" below)

# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.1/sriov_numvfs

# modprobe vhost_vdpa
# modprobe mlx5_vdpa

# ulimit -l unlimited

# echo 1 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 1 > /sys/bus/pci/devices/0000\:3b\:00.1/sriov_numvfs

# echo 0000:3b:00.2 >/sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:3b:00.0 mode switchdev

# echo 0000:3b:01.2 >/sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:3b:00.1 mode switchdev

# echo 0000:3b:00.2 >/sys/bus/pci/drivers/mlx5_core/bind
# echo 0000:3b:01.2 >/sys/bus/pci/drivers/mlx5_core/bind

# vdpa dev add name vdpa0 mgmtdev pci/0000:3b:00.2
# vdpa dev add name vdpa1 mgmtdev pci/0000:3b:01.2

# vdpa dev show
vdpa0: type network mgmtdev pci/0000:3b:00.2 vendor_id 5555 max_vqs 16 max_vq_size 256
vdpa1: type network mgmtdev pci/0000:3b:01.2 vendor_id 5555 max_vqs 16 max_vq_size 256


2. Boot DPDK's testpmd on the host. It prints the error "vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)".

# dpdk-testpmd \
    -l 2,4,6,8,10 \
    -n 4 \
    --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' \
    --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' \
    -b 0000:3b:00.0 -b 0000:3b:00.1 -b 0000:3b:00.2 -b 0000:3b:00.3 -b 0000:3b:01.2 -b 0000:3b:01.3 \
    -- \
    --nb-cores=4 \
    -i \
    --disable-rss \
    --rxd=1024 --txd=1024 \
    --rxq=1 --txq=1


EAL: Detected 64 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)
vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 2A:BC:10:C6:BF:45
Configuring Port 1 (socket 0)
Port 1: 86:04:0E:A9:30:33
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> 

The host dmesg also shows an error: "mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)".

# dmesg
...
[12372.015465] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.024608] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.033691] mlx5_core 0000:3b:00.2: mlx5_vdpa_handle_set_map:524:(pid 3246): memory map update
[12372.042603] mlx5_core 0000:3b:00.2: mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)
[12372.056064] mlx5_core 0000:3b:00.2: setup_driver:1723:(pid 3246) warning: create_rqt
[12372.063802] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1803:(pid 3246) warning: failed to setup driver
[12372.073394] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.082483] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.091547] mlx5_core 0000:3b:01.2: mlx5_vdpa_handle_set_map:524:(pid 3246): memory map update
[12372.100439] mlx5_core 0000:3b:01.2: mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)
[12372.113899] mlx5_core 0000:3b:01.2: setup_driver:1723:(pid 3246) warning: create_rqt
[12372.121639] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1803:(pid 3246) warning: failed to setup driver
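
When scanning the full dmesg, the relevant driver messages can be narrowed down with a simple filter (generic command, not part of the original log):

# dmesg | grep -E 'mlx5_vdpa|mlx5_cmd_check'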


Actual results:
There are errors when booting testpmd with vhost_vdpa

Expected results:
There should be no errors when booting testpmd with vhost_vdpa

Additional info:
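
A few checks that may help when reproducing this, noted here as a sketch (standard commands; the sysfs paths assume the vdpa0/vdpa1 device names from step 1):

Confirm the vdpa devices are bound to the vhost_vdpa bus driver and that the character devices used by testpmd exist:

# vdpa dev show
# ls -l /sys/bus/vdpa/devices/vdpa0/driver /sys/bus/vdpa/devices/vdpa1/driver
# ls -l /dev/vhost-vdpa-*

Confirm the locked-memory limit from "ulimit -l unlimited" applies in the shell that launches testpmd, and check which hugepage sizes are configured (the EAL log above reports no 2048 kB hugepages):

# ulimit -l
# grep -i huge /proc/meminfo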

Comment 2 Yanhui Ma 2023-03-27 08:24:26 UTC
This error does not occur with the following dpdk version, so I am closing this bug as NOTABUG now.

If I am wrong, please correct me.

dpdk-21.11.2-1.el9_1.x86_64


[root@dell-per750-21 ~]# dpdk-testpmd \
    -l 2,4,6 \
    -n 4 \
    -d /usr/lib64/librte_net_virtio.so \
    --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' \
    --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' \
    -b 0000:b1:00.0 -b 0000:b1:00.1 -b 0000:b1:00.2 -b 0000:b1:01.2 \
    -- \
    --nb-cores=2 \
    -i \
    --disable-rss \
    --rxd=512 --txd=512 \
    --rxq=1 --txq=1
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Registering with invalid input parameter
Port 0: 00:11:22:33:44:03
Configuring Port 1 (socket 0)
EAL: Registering with invalid input parameter
Port 1: 00:11:22:33:44:04
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> start
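
As a follow-up sanity check (not captured in the log above), standard testpmd commands can confirm that the ports are up and forwarding:

testpmd> show port stats all
testpmd> show port info 0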

