This error does not occur with the following dpdk version, so I am closing this as NOTABUG now.
If I am wrong, please correct me.
dpdk-21.11.2-1.el9_1.x86_64
[root@dell-per750-21 ~]# dpdk-testpmd -l 2,4,6 -n 4 -d /usr/lib64/librte_net_virtio.so --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' -b 0000:b1:00.0 -b 0000:b1:00.1 -b 0000:b1:00.2 -b 0000:b1:01.2 -- --nb-cores=2 -i --disable-rss --rxd=512 --txd=512 --rxq=1 --txq=1
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Registering with invalid input parameter
Port 0: 00:11:22:33:44:03
Configuring Port 1 (socket 0)
EAL: Registering with invalid input parameter
Port 1: 00:11:22:33:44:04
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> start
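The "no error" claim above can be spot-checked mechanically rather than by eye: capture the testpmd output and grep it for the bug's signature line. A minimal sketch; `testpmd.log` is a hypothetical capture file, and the sample content stands in for a real run so the check is self-contained:

```shell
#!/bin/sh
# Sketch: scan a captured testpmd log for the failure signature from this
# report. On a real host you would capture the actual testpmd output to
# testpmd.log; here a few lines from the dpdk-21.11.2 run stand in.
cat > testpmd.log <<'EOF'
EAL: Selected IOVA mode 'VA'
EAL: Registering with invalid input parameter
Port 0: 00:11:22:33:44:03
EOF

# Absence of this line is what "this error does not occur" means.
if grep -q 'Failed to send IOTLB update' testpmd.log; then
    echo "IOTLB error still present"
else
    echo "IOTLB error not present"
fi
```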
Description of problem:
Boot dpdk's testpmd with vdpa in the host; testpmd shows the error "vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)".

Version-Release number of selected component (if applicable):
5.14.0-11.el9.x86_64
qemu-kvm-6.1.0-6.el9.x86_64
dpdk-20.11-3.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create 2 vhost-vdpa devices
# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.1/sriov_numvfs
# modprobe vhost_vdpa
# modprobe mlx5_vdpa
# ulimit -l unlimited
# echo 1 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 1 > /sys/bus/pci/devices/0000\:3b\:00.1/sriov_numvfs
# echo 0000:3b:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:3b:00.0 mode switchdev
# echo 0000:3b:01.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:3b:00.1 mode switchdev
# echo 0000:3b:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
# echo 0000:3b:01.2 > /sys/bus/pci/drivers/mlx5_core/bind
# vdpa dev add name vdpa0 mgmtdev pci/0000:3b:00.2
# vdpa dev add name vdpa1 mgmtdev pci/0000:3b:01.2
# vdpa dev show
vdpa0: type network mgmtdev pci/0000:3b:00.2 vendor_id 5555 max_vqs 16 max_vq_size 256
vdpa1: type network mgmtdev pci/0000:3b:01.2 vendor_id 5555 max_vqs 16 max_vq_size 256

2. Boot dpdk's testpmd in the host. It shows the error "vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)".
# dpdk-testpmd \
  -l 2,4,6,8,10 \
  -n 4 \
  --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' \
  --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' \
  -b 0000:3b:00.0 -b 0000:3b:00.1 -b 0000:3b:00.2 -b 0000:3b:00.3 -b 0000:3b:01.2 -b 0000:3b:01.3 \
  -- \
  --nb-cores=4 \
  -i \
  --disable-rss \
  --rxd=1024 --txd=1024 \
  --rxq=1 --txq=1
EAL: Detected 64 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)
vhost_vdpa_dma_map(): Failed to send IOTLB update (Bad address)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 2A:BC:10:C6:BF:45
Configuring Port 1 (socket 0)
Port 1: 86:04:0E:A9:30:33
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd>

Also check the host dmesg; it shows error info too: "mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)".
# dmesg
...
[12372.015465] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.024608] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.033691] mlx5_core 0000:3b:00.2: mlx5_vdpa_handle_set_map:524:(pid 3246): memory map update
[12372.042603] mlx5_core 0000:3b:00.2: mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)
[12372.056064] mlx5_core 0000:3b:00.2: setup_driver:1723:(pid 3246) warning: create_rqt
[12372.063802] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1803:(pid 3246) warning: failed to setup driver
[12372.073394] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.082483] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1785:(pid 3246): performing device reset
[12372.091547] mlx5_core 0000:3b:01.2: mlx5_vdpa_handle_set_map:524:(pid 3246): memory map update
[12372.100439] mlx5_core 0000:3b:01.2: mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)
[12372.113899] mlx5_core 0000:3b:01.2: setup_driver:1723:(pid 3246) warning: create_rqt
[12372.121639] mlx5_core 0000:3b:01.2: mlx5_vdpa_set_status:1803:(pid 3246) warning: failed to setup driver

Actual results:
There are errors when booting testpmd with vhost_vdpa.

Expected results:
There should be no errors when booting testpmd with vhost_vdpa.

Additional info:
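The dmesg side of the failure can be triaged the same way: filter for the CREATE_RQT failure and count how many mlx5 functions hit it. A minimal sketch; the sample file reuses two dmesg lines quoted above so the filter is self-contained, while on a live host you would pipe `dmesg` in instead:

```shell
#!/bin/sh
# Sketch: triage host dmesg for the mlx5 vdpa failure chain from this report.
# dmesg.sample stands in for real `dmesg` output.
cat > dmesg.sample <<'EOF'
[12372.042603] mlx5_core 0000:3b:00.2: mlx5_cmd_check:777:(pid 3246): CREATE_RQT(0x916) op_mod(0x0) failed, status bad resource(0x5), syndrome (0x547a1d)
[12372.063802] mlx5_core 0000:3b:00.2: mlx5_vdpa_set_status:1803:(pid 3246) warning: failed to setup driver
EOF

# Count CREATE_RQT failures per PCI function (field 3 is the device address).
grep 'CREATE_RQT.*failed' dmesg.sample | awk '{print $2, $3}' | sort | uniq -c
```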