Bug 2232049

Summary: enic card (VIC 1457): testpmd-as-switch case does not work when testpmd is started with --iova-mode=va inside the guest
Product: Red Hat Enterprise Linux Fast Datapath
Component: DPDK
DPDK sub component: other
Version: FDP 23.F
Status: NEW
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Reporter: liting <tli>
Assignee: Open vSwitch development team <ovs-team>
QA Contact: liting <tli>
CC: ctrautma, dmarchan, jhsiao, ktraynor
Flags: dmarchan: needinfo? (tli)

Description liting 2023-08-15 02:29:17 UTC
Description of problem:
enic card (VIC 1457): the testpmd-as-switch case does not work when testpmd is started with --iova-mode=va inside the guest.

Version-Release number of selected component (if applicable):
[root@netqe37 /]# uname -r
5.14.0-284.26.1.el9_2.x86_64
[root@netqe37 /]# rpm -qa|grep dpdk
dpdk-22.11-1.el9.x86_64
dpdk-tools-22.11-1.el9.x86_64

[root@netqe37 /]# ethtool -i eno6
driver: enic
version: 5.14.0-284.26.1.el9_2.x86_64
firmware-version: 5.1(3f)
expansion-rom-version: 
bus-info: 0000:1d:00.1
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no


How reproducible:


Steps to Reproduce:
1. bind all enic ports to vfio-pci
[root@netqe37 /]# driverctl -v set-override 0000:1d:00.0 vfio-pci
[root@netqe37 /]# driverctl -v set-override 0000:1d:00.1 vfio-pci
[root@netqe37 /]# driverctl -v set-override 0000:1d:00.2 vfio-pci
[root@netqe37 /]# driverctl -v set-override 0000:1d:00.3 vfio-pci
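Before starting testpmd, the overrides can be double-checked. A minimal sketch, run here against a captured `driverctl list-overrides` listing (the expected four-line output for this host); on the real machine, pipe the command output in directly:

```shell
# Count how many of the four enic PCI functions carry a vfio-pci override.
# The listing below is sample data assumed from this report; on the host use:
#   overrides="$(driverctl list-overrides)"
overrides="0000:1d:00.0 vfio-pci
0000:1d:00.1 vfio-pci
0000:1d:00.2 vfio-pci
0000:1d:00.3 vfio-pci"
bound=$(printf '%s\n' "$overrides" | grep -c 'vfio-pci$')
echo "bound=$bound"   # all four functions should be bound
```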

2. start testpmd on the host
[root@netqe37 /]# dpdk-testpmd -l 59,19,58 -n 4 --socket-mem 1024,1024 \
    --vdev net_vhost0,iface=/tmp/vhost0,client=1,queues=1,iommu-support=1 \
    --vdev net_vhost1,iface=/tmp/vhost1,client=1,queues=1,iommu-support=1 -- -i
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.0 (socket 0)
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: Flow api filter mode: FLOWMAN Actions: steer tag drop count 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3C wq/rq 4096/4096 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 4 rq 8 cq 12 intr 2
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.1 (socket 0)
PMD: rte_enic_pmd: Advanced Filters not available
PMD: rte_enic_pmd: Flow api filter mode: USNIC Actions: steer 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3D wq/rq 256/512 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 1 rq 4 cq 5 intr 8
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.2 (socket 0)
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: Flow api filter mode: FLOWMAN Actions: steer tag drop count 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3E wq/rq 4096/4096 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 4 rq 8 cq 12 intr 2
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.3 (socket 0)
PMD: rte_enic_pmd: Advanced Filters not available
PMD: rte_enic_pmd: Flow api filter mode: USNIC Actions: steer 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3F wq/rq 256/512 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 1 rq 4 cq 5 intr 8
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:512
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 0: link state change event
Port 0: 8C:94:1F:8B:BE:3C
Configuring Port 1 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:256
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 1: link state change event
Port 1: 8C:94:1F:8B:BE:3D
Configuring Port 2 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:512
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 2: link state change event
Port 2: 8C:94:1F:8B:BE:3E
Configuring Port 3 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:256
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 3: link state change event
Port 3: 8C:94:1F:8B:BE:3F
Configuring Port 4 (socket 0)
VHOST_CONFIG: (/tmp/vhost0) vhost-user client: socket created, fd: 44
VHOST_CONFIG: (/tmp/vhost0) failed to connect: No such file or directory
VHOST_CONFIG: (/tmp/vhost0) reconnecting...
Port 4: 56:48:4F:53:54:04
Configuring Port 5 (socket 0)
VHOST_CONFIG: (/tmp/vhost1) vhost-user client: socket created, fd: 47
VHOST_CONFIG: (/tmp/vhost1) failed to connect: No such file or directory
VHOST_CONFIG: (/tmp/vhost1) reconnecting...
Port 5: 56:48:4F:53:54:05
Checking link statuses...
Done
testpmd> 


3. define and start the guest with the following g1.xml
[root@netqe37 perf]# cat g1.xml 
<domain type='kvm'>
  <name>g1</name>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
    <locked/>
    <access mode='shared'/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='41'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <emulatorpin cpuset='0,40'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pmu state='off'/>
    <vmport state='off'/>
    <ioapic driver='qemu'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <feature policy='require' name='tsc-deadline'/>
    <numa>
      <cell id='0' cpus='0-2' memory='8388608' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/g1.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='none'>
      <alias name='usb'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x8'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x9'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xa'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:01:02:03'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:01'/>
      <source type='unix' path='/tmp/vhost0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' iommu='off' ats='off'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x3' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:02'/>
      <source type='unix' path='/tmp/vhost1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' iommu='off' ats='off'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x4' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
</domain>

[root@netqe37 perf]# virsh define g1.xml 
Domain 'g1' defined from g1.xml

[root@netqe37 perf]# virsh start g1
Domain 'g1' started

[root@netqe37 ~]# chmod 777 /tmp/vhost0
[root@netqe37 ~]# chmod 777 /tmp/vhost1
[root@netqe37 ~]# chmod 777 /tmp/
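With mode='server' in the guest XML, QEMU creates the /tmp/vhost0 and /tmp/vhost1 sockets at guest start and the host testpmd (client=1) connects to them. A quick permission sanity check, assuming the socket paths from step 2:

```shell
# Print mode/owner of each vhost-user socket, or a note if the guest has not
# created it yet (the "failed to connect ... reconnecting" lines in step 2
# are expected until QEMU brings the sockets up).
for s in /tmp/vhost0 /tmp/vhost1; do
  stat -c '%a %U %n' "$s" 2>/dev/null || echo "$s not created yet"
done
```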


4. bind the two virtio ports to vfio-pci inside the guest
[root@localhost ~]# uname -r
5.14.0-231.el9.x86_64

[root@localhost ~]# cat /proc/cmdline 
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-231.el9.x86_64 root=UUID=8d36fe4e-526a-4b0c-81a3-60bbd0159e15 ro rhgb quiet crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=8f80c4dd-25c0-43e8-9479-a44c2e7ef2aa console=ttyS0,115200 isolcpus=1-2 intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=4
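The guest cmdline reserves four 1 GiB hugepages (default_hugepagesz=1G hugepagesz=1G hugepages=4), which must be in place before testpmd's --socket-mem 1024 can be satisfied. A small sketch that parses a /proc/meminfo-style fragment to confirm the reservation; the fragment values are assumed from hugepages=4, and on the guest you would read /proc/meminfo itself:

```shell
# Extract HugePages_Total and Hugepagesize from a meminfo-style fragment.
meminfo="HugePages_Total:       4
HugePages_Free:        4
Hugepagesize:    1048576 kB"
total=$(printf '%s\n' "$meminfo" | awk '/HugePages_Total/ {print $2}')
size_kb=$(printf '%s\n' "$meminfo" | awk '/Hugepagesize/ {print $2}')
echo "reserved $((total * size_kb / 1024)) MiB in ${total} x ${size_kb} kB pages"
```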

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:01:02:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.123/24 brd 192.168.122.255 scope global dynamic noprefixroute enp2s0
       valid_lft 3538sec preferred_lft 3538sec
    inet6 fe80::17fa:3ff5:1214:adf8/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@localhost ~]# driverctl -v list-overrides
0000:03:00.0 vfio-pci (Virtio network device)
0000:04:00.0 vfio-pci (Virtio network device)


5. start testpmd inside the guest with --iova-mode=va
[root@localhost ~]# dpdk-testpmd --iova-mode=va -l 0-2 -n 1 --socket-mem 1024 -- -i  --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728  --mbcache=512   --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket -1)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 25

6. start testpmd inside the guest with --iova-mode=pa
[root@localhost ~]# dpdk-testpmd --iova-mode=pa -l 0-2 -n 1 --socket-mem 1024 -- -i  --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728  --mbcache=512   --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket -1)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 25
Port 0: 00:DE:AD:00:00:01
Configuring Port 1 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 29
Port 1: 00:DE:AD:00:00:02
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> 


Actual results:
For step 5, testpmd hangs during startup with --iova-mode=va (it gets no further than "Configuring Port 0").
For step 6, testpmd starts successfully with --iova-mode=pa.

Expected results:
testpmd should also start successfully inside the guest with --iova-mode=va, as it does with --iova-mode=pa.

Additional info:
After step 3, the host-side testpmd output is as follows:
[root@netqe37 /]# dpdk-testpmd -l 59,19,58 -n 4 --socket-mem 1024,1024 \
    --vdev net_vhost0,iface=/tmp/vhost0,client=1,queues=1,iommu-support=1 \
    --vdev net_vhost1,iface=/tmp/vhost1,client=1,queues=1,iommu-support=1 -- -i
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.0 (socket 0)
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: Flow api filter mode: FLOWMAN Actions: steer tag drop count 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3C wq/rq 4096/4096 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 4 rq 8 cq 12 intr 2
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.1 (socket 0)
PMD: rte_enic_pmd: Advanced Filters not available
PMD: rte_enic_pmd: Flow api filter mode: USNIC Actions: steer 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3D wq/rq 256/512 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 1 rq 4 cq 5 intr 8
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.2 (socket 0)
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: Flow api filter mode: FLOWMAN Actions: steer tag drop count 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3E wq/rq 4096/4096 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 4 rq 8 cq 12 intr 2
EAL: Probe PCI driver: net_enic (1137:43) device: 0000:1d:00.3 (socket 0)
PMD: rte_enic_pmd: Advanced Filters not available
PMD: rte_enic_pmd: Flow api filter mode: USNIC Actions: steer 
PMD: rte_enic_pmd: vNIC MAC addr 8C:94:1F:8B:BE:3F wq/rq 256/512 mtu 1500, max mtu:9158
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss +udp intr mode any type min timer 125 usec loopback tag 0x0000
PMD: rte_enic_pmd: vNIC resources avail: wq 1 rq 4 cq 5 intr 8
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:512
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 0: link state change event
Port 0: 8C:94:1F:8B:BE:3C
Configuring Port 1 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:256
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 1: link state change event
Port 1: 8C:94:1F:8B:BE:3D
Configuring Port 2 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:512
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 2: link state change event
Port 2: 8C:94:1F:8B:BE:3E
Configuring Port 3 (socket 0)
PMD: rte_enic_pmd: TX Queues - effective number of descs:256
PMD: rte_enic_pmd: Scatter rx mode disabled
PMD: rte_enic_pmd: Rq 0 Scatter rx mode not being used
PMD: rte_enic_pmd: Using 512 rx descriptors (sop 512, data 0)
PMD: rte_enic_pmd: vNIC resources used:  wq 1 rq 2 cq 2 intr 1

Port 3: link state change event
Port 3: 8C:94:1F:8B:BE:3F
Configuring Port 4 (socket 0)
VHOST_CONFIG: (/tmp/vhost0) vhost-user client: socket created, fd: 44
VHOST_CONFIG: (/tmp/vhost0) failed to connect: No such file or directory
VHOST_CONFIG: (/tmp/vhost0) reconnecting...
Port 4: 56:48:4F:53:54:04
Configuring Port 5 (socket 0)
VHOST_CONFIG: (/tmp/vhost1) vhost-user client: socket created, fd: 47
VHOST_CONFIG: (/tmp/vhost1) failed to connect: No such file or directory
VHOST_CONFIG: (/tmp/vhost1) reconnecting...
Port 5: 56:48:4F:53:54:05
Checking link statuses...
Done
testpmd> VHOST_CONFIG: (/tmp/vhost0) connected
VHOST_CONFIG: (/tmp/vhost0) new device, handle is 0
VHOST_CONFIG: (/tmp/vhost1) connected
VHOST_CONFIG: (/tmp/vhost1) new device, handle is 1
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: (/tmp/vhost0) negotiated Vhost-user protocol features: 0x10cbf
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_OWNER
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost0) vring call idx:0 file:49
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ERR
VHOST_CONFIG: (/tmp/vhost0) not implemented
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost0) vring call idx:1 file:50
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ERR
VHOST_CONFIG: (/tmp/vhost0) not implemented
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: (/tmp/vhost1) negotiated Vhost-user protocol features: 0x10cbf
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_OWNER
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost1) vring call idx:0 file:52
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ERR
VHOST_CONFIG: (/tmp/vhost1) not implemented
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost1) vring call idx:1 file:53
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ERR
VHOST_CONFIG: (/tmp/vhost1) not implemented
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost0) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: (/tmp/vhost0) negotiated Virtio features: 0x170206783
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_STATUS
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost0) new device status(0x00000008):
VHOST_CONFIG: (/tmp/vhost0) 	-RESET: 0
VHOST_CONFIG: (/tmp/vhost0) 	-ACKNOWLEDGE: 0
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER: 0
VHOST_CONFIG: (/tmp/vhost0) 	-FEATURES_OK: 1
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER_OK: 0
VHOST_CONFIG: (/tmp/vhost0) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost0) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: (/tmp/vhost0) guest memory region size: 0x80000000
VHOST_CONFIG: (/tmp/vhost0) 	 guest physical addr: 0x0
VHOST_CONFIG: (/tmp/vhost0) 	 guest virtual  addr: 0x7f29c0000000
VHOST_CONFIG: (/tmp/vhost0) 	 host  virtual  addr: 0x7f4e40000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap addr : 0x7f4e40000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap size : 0x80000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap align: 0x40000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap off  : 0x0
VHOST_CONFIG: (/tmp/vhost0) guest memory region size: 0x180000000
VHOST_CONFIG: (/tmp/vhost0) 	 guest physical addr: 0x100000000
VHOST_CONFIG: (/tmp/vhost0) 	 guest virtual  addr: 0x7f2a40000000
VHOST_CONFIG: (/tmp/vhost0) 	 host  virtual  addr: 0x7f4cc0000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap addr : 0x7f4c40000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap size : 0x200000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap align: 0x40000000
VHOST_CONFIG: (/tmp/vhost0) 	 mmap off  : 0x80000000
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: (/tmp/vhost0) vring kick idx:0 file:56

Port 4: queue state event
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost0) vring call idx:0 file:57

Port 4: queue state event

Port 4: queue state event
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: (/tmp/vhost0) vring kick idx:1 file:49

Port 4: queue state event
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost0) vring call idx:1 file:58

Port 4: queue state event

Port 4: queue state event
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_STATUS
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost0) new device status(0x0000000f):
VHOST_CONFIG: (/tmp/vhost0) 	-RESET: 0
VHOST_CONFIG: (/tmp/vhost0) 	-ACKNOWLEDGE: 1
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER: 1
VHOST_CONFIG: (/tmp/vhost0) 	-FEATURES_OK: 1
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER_OK: 1
VHOST_CONFIG: (/tmp/vhost0) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost0) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost0) virtio is now ready for processing.
Rx csum will be done in SW, may impact performance.
Port 4: link state change event
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: (/tmp/vhost1) set queue enable: 1 to qp idx: 1
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: (/tmp/vhost1) negotiated Virtio features: 0x170206783
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_STATUS
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost1) new device status(0x00000008):
VHOST_CONFIG: (/tmp/vhost1) 	-RESET: 0
VHOST_CONFIG: (/tmp/vhost1) 	-ACKNOWLEDGE: 0
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER: 0
VHOST_CONFIG: (/tmp/vhost1) 	-FEATURES_OK: 1
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER_OK: 0
VHOST_CONFIG: (/tmp/vhost1) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost1) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: (/tmp/vhost1) guest memory region size: 0x80000000
VHOST_CONFIG: (/tmp/vhost1) 	 guest physical addr: 0x0
VHOST_CONFIG: (/tmp/vhost1) 	 guest virtual  addr: 0x7f29c0000000
VHOST_CONFIG: (/tmp/vhost1) 	 host  virtual  addr: 0x7f4bc0000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap addr : 0x7f4bc0000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap size : 0x80000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap align: 0x40000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap off  : 0x0
VHOST_CONFIG: (/tmp/vhost1) guest memory region size: 0x180000000
VHOST_CONFIG: (/tmp/vhost1) 	 guest physical addr: 0x100000000
VHOST_CONFIG: (/tmp/vhost1) 	 guest virtual  addr: 0x7f2a40000000
VHOST_CONFIG: (/tmp/vhost1) 	 host  virtual  addr: 0x7f4a40000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap addr : 0x7f49c0000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap size : 0x200000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap align: 0x40000000
VHOST_CONFIG: (/tmp/vhost1) 	 mmap off  : 0x80000000
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: (/tmp/vhost1) vring kick idx:0 file:60

Port 5: queue state event
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost1) vring call idx:0 file:61

Port 5: queue state event

Port 5: queue state event
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: (/tmp/vhost1) vring kick idx:1 file:52

Port 5: queue state event
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: (/tmp/vhost1) vring call idx:1 file:62

Port 5: queue state event

Port 5: queue state event
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_STATUS
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost1) new device status(0x0000000f):
VHOST_CONFIG: (/tmp/vhost1) 	-RESET: 0
VHOST_CONFIG: (/tmp/vhost1) 	-ACKNOWLEDGE: 1
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER: 1
VHOST_CONFIG: (/tmp/vhost1) 	-FEATURES_OK: 1
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER_OK: 1
VHOST_CONFIG: (/tmp/vhost1) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost1) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost1) virtio is now ready for processing.
Rx csum will be done in SW, may impact performance.
Port 5: link state change event
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost0) new device status(0x00000000):
VHOST_CONFIG: (/tmp/vhost0) 	-RESET: 1
VHOST_CONFIG: (/tmp/vhost0) 	-ACKNOWLEDGE: 0
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER: 0
VHOST_CONFIG: (/tmp/vhost0) 	-FEATURES_OK: 0
VHOST_CONFIG: (/tmp/vhost0) 	-DRIVER_OK: 0
VHOST_CONFIG: (/tmp/vhost0) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost0) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_VRING_BASE

Port 4: link state change event
VHOST_CONFIG: (/tmp/vhost0) vring base idx:0 file:0
VHOST_CONFIG: (/tmp/vhost0) read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost0) vring base idx:1 file:0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_SET_STATUS
VHOST_CONFIG: (/tmp/vhost1) new device status(0x00000000):
VHOST_CONFIG: (/tmp/vhost1) 	-RESET: 1
VHOST_CONFIG: (/tmp/vhost1) 	-ACKNOWLEDGE: 0
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER: 0
VHOST_CONFIG: (/tmp/vhost1) 	-FEATURES_OK: 0
VHOST_CONFIG: (/tmp/vhost1) 	-DRIVER_OK: 0
VHOST_CONFIG: (/tmp/vhost1) 	-DEVICE_NEED_RESET: 0
VHOST_CONFIG: (/tmp/vhost1) 	-FAILED: 0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_VRING_BASE

Port 5: link state change event
VHOST_CONFIG: (/tmp/vhost1) vring base idx:0 file:0
VHOST_CONFIG: (/tmp/vhost1) read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: (/tmp/vhost1) vring base idx:1 file:0

Comment 1 liting 2023-08-15 02:58:33 UTC
[root@localhost ~]# dpdk-testpmd --iova-mode=va -l 0-2 -n 1 --socket-mem 1024 -- -i  --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728  --mbcache=512   --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket -1)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:03:00.0 cannot be used
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:04:00.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: No probed ethernet devices
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
Start automatic packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

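One diagnostic avenue suggested by the EAL output above ("Expecting 'PA' IOVA mode but current mode is 'VA'"): retry the same invocation with PA mode forced. This is only a sketch built from the failing command above, not a confirmed fix; actually verifying it needs the same enic/virtio hardware.

```shell
# Sketch: rerun the failing testpmd command with --iova-mode=pa instead
# of va, since the EAL log reports the probed devices expect PA mode.
# Hardware is required to actually run this; here we only assemble the
# command line.
iova_mode=pa
cmd="dpdk-testpmd --iova-mode=$iova_mode -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --auto-start"
echo "$cmd"
```
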
Comment 2 liting 2023-08-15 06:16:19 UTC
This issue also occurs on RHEL 8.6.
https://bugzilla.redhat.com/show_bug.cgi?id=2232049

Comment 3 David Marchand 2023-08-16 06:50:56 UTC
From the logs, the PCI bus drivers want to force PA mode, even though the user requested VA.

Was vfio configured to run in "unsafe" noiommu mode in the guest?
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y
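The check above, expanded into a minimal sketch for anyone reproducing this in a guest (the sysfs path is the standard vfio module parameter; it only exists once the vfio module is loaded):

```shell
# Check whether vfio allows the "unsafe" noiommu mode inside the guest.
param=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
if [ -r "$param" ]; then
    cat "$param"    # Y = noiommu allowed, N = disabled
else
    echo "vfio module not loaded"
fi

# If it is not enabled, it can be set when loading the module:
#   modprobe vfio enable_unsafe_noiommu_mode=1
# or at runtime:
#   echo Y > "$param"
```
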