Bug 2147534 - testpmd fails to start when running the OVS DPDK PVP case on RHEL 9.2
Summary: testpmd fails to start when running the OVS DPDK PVP case on RHEL 9.2
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch
Version: FDP 22.K
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Timothy Redaelli
QA Contact: qding
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-24 08:26 UTC by liting
Modified: 2023-07-13 07:25 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker FD-2500 (Status: None, Last Updated: 2022-11-24 08:29:52 UTC)

Description liting 2022-11-24 08:26:00 UTC
Description of problem:
RHEL 9.2: testpmd fails to start when running the OVS DPDK PVP case.

Version-Release number of selected component (if applicable):
5.14.0-197.el9.x86_64

How reproducible:


Steps to Reproduce:
1. Build the OVS DPDK PVP topology (ovs-vsctl show output below; a reproduction sketch follows the listing):
Bridge ovsbr0
        datapath_type: netdev
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:07:00.0", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost0"}
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:07:00.1", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost1"}
    ovs_version: "2.17.4"
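
The configuration above is ovs-vsctl show output; the exact setup commands are not recorded in this report. A minimal sketch of how such a topology is typically built (device addresses, queue sizes, and socket paths taken from the output above; DPDK initialization and host hugepage reservation are assumed to be done already):

ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:07:00.0 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:07:00.1 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost0
ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost1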
2. Use the following guest XML to start the guest (see the virsh sketch after the XML):
<domain type='kvm'>
  <name>g1</name>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
    <locked/>
    <access mode='shared'/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='38'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <emulatorpin cpuset='0,36'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pmu state='off'/>
    <vmport state='off'/>
    <ioapic driver='qemu'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <feature policy='require' name='tsc-deadline'/>
    <numa>
      <cell id='0' cpus='0-2' memory='8388608' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/g1.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='none'>
      <alias name='usb'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x8'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x9'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xa'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:01:02:03'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:01'/>
      <source type='unix' path='/tmp/vhostuser/vhost0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='1' rx_queue_size='1024' tx_queue_size='1024' iommu='off' ats='off'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x3' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:02'/>
      <source type='unix' path='/tmp/vhostuser/vhost1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='1' rx_queue_size='1024' tx_queue_size='1024' iommu='off' ats='off'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x4' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
</domain>
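
With the XML saved to a file, e.g. g1.xml (an assumed filename), the guest is defined and started with standard virsh commands:

virsh define g1.xml
virsh start g1
# The serial console gives access to the guest for the next step.
virsh console g1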
3. Start testpmd inside the guest (in-guest device setup is sketched after the command):
[root@localhost ~]# dpdk-testpmd -l 0,1,2,3,4 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192   --mbcache=512  --auto-start
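
The in-guest preparation is not shown in this report. Before dpdk-testpmd can probe the virtio devices, hugepages must be available in the guest and the vhost-user-backed virtio-net functions bound to vfio-pci; a sketch under those assumptions (the PCI addresses match the EAL probe log below, the hugepage count is an assumption):

# Reserve 1 GiB hugepages for --socket-mem 1024 (the guest reports no 2 MiB pages).
echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# Bind the two vhost-user-backed virtio devices to vfio-pci; the guest XML
# defines a vIOMMU, so regular vfio (IOMMU type 1) is usable.
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0 0000:04:00.0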

Actual results:
testpmd fails to start inside the guest:
[root@localhost ~]# dpdk-testpmd -l 0,1,2,3,4 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192   --mbcache=512  --auto-start
EAL: Detected CPU lcores: 5
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket 0)
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=212992, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 33



Signal 2 received, preparing to exit...

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
EAL: Releasing PCI mapped resource for 0000:03:00.0
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x1180000000
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x1180001000
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
PANIC in eal_intr_thread_main():
Error adding fd 33 epoll_ctl, Bad file descriptor
5: [/lib64/libc.so.6(+0x3f450) [0x7f754743f450]]
4: [/lib64/libc.so.6(+0x9f802) [0x7f754749f802]]
3: [/lib64/librte_eal.so.22(+0x34052) [0x7f7547e71052]]
2: [/lib64/librte_eal.so.22(__rte_panic+0xc5) [0x7f7547e4cbcc]]
1: [/lib64/librte_eal.so.22(rte_dump_stack+0x32) [0x7f7547e6d0e2]]
Aborted (core dumped)

[root@localhost ~]# uname -r
5.14.0-197.el9.x86_64
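
Since the run ends in an abort with a core dump, a backtrace can be collected for debugging; a sketch assuming systemd-coredump is active and DPDK debuginfo is installed:

# List recorded crashes and open the most recent dpdk-testpmd core in gdb.
coredumpctl list dpdk-testpmd
coredumpctl gdb dpdk-testpmd
# Inside gdb, "bt full" captures the failing eal_intr_thread_main stack.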


Expected results:
testpmd starts successfully and io forwarding begins without errors.

Additional info:

