Bug 1669355 - Testpmd command fails with max-pkt-len=2000 inside guest
Summary: Testpmd command fails with max-pkt-len=2000 inside guest
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: dpdk
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jens Freimann
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-25 02:18 UTC by liting
Modified: 2019-04-23 17:40 UTC
CC List: 11 users

Fixed In Version: 18.11-4.el7_6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1676646
Environment:
Last Closed: 2019-04-23 17:40:20 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links:
Red Hat Product Errata RHBA-2019:0864 (last updated 2019-04-23 17:40:20 UTC)

Description liting 2019-01-25 02:18:58 UTC
Description of problem:
The testpmd command fails with max-pkt-len=2000 inside the guest.

Version-Release number of selected component (if applicable):
dpdk-18.11-2.el8.x86_64
dpdk-tools-18.11-2.el8.x86_64
dpdk-18.11-2.el7_6.x86_64
dpdk-tools-18.11-2.el7_6.x86_64

How reproducible:


Steps to Reproduce:
1. Add two guest ports to the OVS bridge:
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=500000500000
ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-iommu-support=true
ovs-vsctl add-port ovsbr0 dpdkvhostuserclient0 -- set Interface dpdkvhostuserclient0 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient0 options:vhost-server-path=/tmp/dpdkvhostuserclient0
ovs-vsctl add-port ovsbr0 dpdkvhostuserclient1 -- set Interface dpdkvhostuserclient1 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient1 options:vhost-server-path=/tmp/dpdkvhostuserclient1

2. Start the guest.
The guest XML is as follows:
[root@netqe22 ~]# virsh dumpxml master
<domain type='kvm'>
  <name>master</name>
  <uuid>37425e76-af6a-44a6-aba0-73434afe34c0</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
    <access mode='shared'/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <emulatorpin cpuset='1'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.5.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pmu state='off'/>
    <vmport state='off'/>
    <ioapic driver='qemu'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <feature policy='require' name='tsc-deadline'/>
    <numa>
      <cell id='0' cpus='0-2' memory='8388608' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/root/rhel8.0-vsperf-1Q-viommu.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='none'/>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='vhostuser'>
      <mac address='52:54:00:11:8f:e8'/>
      <source type='unix' path='/tmp/dpdkvhostuserclient0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:11:8f:e9'/>
      <source type='unix' path='/tmp/dpdkvhostuserclient1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:bb:63:7b'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
</domain>

3. Start testpmd forwarding inside the guest.
Install the dpdk rpms inside the guest.
Configure hugepages inside the guest.
Bind the two ports to dpdk inside the guest:
modprobe vfio-pci
dpdk-devbind -b vfio-pci 03:00.0 04:00.0
Start testpmd inside the guest:
testpmd -l 0,1,2 -n 4 --socket-mem 1024,1024 --legacy-mem -- --burst=64 -i --rxd=512 --txd=512 --nb-cores=2 --txq=1 --rxq=1 --max-pkt-len=2000
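
For context: 2000 bytes exceeds the standard 1518-byte Ethernet frame, so testpmd requests the jumbo-frame Rx offload when it configures the port. A rough paraphrase of the DPDK 18.11 testpmd option handling follows; handle_max_pkt_len() is a hypothetical helper used only for illustration, not the literal upstream source.

/* Sketch of how DPDK 18.11 testpmd treats --max-pkt-len (illustrative only). */
#include <rte_ethdev.h>

static void
handle_max_pkt_len(uint32_t max_pkt_len, struct rte_eth_conf *port_conf)
{
        port_conf->rxmode.max_rx_pkt_len = max_pkt_len;
        /* Anything above a standard 1518-byte frame needs the jumbo-frame
         * Rx offload (DEV_RX_OFFLOAD_JUMBO_FRAME, 0x800 in DPDK 18.11). */
        if (max_pkt_len > ETHER_MAX_LEN)
                port_conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}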

Actual results:
Running the testpmd command with "--max-pkt-len=2000" fails inside the guest.
Testpmd runs successfully without "--max-pkt-len=2000" inside the guest.
Testpmd runs successfully with "--max-pkt-len=2000" on the host.

[root@localhost ~]# uname -a
Linux localhost.localdomain 4.18.0-56.el8.x86_64 #1 SMP Mon Dec 17 13:56:22 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[root@netqe22 ~]# virsh console master
Connected to domain master
Escape character is ^]

Red Hat Enterprise Linux 8.0 Beta (Ootpa)
Kernel 4.18.0-56.el8.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

localhost login: root
Password:
Last login: Thu Jan 24 03:22:37 on ttyS0
[root@localhost ~]# /root/one_gig_hugepages.sh 1
[root@localhost ~]# dpdk-devbind -b vfio-pci 03:00.0 04:00.0
[root@localhost ~]# testpmd -l 0,1,2 -n 4 --socket-mem 1024,1024 --legacy-mem -- --burst=64 -i --rxd=512 --txd=512 --nb-cores=2 --txq=1 --rxq=1 --max-pkt-len=2000
EAL: Detected 3 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 1af4:1041 net_virtio
EAL: Error: Invalid memory
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 1af4:1041 net_virtio
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 1af4:1041 net_virtio
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Ethdev port_id=0 requested Rx offloads 0x800 doesn't match Rx offloads capabilities 0x21d in rte_eth_dev_configure()
Fail to configure port 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed
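
The key line is "requested Rx offloads 0x800 doesn't match Rx offloads capabilities 0x21d": 0x800 is DEV_RX_OFFLOAD_JUMBO_FRAME in DPDK 18.11, and the guest virtio PMD's advertised capability mask (0x21d) does not include that bit, so rte_eth_dev_configure() rejects the configuration. A minimal standalone sketch of that capability comparison (illustrative only, not DPDK code):

/* Illustrative only: reproduces the offload-capability check that fails
 * above, using the DPDK 18.11 bit value 0x800 for the jumbo-frame offload. */
#include <inttypes.h>
#include <stdio.h>

#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x800ULL

int main(void)
{
        uint64_t requested = DEV_RX_OFFLOAD_JUMBO_FRAME; /* testpmd with --max-pkt-len=2000 */
        uint64_t capa = 0x21d;                           /* guest virtio PMD, from the log */

        if ((requested & capa) != requested)
                printf("unsupported Rx offloads: 0x%" PRIx64 "\n", requested & ~capa);
        return 0;
}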


Expected results:
Testpmd runs successfully with "--max-pkt-len=2000" inside the guest.

Additional info:

Comment 1 Jens Freimann 2019-01-30 15:31:25 UTC
Sent a patch to the upstream dpdk-dev mailing list: "[PATCH] net/virtio: set offload flag for jumbo frames".
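
Based on the patch subject, the fix presumably makes the guest virtio PMD advertise jumbo-frame support in its reported Rx offload capabilities, so the check above passes. A hypothetical sketch of that kind of change against DPDK 18.11, not the literal upstream diff:

/* Hypothetical sketch only (drivers/net/virtio/virtio_ethdev.c area in
 * DPDK 18.11); not the literal upstream patch. */
static void
virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
        /* ... existing device info reporting ... */
        dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}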

Comment 16 errata-xmlrpc 2019-04-23 17:40:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0864

