Bug 1486221 - Larger UDP MTU is not working
Summary: Larger UDP MTU is not working
Keywords:
Status: CLOSED DUPLICATE of bug 1497963
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: Kevin Traynor
QA Contact: Christian Trautman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-29 09:02 UTC by Pradipta Kumar Sahoo
Modified: 2020-12-14 09:45 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-10 15:42:27 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System            ID       Private  Priority  Status  Summary  Last Updated
Red Hat Bugzilla  1475576  1        None      None    None     2023-05-15 11:30:38 UTC

Description Pradipta Kumar Sahoo 2017-08-29 09:02:10 UTC
Description of problem:
Larger UDP MTU is not working in ovs_user_bridge

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 10, Director-deployed DPDK environment

How reproducible:
In the customer's production OSP10 environment, each compute node is configured with 4 bond interfaces (1 Linux bridge used for the internal OpenStack network; the remaining 3 are ovs_user_bridge, i.e. DPDK).
In OpenStack Neutron, a network was configured on one of these ovs_user_bridge instances (br-sig) and UDP traffic is pumped from a VM.
The customer noticed that UDP packets (5000+ bytes) are dropped, with packet fragmentation, on compute-07, on the ovs_user_bridge that uses an Intel X710 uplink port.
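
For reference, a minimal way to drive such a test is sketched below. This is not from the original report: iperf3 availability inside the VMs and the receiver address 10.0.0.2 are assumptions.

    # On a receiver VM on the same Neutron network (10.0.0.2 is a placeholder):
    iperf3 -s

    # On the sender VM: a UDP stream of 5000-byte datagrams, which must be
    # fragmented at the default 1500-byte MTU:
    iperf3 -c 10.0.0.2 -u -l 5000 -b 100M

    # A single oversized probe works too (large ICMP echo, also fragmented):
    ping -c 10 -s 5000 10.0.0.2

Drops can then be observed on the dpdk1 counters shown in item 4 below.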


Steps to Reproduce:

A previous BZ (1475576) is already open for the same customer, so we suspect the issue is related to the DPDK driver for the X710 card (the i40e module).
Please help us understand whether all of the compatible driver libraries are included in the DPDK 16.11 build shipped in RHOSP 10; it appears they have not all been included yet.
From the comments in the previous BZ (1475576), the MTU issue appears to have been resolved in openvswitch 2.6 + DPDK 16.11. Please let me know whether any additional information is still required.

Reference BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1475576
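
For cross-checking, the versions actually in play can be confirmed as sketched below. One point worth verifying (an assumption, not from the original report): the openvswitch 2.6.1 FDP package statically links its own DPDK build, so the separately installed dpdk-2.2.0 package does not determine which DPDK ovs-vswitchd uses.

    # Installed package versions
    rpm -q openvswitch dpdk dpdk-tools

    # The openvswitch package changelog usually records the DPDK rebase
    # the build was linked against
    rpm -q --changelog openvswitch | grep -i dpdk | head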

Please find detailed OVS-DPDK port and PCI information below.
1. DPDK packages details in compute node.
    compute-7.localdomain]$ egrep "dpdk|openvswitch" installed-rpms
    dpdk-2.2.0-3.el7.x86_64                                     Thu Jun 15 18:53:47 2017
    dpdk-tools-2.2.0-3.el7.x86_64                               Fri Aug 25 21:58:47 2017
    openstack-neutron-openvswitch-9.3.1-2.el7ost.noarch         Thu Jun 15 18:53:43 2017
    openvswitch-2.6.1-10.git20161206.el7fdp.x86_64              Thu Jun 15 18:36:40 2017
    python-openvswitch-2.6.1-10.git20161206.el7fdp.noarch       Thu Jun 15 18:16:49 2017
2. dpdk_nic_bind -s output. The X710 ports used by DPDK are bound to vfio-pci (i40e is the matching kernel module).
    Network devices using DPDK-compatible driver
    ============================================
    0000:0b:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
    0000:0b:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
    0000:84:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
    0000:84:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
    0000:88:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
    0000:88:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e

    Network devices using kernel driver
    ===================================
    0000:02:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno1 drv=tg3 unused=vfio-pci *Active*
    0000:02:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno2 drv=tg3 unused=vfio-pci
    0000:02:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno3 drv=tg3 unused=vfio-pci
    0000:02:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno4 drv=tg3 unused=vfio-pci
    0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens1f0 drv=i40e unused=vfio-pci
    0000:05:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens1f1 drv=i40e unused=vfio-pci

    Other network devices
    =====================
    <none>
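
For completeness, ports are moved between the kernel driver and the DPDK-compatible driver with the same tool; a sketch using one of the PCI addresses listed above:

    # Bind a port to vfio-pci for DPDK use
    dpdk_nic_bind --bind=vfio-pci 0000:0b:00.0

    # Return it to the kernel i40e driver
    dpdk_nic_bind --bind=i40e 0000:0b:00.0

    # Re-check the bindings
    dpdk_nic_bind --status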

3. Interface details:
    sos_commands/networking/ethtool_-i_ens1f0
    driver: i40e
    version: 1.5.10-k
    firmware-version: 5.60 0x80002dab 1.1618.0
    expansion-rom-version:
    bus-info: 0000:05:00.0
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: yes
    supports-register-dump: yes
    supports-priv-flags: yes

    sos_commands/networking/ethtool_-i_ens1f1
    driver: i40e
    version: 1.5.10-k
    firmware-version: 5.60 0x80002dab 1.1618.0
    expansion-rom-version:
    bus-info: 0000:05:00.1
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: yes
    supports-register-dump: yes
    supports-priv-flags: yes

4. The dpdk1 port dropped ~6918 packets on receive. (Note: errs=18446744073709544698 below is -6918 wrapped to an unsigned 64-bit value, consistent with rx_errors=-6918 in the statistics further down.)
    #sos_commands/openvswitch/ovs-ofctl_dump-ports_br-sig
        OFPST_PORT reply (xid=0x2): 4 ports
          port LOCAL: rx pkts=17, bytes=1310, drop=0, errs=0, frame=0, over=0, crc=0
               tx pkts=2015621, bytes=124768279, drop=0, errs=0, coll=0
          port  1: rx pkts=12601933, bytes=2752634726, drop=6918, errs=18446744073709544698, frame=?, over=?, crc=?
               tx pkts=9846457, bytes=1220948889, drop=0, errs=0, coll=?

    #ovs-ofctl dump-ports-desc  br-sig
        OFPST_PORT_DESC reply (xid=0x2):
         1(dpdk1): addr:14:02:ec:75:64:38
             config:     0
             state:      0
             current:    10GB-FD AUTO_NEG
             speed: 10000 Mbps now, 0 Mbps max

# ovs-vsctl get Interface dpdk1 statistics
        {"rx_1024_to_1518_packets"=944680, "rx_128_to_255_packets"=1005995, "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=2138882, "rx_256_to_511_packets"=1786750, "rx_512_to_1023_packets"=108467, "rx_65_to_127_packets"=8085744, rx_broadcast_packets=2060979, rx_bytes=2883964698, rx_dropped=6918, rx_errors=-6918, rx_fragmented_errors=0, rx_jabber_errors=0, rx_packets=13922025, rx_undersized_errors=0, "tx_1024_to_1518_packets"=15107, "tx_128_to_255_packets"=1933275, "tx_1523_to_max_packets"=0, "tx_1_to_64_packets"=602413, "tx_256_to_511_packets"=1649247, "tx_512_to_1023_packets"=13139, "tx_65_to_127_packets"=7059080, tx_broadcast_packets=283073, tx_bytes=1397672220, tx_dropped=0, tx_errors=0, tx_multicast_packets=27496, tx_packets=11248564}

5. To access the sosreport, follow the steps below:
    # ssh your_kerb.redhat.com
    # cd /cases/01919306


Expected results:
Expecting 0 UDP packet drops irrespective of packet sizes above 1400 bytes.

Additional info:
As a workaround, the customer tried setting the MTU to 7K (an arbitrarily chosen value) on the dpdk and vhost ports. With that setting, the customer sees no UDP packet drops even at a 9K packet size. But we need to understand whether this workaround is the correct long-term fix.
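
For reference, with openvswitch 2.6 + DPDK 16.11 a larger per-port MTU on DPDK ports is requested through the mtu_request column, so the workaround likely amounts to the sketch below (the vhost-user port name is a placeholder; real ones look like vhu<uuid-prefix>):

    # Request a 7000-byte MTU on the DPDK uplink and the vhost-user port
    ovs-vsctl set Interface dpdk1 mtu_request=7000
    ovs-vsctl set Interface vhu0123abcd-ef mtu_request=7000   # placeholder name

    # Confirm the MTU actually applied
    ovs-vsctl get Interface dpdk1 mtu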


Regards,
Pradipta

Comment 12 Andreas Karis 2017-10-10 15:42:27 UTC
dup of 1497963

*** This bug has been marked as a duplicate of bug 1497963 ***

