Bug 1662177 - [Azure][DPDK]There are always too many packets in tx-drop queue in testpmd tx-side
Summary: [Azure][DPDK]There are always too many packets in tx-drop queue in testpmd tx-side
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: dpdk
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Mohammed Gamal
QA Contact: Yuxin Sun
URL:
Whiteboard:
Depends On:
Blocks: 1502856 1673430
 
Reported: 2018-12-26 18:47 UTC by Yuhui Jiang
Modified: 2020-05-28 15:25 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1673430
Environment:
Last Closed: 2020-03-17 10:31:20 UTC
Target Upstream Version:
Embargoed:



Description Yuhui Jiang 2018-12-26 18:47:11 UTC
Description of problem:
When testing DPDK on Azure with testpmd, a large fraction of the packets sent by the tx side end up in the TX-dropped counter.

Version-Release number of selected component (if applicable):
DPDK-18.11

How reproducible:
100%

Steps to Reproduce:
1. Prepare two VMs on Azure with DPDK installed.
2. On the first VM, start testpmd in txonly mode; on the second VM, start testpmd in rxonly mode.
   Set the first VM's --eth-peer MAC address to the second VM's MAC address, and the second VM's
   --eth-peer MAC address to the first VM's.
#TX-SIDE
#./testpmd -c 0xf -n 1 -w 0002:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- --port-topology=chained --nb-cores 1 --forward-mode=txonly --eth-peer=1,00:0d:3a:a1:4b:43 --stats-period 1
#RX-SIDE
#./testpmd -c 0xf -n 1 -w 0002:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- --port-topology=chained --nb-cores 1 --forward-mode=rxonly --eth-peer=1,00:0d:3a:a1:41:c7 --stats-period 1

3. Keep testpmd running for about 1 minute, then stop it.

4. Check the output on both VMs.
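
A minimal wrapper sketch for the tx side (assuming testpmd sits in the current directory and using the same placeholder PCI address and peer MAC as in step 2; timeout stands in for stopping testpmd by hand):

#!/bin/bash
# Run the tx-side testpmd for 60 seconds and print the final accumulated TX statistics line.
PCI_ADDR=0002:00:02.0          # VF PCI address on the tx VM (adjust per VM)
PEER_MAC=00:0d:3a:a1:4b:43     # MAC address of the rx VM (adjust per VM)
timeout 60 ./testpmd -c 0xf -n 1 -w "$PCI_ADDR" --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- \
  --port-topology=chained --nb-cores 1 --forward-mode=txonly \
  --eth-peer=1,"$PEER_MAC" --stats-period 1 | grep 'TX-packets' | tail -n 1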

Actual results:
tx-side
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 340381695      TX-dropped: 241822721     TX-total: 582204416
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
rx-side
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 339493162      RX-dropped: 0             RX-total: 339493162
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Expected results:
There should not be such a large fraction (40%-50%) of packets in the TX-dropped counter.
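
For reference, the tx-side figures above imply a drop fraction of roughly 41.5%; a quick check, assuming awk is available:

# TX-dropped / TX-total from the accumulated tx-side statistics above
awk 'BEGIN { printf "%.1f%%\n", 100 * 241822721 / 582204416 }'    # prints 41.5%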

Additional info:

Comment 8 Mohammed Gamal 2020-02-05 18:01:21 UTC
Opened a bug against upstream DPDK:
https://bugs.dpdk.org/show_bug.cgi?id=390

Comment 9 Mohammed Gamal 2020-02-11 14:10:16 UTC
Can you retest with the latest upstream DPDK, with both the vdev_netvsc and netvsc drivers?
Is there a way we can check whether the packet drop is caused by the network rather than the driver?

Comment 10 Yuxin Sun 2020-02-12 13:51:25 UTC
(In reply to Mohammed Gamal from comment #9)

Hi Mohammed,

I've tested with upstream dpdk-19.11, and the mlx4/5 PMD, the VDEV_NETVSC PMD, and the NETVSC PMD all show heavy TX drops. MSFT may need to take a look at whether there is a configuration issue or a network issue:

RHEL-7.8(3.10.0-1126.el7.x86_64)
dpdk: upstream dpdk-19.11
vm size: Standard_DS3_v2
VF card:
tx: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
rx: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
MLX4/5 PMD
tx:
# timeout 30 ./testpmd -c 0xf -n 1 -w b2d3:00:02.0 -- --port-topology=chained --nb-cores 1 --forward-mode=txonly --eth-peer=1,00:0d:3a:8b:29:32 --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 161304832      TX-dropped: 989533760     TX-total: 1150838592
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

rx:
# timeout 40 ./testpmd -c 0xf -n 1 -w 0f6d:00:02.0 -- --port-topology=chained --nb-cores 1 --forward-mode=rxonly --eth-peer=1,00:0d:3a:98:fe:df --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 159708201      RX-dropped: 0             RX-total: 159708201
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
VDEV_NETVSC PMD
tx:
# timeout 30 ./testpmd -c 0xf -n 1 -w b2d3:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- --port-topology=chained --nb-cores 1 --forward-mode=txonly --eth-peer=1,00:0d:3a:8b:29:32 --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 161547040      TX-dropped: 953327680     TX-total: 1114874720
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

rx:
# timeout 40 ./testpmd -c 0xf -n 1 -w 4dcd:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- --port-topology=chained --nb-cores 1 --forward-mode=rxonly --eth-peer=1,00:0d:3a:98:fe:df --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 159557769      RX-dropped: 0             RX-total: 159557769
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NETVSC PMD
setup (rebind the VMBus network device from the hv_netvsc kernel driver to uio_hv_generic so the netvsc PMD can attach to it; NET_UUID is the VMBus network device class GUID):
DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))
NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
modprobe uio_hv_generic
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind

tx:
# timeout 30 ./testpmd -c 0xf -n 1 -w b2d3:00:02.0 -- --port-topology=chained --nb-cores 1 --forward-mode=txonly --eth-peer=1,00:0d:3a:8b:29:32 --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 161200544      TX-dropped: 961872800     TX-total: 1123073344
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

rx:
# timeout 40 ./testpmd -c 0xf -n 1 -w 0f6d:00:02.0 -- --port-topology=chained --nb-cores 1 --forward-mode=rxonly --eth-peer=1,00:0d:3a:98:fe:df --stats-period 1
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 159177452      RX-dropped: 0             RX-total: 159177452
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
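
For comparison with the original report, these dpdk-19.11 runs show an even higher drop fraction; computed from the figures above (assuming awk is available):

# TX-dropped / TX-total for the three configurations above
awk 'BEGIN {
  printf "mlx4/5 PMD:      %.1f%%\n", 100 * 989533760 / 1150838592;
  printf "vdev_netvsc PMD: %.1f%%\n", 100 * 953327680 / 1114874720;
  printf "netvsc PMD:      %.1f%%\n", 100 * 961872800 / 1123073344;
}'
# prints roughly 86.0%, 85.5% and 85.6% respectively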

Comment 11 Mohammed Gamal 2020-02-12 18:20:11 UTC
(In reply to Yuxin Sun from comment #10)

I've tried the netvsc driver on a local Hyper-V setup and got similar results. I tried to find a way to do rate limiting, but the netvsc driver doesn't currently support DPDK rate limiting.

Comment 13 Mohammed Gamal 2020-03-16 19:05:05 UTC
Since this is an upstream (i.e. not Red Hat specific) issue and is more than 11 months old, I am considering closing it as WONTFIX. If there is no response from upstream this month, I would prefer to close it at the end of the month.

Comment 14 Mohammed Gamal 2020-03-17 10:31:20 UTC
Stephen Hemminger from Microsoft doesn't believe this is a bug; rather, testpmd has no flow control, so the sending side outpaces the driver's transmit capacity. The upstream bug was closed as WONTFIX.
Closing here as NOTABUG.
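
If a lower drop count is needed for this kind of measurement (rather than maximum offered load), one option, not verified here, is to throttle the txonly generator itself, e.g. by reducing testpmd's TX burst size from its default of 32:

#TX-SIDE, throttled variant of the command from the description (only --burst=4 added)
#./testpmd -c 0xf -n 1 -w 0002:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1,force=1" -- --port-topology=chained --nb-cores 1 --forward-mode=txonly --eth-peer=1,00:0d:3a:a1:4b:43 --stats-period 1 --burst=4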

