Bug 1345865 - Guest vhostuser mq doesn't work well with OVS-DPDK mq
Summary: Guest vhostuser mq doesn't work well with OVS-DPDK mq
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Victor Kaplansky
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-13 10:45 UTC by Pei Zhang
Modified: 2016-07-12 11:20 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Release Note
Doc Text:
1. Use irqbalance for distribution or set CPU affinity. 2. Keep the number of vCPUs matched to the number of queues; in that scenario, virtio-net will assign one vCPU to each queue.
Clone Of:
Environment:
Last Closed: 2016-07-04 00:30:16 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID: Red Hat Bugzilla 1286515
Private: 0
Priority: medium
Status: CLOSED
Summary: A queue does not generate interrupts inside guest when booting guest with mq + vhost-user
Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1286515

Description Pei Zhang 2016-06-13 10:45:37 UTC
Description of problem:
vhostuser mq=1, ovs-dpdk mq=4   works
vhostuser mq=1, ovs-dpdk mq=2   works
vhostuser mq=4, ovs-dpdk mq=4   does not work
vhostuser mq=2, ovs-dpdk mq=2   does not work

So it seems that when vhostuser mq > 1, the network in the guest degrades from full-duplex to simplex: only one port receives data and forwards it to the other port. The expected behavior is that both ports can receive and forward data.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.6.0-5.el7.x86_64
3.10.0-416.rt56.299.el7.x86_64
openvswitch-dpdk-2.5.0-4.el7.x86_64

How reproducible:
100%

Steps to Reproduce:

There are 2 hosts, Host1 and Host2, connected back-to-back.
1. Start MoonGen on Host1 to generate packets toward Host2
# chrt -f 95 ./build/MoonGen examples/l2-load-latency.lua 0 1 0.5

2. Start OVS-DPDK on Host2
# ovs-vsctl show 
fd1f756d-06ab-4fef-bdd0-8b10ad45a087
    Bridge "ovsbr0"
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    Bridge "ovsbr1"
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuser
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal

3. Set 4 queues for OVS-DPDK
# ovs-vsctl set Open_vSwitch . other_config={}
# ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x154
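
To confirm the options were stored, they can be read back with the same tool (nothing beyond the settings above is assumed):
# ovs-vsctl get Open_vSwitch . other_config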

4. Boot the guest on Host2, with 4 queues for each port
<vcpu placement='static'>5</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='10'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='14'/>
    <vcpupin vcpu='3' cpuset='16'/>
    <vcpupin vcpu='4' cpuset='18'/>
    <emulatorpin cpuset='1,3,5,7,9,11,13,15'/>
  </cputune>

...

<interface type='vhostuser'>
      <mac address='14:18:77:48:01:02'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='14:18:77:48:01:03'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
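
For reference, the qemu command line that libvirt generates for such an interface looks roughly like the fragment below (a sketch only, not copied from the actual domain; the id names are illustrative, and vectors = 2*queues + 2):

-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=net1,chardev=char1,queues=4 \
-device virtio-net-pci,netdev=net1,mac=14:18:77:48:01:02,mq=on,vectors=10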

Check that mq=4 is set in the guest:
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4

# ethtool -l eth2
Channel parameters for eth2:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4
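
If the current Combined value were lower than the pre-set maximum, it could be raised from inside the guest with the uppercase variant of the same command, for example:
# ethtool -L eth0 combined 4
Here it already matches the maximum, so no change is needed.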

5. In the guest, start testpmd with 4 RX queues.
# cat testpmd-mq.sh 
queues=4
cores=$queues
testpmd -l 0,1,2,3,4 -n 1 -d /usr/lib64/librte_pmd_virtio.so.1  \
-w 0000:00:02.0 -w 0000:00:06.0 \
-- \
--nb-cores=${cores} \
--disable-hw-vlan -i \
--disable-rss \
--rxq=${queues} --txq=${queues} \
--auto-start \
--rxd=256 --txd=256
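
Once testpmd is running, the core-to-queue assignment it actually chose can be inspected from its interactive prompt (a standard testpmd command, shown here as an optional check):

testpmd> show config fwd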


6. Only one port receives and forwards data.

testpmd> show port stats all 

  ######################## NIC statistics for port 0  ########################
  RX-packets: 111706     RX-missed: 0          RX-bytes:  6702360
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 111835     TX-errors: 0          TX-bytes:  6710100
  ############################################################################

MoonGen results:
...
[Device: id=1] Received 4667232 packets, current rate 0.00 Mpps, 0.00 MBit/s, 0.00 MBit/s wire rate.
[Device: id=1] Sent 4201536 packets, current rate 0.04 Mpps, 18.38 MBit/s, 24.13 MBit/s wire rate.
[Device: id=0] Received 0 packets, current rate 0.00 Mpps, 0.00 MBit/s, 0.00 MBit/s wire rate.
[Device: id=0] Sent 4742764 packets, current rate 0.04 Mpps, 18.42 MBit/s, 24.17 MBit/s wire rate.
[Device: id=1] Received 4667232 packets, current rate 0.00 Mpps, 0.00 MBit/s, 0.00 MBit/s wire rate.
[Device: id=1] Sent 4237440 packets, current rate 0.04 Mpps, 18.38 MBit/s, 24.13 MBit/s wire rate.
[Device: id=0] Received 0 packets, current rate 0.00 Mpps, 0.00 MBit/s, 0.00 MBit/s wire rate.
[Device: id=0] Sent 4778733 packets, current rate 0.04 Mpps, 18.42 MBit/s, 24.17 MBit/s wire rate.
[Device: id=1] Received 4667232 packets, current rate 0.00 Mpps, 0.00 MBit/s, 0.00 MBit/s wire rate.
^CSamples: 540835, Average: 179757.5 ns, StdDev: 5300.8 ns, Quartiles: 179398.4/180025.6/180678.4 ns


Actual results:
In the guest, only one port forwards packets.

Expected results:
In the guest, both ports should be able to forward packets, as MoonGen generates packets in a full-duplex way.

Additional info:

Comment 5 Flavio Leitner 2016-06-16 19:31:02 UTC
A few things to clarify:

* vhost-user mq doesn't split the traffic itself, because it sits between two devices that should have done that already; in this case, virtio-net and a dpdk port.

* To actually use multiple queues, you need multiple streams.  Keep in mind that since the distribution is a result of a hash, collisions might happen and two or more streams could get into the same queue.

* testpmd uses a very simple algorithm to distribute queues among all cores, so I don't recommend it for testing mq efficiency.  Basically, testpmd assigns one core to each queue of each port.  In this case, you need at least 5 vCPUs to drive forwarding from eth0 -> eth1 (1 master and 4 vCPUs/4 queues).  If you want to forward in the other direction as well, you need 4 more vCPUs, for a total of 9 vCPUs.
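
As an illustrative tally based on the rule above (not testpmd output): 2 ports x 4 queues = 8 forwarding cores, plus 1 master core, i.e. 9 vCPUs for bidirectional forwarding; the 5-vCPU setup in the description only covers the 4 queues of one port, which matches the one-direction traffic observed there.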

Comment 6 Pei Zhang 2016-06-17 09:06:30 UTC

(In reply to Flavio Leitner from comment #5)
> Few things to clarify:
...
> * testpmd uses a very simple algorithm to distribute queues among all cores
> so I don't recommend it for testing mq efficiency.Basically testpmd will
> assign one core for each queue of each port.  In this case, you need at
> least 5 vCPUs to drive forwarding from eth0 -> eth1 (1 master and 4 vCPUs/4
> queues).  If you want to forward in the other direction as well then you
> need more 4 vCPUs, total of 9 vCPUs.

Flavio, thank you for your detailed explanation; it's very helpful for QE. We have 2 questions for your help.

1. Could you recommend some tools to test vhostuser mq in DPDK environment?

2. The vhostuser mq setup is a little complicated for us, so I want to confirm this with you. When using vhostuser mq, are the values below right? Are they the values for best performance?

ovs-dpdk mq = n
vhostuser mq = n
vCPUs = 2n+1 (2 ports in guest)
-l = 2n+1 (in testpmd)
--rxq = n (in testpmd)
--nb-cores = 2n (in testpmd)


Additional info for this bug:
I re-tested and it works now, so maybe this is not a bug:
(1) Environment
Host: 4 queues in ovs-dpdk
Guest: 9 vCPUs, 4 queues per port, 9 vCPUs used in testpmd

# cat testpmd-mq.sh 
queues=4
cores=8
testpmd -l 0,1,2,3,4,5,6,7,8 -n 1 -d /usr/lib64/librte_pmd_virtio.so.1  \
-w 0000:00:02.0 -w 0000:00:06.0 \
-- \
--nb-cores=${cores} \
--disable-hw-vlan -i \
--disable-rss \
--rxq=${queues} --txq=${queues} \
--auto-start \
--rxd=256 --txd=256

(2) Result
testpmd> show port stats all 

  ######################## NIC statistics for port 0  ########################
  RX-packets: 8243927    RX-missed: 0          RX-bytes:  494635620
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 4963193    TX-errors: 0          TX-bytes:  297791580
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 4963194    RX-missed: 0          RX-bytes:  297791640
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 8243930    TX-errors: 0          TX-bytes:  494635800
  ############################################################################


-Pei

Comment 7 Flavio Leitner 2016-06-17 13:34:34 UTC
(In reply to Pei Zhang from comment #6)
> 1. Could you recommend some tools to test vhostuser mq in DPDK environment?

The MQ works for both the kernel and userspace datapaths inside the guest. If you want to try the kernel datapath, just enable MQ in the virtio-net device and use it as a real NIC (netperf, iperf, wget and other tools).  You should see ksoftirqd processing packets on the vCPUs.  You might need to set CPU affinity though, or use irqbalance for distribution.
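
For example, several parallel streams give the flow hash something to spread across the queues (the address below is only a placeholder for whatever server you test against):
# iperf -c 192.0.2.1 -P 4 -t 60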

Another catch is that virtio-net sets up this affinity if and only if the number of vCPUs matches the number of queues.  In that scenario, virtio-net assigns one vCPU to each queue.
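
If the counts do not match, the queue interrupts can also be pinned by hand; a rough sketch (IRQ names and numbers vary per guest, so check /proc/interrupts first):
# grep virtio /proc/interrupts            # find the IRQ numbers of the virtioN-input.M vectors
# echo 2 > /proc/irq/<IRQ>/smp_affinity   # pin that queue's IRQ to vCPU1 (hex mask 0x2); repeat per queue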

For the userspace datapath you can still use testpmd as long as you know its limitations; for instance, try 5 vCPUs for 4 queues, but only one direction.
Another alternative is OVS-DPDK in the guest, which lets you choose how many PMDs you want, but it is a bit heavier for packet processing than testpmd.

> 2. The vhostuser mq is a little complicated for us,  so I want to confirm
> this with you.  When using vhostuser mq, is below value right? Are they the
> values for best performance? 
> 
> ovs-dpdk mq = n
> vhostuser mq = n

The above follows the whole idea of MQ, which is basically having different CPUs processing streams in parallel.  So, if you have 4 queues, it makes sense to have 4 different CPUs.

> vCPUs = 2n+1 (2 ports in guest)
> -l = 2n+1 (in testpmd)
> --rxq = n (in testpmd)
> --nb-cores = 2n(in testpmd)

Looks good.
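
(Plugging in n = 4 as a check: 9 vCPUs, testpmd -l 0-8, --rxq=4 and --nb-cores=8, which is exactly the configuration used in the successful re-test above.)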

> Addition info for this bug:
> I re-test and it works now, so maybe this bug is not a bug:

Yup. Sounds like it is working fine.

Comment 8 Pei Zhang 2016-07-04 00:30:16 UTC
Closed this bug as 'NOTABUG' according to Comment 6 and Comment 7.

