Bug 1309826 - Throughput of 4 concurrent netperf/TCP_STREAM test streams from a 4-queue vhost-net guest to the host degraded during 24+ hours of testing
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Flavio Leitner
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-18 18:58 UTC by Jean-Tsung Hsiao
Modified: 2016-06-13 01:20 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-13 01:20:34 UTC
Target Upstream Version:
Embargoed:



Description Jean-Tsung Hsiao 2016-02-18 18:58:17 UTC
Description of problem: Throughput of 4 concurrent netperf/TCP_STREAM test streams from a 4-queue vhost-net guest to the host degraded during 24+ hours of testing.

Initially, the throughput was above 50 Gb/s, but it started to degrade to below 40 Gb/s around the 12-hour mark; eventually it dropped to just above 30 Gb/s.

Version-Release number of selected component (if applicable):

Linux netqe5.knqe.lab.eng.bos.redhat.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

libvirt-1.2.17-13.el7_2.2.x86_64
 
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7_2.x86_64.rpm

How reproducible: reproducible


Steps to Reproduce:
1. Configure an ordinary OVS bridge, ovsbr0, on the host (example commands in the sketch after this list).

2. Configure a vhost-net guest with four vCPUs and four queues:
 
 <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='3'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <vcpupin vcpu='3' cpuset='9'/>
  </cputune>

    <interface type='bridge'>
      <mac address='52:54:00:b7:44:50'/>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='dff11f03-5ae5-4c61-9e52-bb8df9950d5d'/>
      </virtualport>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
 
3. Start the guest.
4. Add an internal port, int0, to ovsbr0.
5. Assign IP addresses to eth0 in the guest and to int0 on the host.
6. Run 4 concurrent netperf/TCP_STREAM test streams from eth0 to int0.
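For reference, here is a minimal sketch of the host- and guest-side setup for steps 1 and 4-6. The bridge and port names come from the steps above; the host-side int0 address matches the netperf target in the script below, while the guest address 172.16.3.100 is a hypothetical example:

# Host: create the OVS bridge (step 1) and the internal port (step 4).
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 int0 -- set Interface int0 type=internal
ip link set int0 up
ip addr add 172.16.3.105/24 dev int0   # netperf target used in the script

# Host: the netperf server must be listening before the streams start.
netserver

# Guest: enable all four queues on eth0 and assign an address on the
# same subnet (172.16.3.100 is a made-up example).
ethtool -L eth0 combined 4
ip addr add 172.16.3.100/24 dev eth0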

Below is a script that runs for about 16 hours (192 iterations of 300-second netperf runs):

[root@localhost jhsiao]# cat run_netperf_tcp_mq_4_taskset.sh
# 192 iterations of 300-second runs, roughly 16 hours in total.
for i in {1..192}
do
    MPSTAT=/tmp/mpstat."$i"
    LOG=netperf_tcp_T."$i"
    # Sample per-CPU utilization on both the guest and the host
    # (192.168.122.1) while the streams run; $MPSTAT expands locally,
    # so both sides write to the same path.
    (sleep 15; mpstat -P ALL 3 90 > "$MPSTAT") &
    ssh 192.168.122.1 "sleep 15; mpstat -P ALL 3 90 > $MPSTAT" &
    echo "Test $i"
    # Four concurrent TCP_STREAM tests, one pinned to each guest vCPU.
    taskset -c 0 netperf -H 172.16.3.105 -l 300 > "$LOG".0 2>&1 &
    taskset -c 1 netperf -H 172.16.3.105 -l 300 > "$LOG".1 2>&1 &
    taskset -c 2 netperf -H 172.16.3.105 -l 300 > "$LOG".2 2>&1 &
    taskset -c 3 netperf -H 172.16.3.105 -l 300 > "$LOG".3 2>&1 &
    sleep 15
    # Take a one-second call-graph profile of each CPU mid-run.
    perf record -g -o screenshot-0."$i" -C 0 sleep 1
    perf record -g -o screenshot-1."$i" -C 1 sleep 1
    perf record -g -o screenshot-2."$i" -C 2 sleep 1
    perf record -g -o screenshot-3."$i" -C 3 sleep 1
    wait
done
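Each recorded profile can be inspected afterwards with perf report, e.g. for the CPU 0 sample of iteration 1:

perf report -i screenshot-0.1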


Actual results:
The aggregate throughput dropped from above 50 Gb/s to below 40 Gb/s in about 12 hours.

Expected results:
No degradation.

Additional info:

Comment 2 Jean-Tsung Hsiao 2016-02-19 13:55:56 UTC
Correction: the openvswitch package used is:

openvswitch-2.4.0-1.el7.x86_64

Comment 3 Flavio Leitner 2016-05-26 19:28:48 UTC
Jean,
Could you see if 2.5 still has this issue?

Thanks,
fbl

Comment 4 Jean-Tsung Hsiao 2016-05-26 20:29:39 UTC
(In reply to Flavio Leitner from comment #3)
> Jean,
> Could you see if 2.5 still has this issue?
> 
> Thanks,
> fbl

Are you referring to the -4 version?

http://download.eng.bos.redhat.com/brewroot/packages/openvswitch-dpdk/2.5.0/4.el7/x86_64/openvswitch-dpdk-2.5.0-4.el7.x86_64.rpm

Comment 5 Jean-Tsung Hsiao 2016-05-26 20:38:36 UTC
(In reply to Flavio Leitner from comment #3)
> Jean,
> Could you see if 2.5 still has this issue?
> 
> Thanks,
> fbl

I'll work on it.

Comment 6 Jean-Tsung Hsiao 2016-05-27 20:27:59 UTC
(In reply to Flavio Leitner from comment #3)
> Jean,
> Could you see if 2.5 still has this issue?
> 
> Thanks,
> fbl

Hi Flavio,

I repeated the original test, and the resulting data confirms that 2.5 doesn't have this issue.

Please see below.

Thanks!

Jean

min/avg/max of the 192 aggregate throughputs (Mb/s): 44189.1 / 57587.3 / 61627.4
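For reference, a sketch of how such a summary can be derived from the per-run logs, assuming netperf's default TCP_STREAM output, where the throughput (in 10^6 bits/s) is the last field of the last line of each log:

for i in {1..192}; do
    # Aggregate the four per-stream results of iteration $i.
    tail -qn1 netperf_tcp_T."$i".[0-3] | awk '{ sum += $NF } END { print sum }'
done | awk 'NR == 1 { min = max = $1 }
            { tot += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
            END { printf "min/avg/max: %.1f / %.1f / %.1f\n", min, tot / NR, max }'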
 
Related software:

openvswitch-2.5.0-3.el7.x86_64

Linux netqe6.knqe.lab.eng.bos.redhat.com 3.10.0-327.13.1.el7.x86_64 #1 SMP Mon Feb 29 13:22:02 EST 2016 x86_64 x86_64 x86_64 GNU/Linux

