Bug 1223426

Summary: [NetKVM] Performance degradation with multi-queue
Product: Red Hat Enterprise Linux 7
Component: virtio-win
Sub component: virtio-win-prewhql
Version: 7.3
Hardware: Unspecified
OS: Windows
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Target Milestone: rc
Target Release: ---
Reporter: Yvugenfi <yvugenfi>
Assignee: Yvugenfi <yvugenfi>
QA Contact: Virtualization Bugs <virt-bugs>
CC: lmiksik, michen, vrozenfe, wquan, wyu, yvugenfi
Doc Type: Bug Fix
Doc Text: NO_DOCS
Type: Bug
Last Closed: 2016-11-04 08:46:28 UTC

Description Yvugenfi@redhat.com 2015-05-20 13:34:51 UTC
Description of problem:

Reported for upstream code by Google:

-->
We pulled the latest netkvm changes last week and the performance dropped from around 7 Gbps to 2.8 Gbps. It is a huge drop. Just wanted to let you know... we will probably revert the February changes if we can't figure out what is causing this.
-->

On our side, perf was still good at 592fc9e7c428237cebd34baae5231aea5bcc96d8, and it dropped at 630ae942aaa7b7a3f3a76bc0a46041f2873e3893. Given that perf stays normal with vhost+mq on your side, I will also look at our device side, which is a virtio-net-compatible implementation, to cross-check any possible cause. (The perf degradation was measured with the exact same device code, though.)

-->


Comment 1 Yu Wang 2016-03-23 08:57:02 UTC
Hi Yan,


Could you tell me the detailed steps and the test tool for reproducing this bug? And which virtio-win-prewhql version causes the performance degradation?


Thanks
wyu

Comment 2 Yvugenfi@redhat.com 2016-03-23 09:15:05 UTC
Hi Wyu,

You can use netperf or iperf to test the performance. Run the tools with message sizes of 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536 bytes.
The NetKVM code even has a Tests directory with a wrapper for netperf, but you don't have to use it; you can use your own script if that is more convenient (see the sketch after the link below):

https://github.com/YanVugenfirer/kvm-guest-drivers-windows/blob/master/NetKVM/Tests/netperf_wrapper.rb
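
For example, a minimal wrapper could look like this. This is only a sketch: the host address and test duration are placeholders you need to adjust, it assumes netserver is already running on the peer, and the Ruby script above is the reference implementation.

#!/usr/bin/env python3
# Sketch: sweep netperf over the message sizes listed above.
# HOST and DURATION are placeholders; netserver must already be
# running on the peer machine.
import subprocess

HOST = "192.168.0.2"   # placeholder: address of the netserver peer
DURATION = 30          # seconds per run

for size in [32, 64, 128, 256, 512, 1024, 2048, 4096,
             8192, 16384, 32768, 65536]:
    # -t TCP_STREAM runs a bulk-throughput test; the test-specific
    # -m option sets the send message size in bytes.
    out = subprocess.run(
        ["netperf", "-H", HOST, "-l", str(DURATION),
         "-t", "TCP_STREAM", "--", "-m", str(size)],
        capture_output=True, text=True, check=True)
    print("--- message size %d ---" % size)
    print(out.stdout)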


Builds to compare: the latest released driver, https://brewweb.devel.redhat.com/buildinfo?buildID=467728 (we still don't have MQ in the release), against the latest virtio-win-prewhql driver.


Best regards,
Yan.

Comment 7 Yu Wang 2016-05-17 02:16:58 UTC
Hi Yan,

Comparing the two versions in comment#3, there is no obvious degradation or improvement with multiqueue. Is that OK? Or should the performance increase with multiqueue?

Thanks
wyu

Comment 8 Yvugenfi@redhat.com 2016-05-22 09:03:29 UTC
(In reply to wangyu from comment #7)
> Hi Yan,
> 
> Comparing the two versions in comment#3, there is no obvious degradation
> or improvement with multiqueue. Is that OK? Or should the performance
> increase with multiqueue?
> 
> Thanks
> wyu

Hi Wyu,

With multi-queue, we should see some performance improvement when using several parallel streams; with a single stream, throughput should be about the same.
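
For example, with iperf3 you can compare one stream against several parallel streams; a sketch only, assuming "iperf3 -s" is running on the peer, whose address here is a placeholder:

#!/usr/bin/env python3
# Sketch: compare 1 vs 4 parallel streams with iperf3. Assumes
# "iperf3 -s" is running on the peer; HOST is a placeholder.
import subprocess

HOST = "192.168.0.2"  # placeholder: iperf3 server address

for streams in (1, 4):
    # -P sets the number of parallel client streams; with multi-queue
    # working, the 4-stream run should outperform the 1-stream run.
    print("=== %d stream(s) ===" % streams)
    subprocess.run(["iperf3", "-c", HOST, "-P", str(streams), "-t", "30"],
                   check=True)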

Best regards,
Yan.

Comment 9 Yu Wang 2016-05-27 03:21:20 UTC
Changing status to VERIFIED according to comment#3 and comment#8.

Comment 12 errata-xmlrpc 2016-11-04 08:46:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2609.html