Bug 706875 - 1G performance was measured when using 10G physical link with virtio-win
Summary: 1G performance was measured when using 10G physical link with virtio-win
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: virtio-win
Version: 5.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Yvugenfi@redhat.com
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-23 10:35 UTC by Quan Wenli
Modified: 2013-01-09 23:54 UTC
6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-02-01 11:32:07 UTC
Target Upstream Version:


Attachments
VM reports 1G on 10G public bridge (66.07 KB, image/png)
2011-05-23 10:35 UTC, Quan Wenli

Description Quan Wenli 2011-05-23 10:35:21 UTC
Created attachment 500390 [details]
VM reports 1G on 10G public bridge

Description of problem:

A Windows VM running on a 10G public bridge reports a 1G link speed, and its throughput stays below 1G.

The following are throughput results for the e1000 and virtio NIC drivers on the 10G public bridge, from the guest to an external guest.

         virtio throughput (avg of 3 runs)   e1000 throughput (avg of 3 runs)
2K-S     92.44                        94.25
4K-S     176.31                       184.28              
8K-S     276.33                       339.14
16K-S    364.87                       601.87                    
32K-S    562.73                       858.64
64K-S    681.19                       917.75
128K-S   855.69                       1163.76                         
256K-S   932.39                       1176.07

2K-R     79.84                        77.98
4K-R     177.03                       184.28
8K-R     277.01                       339.57
16K-R    365.3                        601.02
32K-R    563.33                       859.34
64K-R    681.39                       918.61
128K-R   875.12                       1163.63
256K-R   941.53                       1178.32


Version-Release number of selected component (if applicable):

virtio-win-1.0.1-3.52454.el5
kernel-2.6.18-259.el5
kvm-83-232.el5

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest on the host (the /etc/qemu-ifup script used here attaches the tap device to the bridge; a sketch follows these steps):
/usr/libexec/qemu-kvm  -name 'vm1'  -drive file=/root/windows-2k8-r2-sp1-raw-ide,index=0,if=virtio,boot=on,media=disk,cache=none,format=raw -net nic,vlan=0,model=virtio,macaddr='9a:3b:dd:52:d9:d7' -net tap,vlan=0,script=/etc/qemu-ifup -m 4096 -smp 2,cores=1,threads=1,sockets=2  -cpu qemu64,+sse2   -vnc :0 -rtc-td-hack  -boot c  -usbdevice tablet -no-kvm-pit-reinjection
2. Boot the guest on the external host:
/usr/libexec/qemu-kvm  -name 'vm1'  -drive file=/root/windows-2k8-r2-sp1-raw-ide,index=0,if=virtio,boot=on,media=disk,cache=none,format=raw -net nic,vlan=0,model=virtio,macaddr='9a:3b:dd:52:d9:d8' -net tap,vlan=0,script=/etc/qemu-ifup -m 4096 -smp 2,cores=1,threads=1,sockets=2  -cpu qemu64,+sse2   -vnc :0 -rtc-td-hack  -boot c  -usbdevice tablet -no-kvm-pit-reinjection
3. Run the NTttcp server (receiver) command on the external guest, from c:\Program Files (x86)\Microsoft Corporation\NT Testing TCP\Tool\:
for %i in (256k,256k,256k,256k,256k,256k,256k,256k) do NTttcpr.exe -m 1,0,192.168.0.12 -a 6 -rb %i >>received.1.txt

4. Run the NTttcp client (sender) command on the guest, from the same directory:
for %i in (2k,4k,8k,16k,32k,64k,128k,256k) do NTttcps.exe -m 1,0,192.168.0.12 -a 2 -l %i >>send.1.txt
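
For reference, the -net tap,vlan=0,script=/etc/qemu-ifup option relies on a host-side helper script that is not attached to this bug. A minimal sketch of such a script, assuming the bridge name breth0 from the brctl output in the additional info (an illustration only, not necessarily the script used on these hosts):

#!/bin/sh
# hypothetical /etc/qemu-ifup sketch -- not taken from the test hosts
# qemu-kvm calls the script with the tap device name as $1; it brings
# the tap up and attaches it to the 10G public bridge.
switch=breth0
/sbin/ifconfig "$1" 0.0.0.0 up
/usr/sbin/brctl addif "$switch" "$1"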

  
Actual results:

The guest's virtio NIC reports a 1G link and throughput stays at or below roughly 1G (see the table above), even though the physical link is 10G.

Expected results:

Throughput over the 10G public bridge should not be capped at around 1G.
Additional info:

#ethtool -k eth2
Offload parameters for eth2:
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: on

#ethtool -k breth0
Offload parameters for breth0:
Cannot get device rx csum settings: Operation not supported
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: off
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: off
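
Note that tcp segmentation offload is off on the physical eth2, while the bridge device has rx checksumming and GRO off. If offload settings need to be changed while investigating this, they can be toggled with ethtool -K; a sketch, using the device names from the output above (which flags, if any, should actually be changed is not established by this report):

# example only: enable TSO on the ixgbe uplink, then re-check the settings
ethtool -K eth2 tso on
ethtool -k eth2
# GRO on the bridge could be toggled the same way, if the device supports it
ethtool -K breth0 gro on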

#ethtool -i eth2
driver: ixgbe
version: 3.2.9-k2
firmware-version: 0.9-3
bus-info: 0000:0f:00.

#brctl show
bridge name     bridge id           STP enabled     interfaces
breth0          8000.001b218eb2b8   no              tap0
                                                    eth2
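
For completeness, a bridge like breth0 above could be set up along these lines (a sketch only; the test hosts already had it configured, and tap0 is attached by /etc/qemu-ifup when qemu-kvm starts, not by hand):

# create the bridge and enslave the 10G uplink
brctl addbr breth0
brctl addif breth0 eth2
ifconfig eth2 0.0.0.0 up
ifconfig breth0 up
# any host IP address on eth2 would normally be moved to breth0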

Comment 12 Quan Wenli 2011-06-09 09:24:38 UTC
I ran the same test in two scenarios: guest to guest (on the same host) and guest to external guest. In both scenarios, throughput goes over 1Gb with a 256K message size.
After re-checking the raw data for rhel6.1, I have to revise my rhel6.1 results from comment 2: that table shows the total throughput over 3 iterations. I have averaged it in the "comparison" section below.
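
(The averaging is simply total divided by 3 per message size; assuming the three per-iteration throughput values for one message size sit in a plain text file, one value per line, something like the following reproduces it -- the file name is hypothetical:)

awk '{ sum += $1 } END { printf "%.2f\n", sum / NR }' raw-256k-iterations.txt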

version:

kernel-2.6.18-262.el5 
kvm-83-235.el5 
win2k8 r2 sp1 guest with virtio-win-prewhql-0.1-10 

comparison:

message | rhel5.7 host   | rhel5.7 host     |  rhel6.1 host    |
size    | guest -> guest | guest -> exguest |  guest ->exguest |
--------+----------------+------------------+ -----------------+
2K-S       98.59              91.42              96.17
4K-S      192.65             173.91             174.69
8K-S      309.29             268.44             324.66
16K-S     420.84             354.78             517.69     
32K-S     624.91             557.79             766.56
64K-S     767.74             694.31             1023.9
128K-S    916.59             979.03             1284.5    
256K-S   1066.32            1098.36             1396.95

2K-R       94.75              70.69             85.34
4K-R      191.23              172.5             174.95
8K-R      307.4              267.13             324.65
16K-R     419.84             353.67             517.69
32K-R     624.56             555.57             767.1
64K-R     767.51             692.55             1023.93
128K-R    916.1              976.26             1283.75
256K-R   1064.23             1095.73            1395.78

Comment 13 Yvugenfi@redhat.com 2011-06-14 11:28:31 UTC
Two additional questions:
Do you have the results of "guest->guest" with rhel6.1 host?
And do you have results for "host->host2 (exguest host)"?

Comment 14 Quan Wenli 2011-06-15 06:15:33 UTC
(In reply to comment #13)
> Two additional questions:
> Do you have the results of "guest->guest" with rhel6.1 host?
> And do you have results for "host->host2 (exguest host)"?

No, we don't have those results.

Comment 15 RHEL Program Management 2011-09-23 00:35:59 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.

Comment 16 Ronen Hod 2012-02-01 11:32:07 UTC
Closing this RHEL5 bug.
We are focusing on network performance for RHEL6.3 and will be testing its performance soon.

