Bug 698541

Summary: vhost-net/kvm need optimizations for UDP RX from guest to host.
Product: Red Hat Enterprise Linux 7
Component: kernel
Version: 7.0
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Target Milestone: rc
Target Release: 7.0
Hardware: Unspecified
OS: Unspecified
Reporter: Quan Wenli <wquan>
Assignee: Vlad Yasevich <vyasevic>
QA Contact: Virtualization Bugs <virt-bugs>
CC: ailan, amit.shah, juzhang, knoel, mkenneth, virt-bugs, virt-maint, vyasevic, wquan
Doc Type: Bug Fix
Last Closed: 2016-06-22 18:00:22 UTC
Bug Blocks: 1113511
Attachments:
 - rhel6.1-snp1-vhost-net vs virtio-net megabyte/cpu
 - UDP results with 3.10.0-138.el7.x86_64/qemu-kvm-1.5.3-66.el7.x86_64

Description Quan Wenli 2011-04-21 07:54:28 UTC
Created attachment 493728 [details]
rhel6.1-snp1-vhost-net vs virtio-net megabyte/cpu

Description of problem:

1> When the application does not do flow control, megabytes/CPU shows a regression (drop) for all UDP RX from guest to host. For details, see the attachment.
2> When the application does do flow control, for example by rate-limiting the stream to keep packet drops within 10% and 30%, vhost-net shows no performance improvement over virtio-net for UDP RX from guest to host.

Lost packet rate = 10% 
vhost-net 
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
124928    1460   60.00      468000      0      91.10
124928           60.00      420001             81.76

virtio-net 
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
124928    1460   60.00      468000      0      91.10
124928           60.00      420003             81.76

Lost packet rate = 30% 

vhost-net
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

124928    1460   60.00      600000      0     116.80
124928           60.00      420122             81.78

virtio-net

Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

124928    1460   60.00      600000      0     116.80
124928           60.00      420029             81.77

Version-Release number of selected component (if applicable):

host:
kernel-2.6.32-125.el6.x86_64
qemu-kvm-0.12.1.2-2.152.el6.x86_64
guest:
kernel-2.6.32-125.el6.x86_64

How reproducible:


Steps to Reproduce:
1. Steps for description 1:
Run netserver on the guest.
Run netperf on the host, for example: mpstat -P ALL 1 &>${CPU_FILE} & ssh $client $client_path -C -c -H $server -l 30 -t UDP_STREAM -- -m $packet_size && kill -9 `pgrep mpstat`
Calculate megabytes/CPU for each packet size (see the sketch below).
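
A minimal sketch of that loop, assuming a local netperf client, a guest at a placeholder address, and that the sender-side throughput sits on line 6 of the netperf report as in the tables above (offsets vary by netperf version):

    #!/bin/bash
    # Sketch only; SERVER, the packet-size list, and the awk offsets are
    # assumptions, not values taken from this bug.
    SERVER=192.168.0.13
    CPU_FILE=/tmp/mpstat.log
    for packet_size in 64 512 1460 4096 16384 65507; do
        mpstat -P ALL 1 > "$CPU_FILE" 2>&1 &
        tput=$(netperf -H "$SERVER" -l 30 -t UDP_STREAM -- -m "$packet_size" \
               | awk 'NR==6 {print $6}')        # sender throughput, 10^6bits/sec
        kill -9 "$(pgrep mpstat)"
        # average busy CPU = 100 - %idle over the run (mpstat "all" rows)
        cpu=$(awk '/all/ {busy += 100 - $NF; n++} END {if (n) print busy/n}' "$CPU_FILE")
        # megabytes per unit of CPU: (Mbit/s / 8) / %CPU
        awk -v t="$tput" -v c="$cpu" -v s="$packet_size" \
            'BEGIN {printf "%s bytes: %.2f MB per %%CPU\n", s, (t/8)/c}'
    done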

2. Steps for description 2:
Rate-limit the sender and find the send rate at which a predefined loss ratio is reached (e.g. keep the lost packet rate within 10% and 30%).
Build netperf with "--enable-intervals=yes"; with intervals enabled, -w sets the wait time between bursts and -b the number of sends per burst. A sketch of the rate sweep follows the commands below.
Lost packet rate = 10%
vhost-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460
virtio-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460

Lost packet rate = 30% 

vhost-net:/root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460 
virtio-net:/root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460  
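
A hedged sketch of the burst-size sweep, assuming the sent/received message counts sit on lines 6 and 7 of the netperf report as in the tables above (the guest address, step size, and target are placeholders):

    #!/bin/bash
    # Sketch only; parsing offsets depend on the netperf version.
    SERVER=192.168.0.13
    TARGET=10                                   # target loss percentage
    for burst in $(seq 10 2 200); do
        out=$(/root/tool/netperf-2.4.5/src/netperf -w 1 -b "$burst" \
              -H "$SERVER" -l 60 -t UDP_STREAM -- -m 1460)
        sent=$(echo "$out" | awk 'NR==6 {print $4}')   # messages sent OK
        recv=$(echo "$out" | awk 'NR==7 {print $3}')   # messages received
        loss=$(awk -v s="$sent" -v r="$recv" 'BEGIN {printf "%.1f", 100*(s-r)/s}')
        echo "burst=$burst sent=$sent recv=$recv loss=${loss}%"
        # stop at the first burst size that reaches the target loss ratio
        awk -v l="$loss" -v t="$TARGET" 'BEGIN {exit !(l >= t)}' && break
    done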
  
Actual results:


Expected results:


Additional info:

Comment 2 Michael S. Tsirkin 2011-05-30 15:40:21 UTC
Basically, I think that to improve UDP receive speed we need something like GRO for UDP. Another thing that is reported to help here is the event index feature. In both cases we need the code upstream first, as maintaining a forked version in RHEL would be very painful.
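
For reference, the event index feature mentioned above (VIRTIO_RING_F_EVENT_IDX) is exposed as a per-device property in QEMU; a sketch of toggling it for a vhost-net backed NIC follows, where the tap name and MAC address are placeholders rather than values from this bug:

    # event_idx=on enables VIRTIO_RING_F_EVENT_IDX negotiation for this NIC;
    # the netdev/tap/MAC details below are placeholders.
    qemu-kvm \
        -netdev tap,id=hostnet0,vhost=on,ifname=tap0,script=no,downscript=no \
        -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56,event_idx=on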

Comment 7 Ronen Hod 2012-07-15 14:41:41 UTC
Deferring to RHEL7.
It seems as if we need to add GRO-like functionality for UDP, which is too much for RHEL6.
First do it upstream.

Comment 20 Quan Wenli 2014-08-01 03:18:26 UTC
Created attachment 923057 [details]
UDP results with 3.10.0-138.el7.x86_64/qemu-kvm-1.5.3-66.el7.x86_64

Comment 21 Quan Wenli 2014-08-01 03:22:14 UTC
Re-tested it on the latest RHEL7 host/guest with 3.10.0-138.el7.x86_64/qemu-kvm-1.5.3-66.el7.x86_64.

From the attached file UDP_Rhel7-3.10.0-138.html, we can see:

 - For small packets (<= 4096 bytes), there is 0% packet drop with both virtio-net and vhost, but the host sends more UDP through virtio-net than through vhost, so there is around a 20% degradation with vhost.
 - For large packets (> 4096 bytes), although the host sends more (around 20%) UDP through virtio-net than through vhost, the guest receives more UDP through vhost than through virtio-net.

Vlad, any comments on the above results? Do you think we still have a UDP issue with vhost?

Comment 22 Ronen Hod 2014-10-22 18:25:58 UTC
Things improved, but this still requires some follow-up/investigation. Deferring to 7.2.