Bug 726835 - tc netem tcp throughput 50x slower with variable delay turned on
Summary: tc netem tcp throughput 50x slower with variable delay turned on
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Rashid Khan
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-29 22:15 UTC by Peter Lannigan
Modified: 2012-05-10 18:07 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-05-10 18:07:14 UTC
Target Upstream Version:



Description Peter Lannigan 2011-07-29 22:15:07 UTC
Description of problem:
When using tc netem to emulate a high-bandwidth / high-delay connection, TCP throughput drops by about 50x once variable delay is turned on.

Version-Release number of selected component (if applicable):
2.6.32

How reproducible:
Very.

Steps to Reproduce:
1. Test bed: client <-GigE-> Linux bridge w/ tc netem <-GigE-> HTTP/FTP server
2. On the Linux bridge, run:
   tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
   tc qdisc add dev eth1 root handle 2:0 netem delay 25ms
   Verify the RTT with ping - it should be about 50.2 ms.
   Test throughput with wget.
3. On the Linux bridge, run:
   tc qdisc change dev eth0 root handle 1:0 netem delay 25ms 0.1ms distribution normal
   tc qdisc change dev eth1 root handle 2:0 netem delay 25ms 0.1ms distribution normal
   Verify the RTT with ping - it should be between 49 ms and 51 ms.
   Retest throughput with wget (a consolidated script follows these steps).
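
For convenience, the steps above condense into the sketch below. It is a minimal reconstruction, assuming the bridge interfaces are eth0/eth1 as listed and that the server exports a large file at http://server/bigfile (hostname and path are placeholders); the tc commands run on the bridge, the ping and wget commands on the client.

   # On the bridge: fixed 25 ms delay in each direction (~50 ms RTT).
   tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
   tc qdisc add dev eth1 root handle 2:0 netem delay 25ms

   # On the client: verify ~50.2 ms RTT, then take a baseline.
   ping -c 5 server
   wget -O /dev/null http://server/bigfile

   # On the bridge: same mean delay plus 0.1 ms normally distributed jitter.
   tc qdisc change dev eth0 root handle 1:0 netem delay 25ms 0.1ms distribution normal
   tc qdisc change dev eth1 root handle 2:0 netem delay 25ms 0.1ms distribution normal

   # On the client: verify 49-51 ms RTT, then retest throughput.
   ping -c 5 server
   wget -O /dev/null http://server/bigfile

   # On the bridge: remove the qdiscs when done.
   tc qdisc del dev eth0 root
   tc qdisc del dev eth1 root

Note that netem applies its jitter per packet, so delayed packets can leave the queue out of order; that reordering is one plausible mechanism for a TCP slowdown of this magnitude.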
  
Actual results:
Throughput drops dramatically, by roughly 50x.

Expected results:
Throughput should drop by far less than 50%, not by 50x; the added jitter (0.1 ms) is only 0.4% of the 25 ms mean delay.

Additional info:

Comment 2 RHEL Program Management 2011-10-07 15:43:02 UTC
Since the RHEL 6.2 External Beta has begun and this bug remains
unresolved, it has been rejected, as it was not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 4 Rashid Khan 2012-05-01 19:57:35 UTC
Can you please try to reproduce this bug with the 6.3 Beta?
We have fixed some issues in this area, and we believe it should be resolved now.
If not, please send new logs.
If you cannot try the 6.3 Beta due to other restrictions, please let us know as well.

Thanks

Comment 5 Peter Lannigan 2012-05-10 17:11:01 UTC
This bug looks to be fixed.  I used ttcp between two RHEL systems with a RHEL 6.3 Beta-based bridge between them.  Below are the two results, first with variable delay turned off, then with it turned on.

Test 1:
  bridge settings on eth0 and eth1:
  tc qdisc add dev eth0 root netem delay 25ms
  tc qdisc add dev eth1 root netem delay 25ms

  result:
  ttcp-r: 536870912 bytes in 17.675 real seconds = 28.968 MB/sec +++


Test 2:
  bridge settings on eth0 and eth1:
  tc qdisc add dev eth0 root netem delay 25ms 1us distribution normal
  tc qdisc add dev eth1 root netem delay 25ms 1us distribution normal

  result:
  ttcp-r: 536870912 bytes in 21.108 real seconds = 24.256 MB/sec +++

So ~29 MB/sec versus ~24 MB/sec. Much, much better.
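
The exact ttcp invocation is not recorded here; a plausible reconstruction that moves the same 536870912 bytes (65536 buffers at ttcp's default 8192-byte length) would be the pair below, with receiver-host as a placeholder:

   # On the receiving RHEL system (start this side first):
   ttcp -r -s

   # On the transmitting RHEL system: 65536 x 8192 bytes = 536870912 bytes.
   ttcp -t -s -n 65536 receiver-host

The -s flag makes ttcp source/sink a generated pattern rather than stdin/stdout, so only network throughput is measured.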

Comment 6 Rashid Khan 2012-05-10 18:07:14 UTC
Thanks for letting us know, Peter.
I will close this bug.

Thanks

