Bug 613150 - Remove network changes from ktune
Summary: Remove network changes from ktune
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: ktune
Version: 5.6
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Thomas Woerner
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-09 20:23 UTC by Mark Wagner
Modified: 2018-11-14 15:47 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-19 17:55:51 UTC
Target Upstream Version:
Embargoed:



Description Mark Wagner 2010-07-09 20:23:08 UTC
Description of problem:
The network settings in ktune typically hurt performance and should be removed.
Linux does a good job of auto-tuning its network buffers, and these static
overrides typically just interfere with that (see the sketch below for a quick
way to compare the two on a test system).

They are already being removed for RHEL 6.
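
As an illustration, a minimal sketch of how to compare what ktune sets against
what the kernel picks on its own; it assumes the RHEL 5 layout where the ktune
package installs its settings in /etc/sysctl.ktune and registers a "ktune" init
service (check the installed package if your paths differ):

# With the ktune service running, record the values it has applied
sysctl net.core.rmem_default net.core.wmem_default \
       net.core.rmem_max net.core.wmem_max \
       net.core.netdev_max_backlog \
       net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Disable the service; depending on the ktune version, stop may restore the
# saved values, otherwise a reboot brings back the kernel defaults.  Then
# re-run the sysctl query above and compare throughput/latency in your own
# benchmark with and without the overrides.
chkconfig ktune off
service ktune stop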

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:
Remove the following:
# 256 KB default performs well experimentally, and is often recommended by ISVs.
net.core.rmem_default = 262144
net.core.wmem_default = 262144

# When opening a high-bandwidth connection while the receiving end is under
# memory pressure, disk I/O may be necessary to free memory for the socket,
# making disk latency the effective latency for the bandwidth-delay product
# initially.  For 10 Gb ethernet and SCSI, the BDP is about 5 MB.  Allow 8 MB
# to account for overhead, to ensure that new sockets can saturate the medium
# quickly.
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608

# Allow a deep backlog for 10 Gb and bonded Gb ethernet connections
net.core.netdev_max_backlog = 10000

# Always have one page available, plus an extra for overhead, to ensure TCP NFS
# pageout doesn't stall under memory pressure.  Default to max unscaled window,
# plus overhead for rmem, since most LAN sockets won't need to scale.
net.ipv4.tcp_rmem = 8192 87380 8388608
net.ipv4.tcp_wmem = 8192 65536 8388608

# Always have enough memory available on a UDP socket for an 8k NFS request,
# plus overhead, to prevent NFS stalling under memory pressure.  16k is still
# low enough that memory fragmentation is unlikely to cause problems.
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

# Ensure there's enough memory to actually allocate those massive buffers to a
# socket.
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.udp_mem = 8388608 12582912 16777216
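
For reference, the "about 5 MB" bandwidth-delay product quoted in the
rmem_max/wmem_max comment works out as bandwidth times latency if the
effective (disk) latency is taken as roughly 4 ms (my assumption, not a
figure from the ktune sources):

  10 Gbit/s is about 1.25 GB/s, and 1.25 GB/s x 0.004 s = 5 MB

and the 8 MB maximum (8388608 bytes) rounds that up to leave headroom for
protocol and socket overhead.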
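
Until the package itself drops these lines, an admin can get the same effect
locally.  A minimal sketch, again assuming the settings live in
/etc/sysctl.ktune and the service is named ktune (adjust for your
installation):

# Comment out only the net.* lines shown above; the rest of the ktune profile
# stays in place.  A backup of the original file is kept as
# /etc/sysctl.ktune.bak.
sed -i.bak 's/^\(net\.\)/# \1/' /etc/sysctl.ktune

# Restart so the remaining (non-network) settings are reapplied.  A restart
# does not undo sysctls that were already set; the commented-out values only
# revert to the kernel defaults after a reboot.
service ktune restart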

Comment 1 RHEL Program Management 2011-09-23 00:29:24 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.

