Bug 613150 - Remove network changes from ktune

Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: ktune
Version: 5.6
Hardware/OS: All Linux
Priority: low  Severity: high
Target Milestone: rc
Assigned To: Thomas Woerner
QA Contact: Red Hat Kernel QE team
Reported: 2010-07-09 16:23 EDT by Mark Wagner
Modified: 2011-10-19 13:55 EDT
CC: 5 users
Doc Type: Bug Fix
Last Closed: 2011-10-19 13:55:51 EDT

Attachments: None
Description Mark Wagner 2010-07-09 16:23:08 EDT
Description of problem:
The network settings in ktune typically hurt performance and should be removed.
Linux does a good job of auto-tuning its network buffers, and these static
overrides typically just interfere with it.

They are already being removed for RHEL 6.

Additional info:
Remove the following:
# 256 KB default performs well experimentally, and is often recommended by ISVs.
net.core.rmem_default = 262144
net.core.wmem_default = 262144

# When opening a high-bandwidth connection while the receiving end is under
# memory pressure, disk I/O may be necessary to free memory for the socket,
# making disk latency the effective latency for the bandwidth-delay product
# initially.  For 10 Gb ethernet and SCSI, the BDP is about 5 MB.  Allow 8 MB
# to account for overhead, to ensure that new sockets can saturate the medium
# quickly.
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608

# Allow a deep backlog for 10 Gb and bonded Gb ethernet connections
net.core.netdev_max_backlog = 10000

# Always have one page available, plus an extra for overhead, to ensure TCP NFS
# pageout doesn't stall under memory pressure.  Default to max unscaled window,
# plus overhead for rmem, since most LAN sockets won't need to scale.
net.ipv4.tcp_rmem = 8192 87380 8388608
net.ipv4.tcp_wmem = 8192 65536 8388608

# Always have enough memory available on a UDP socket for an 8k NFS request,
# plus overhead, to prevent NFS stalling under memory pressure.  16k is still
# low enough that memory fragmentation is unlikely to cause problems.
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

# Ensure there's enough memory to actually allocate those massive buffers to a
# socket.
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.udp_mem = 8388608 12582912 16777216
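The 8 MB rmem_max/wmem_max values above come from the bandwidth-delay product
estimate mentioned in the comments. A small sketch of that arithmetic; the
4 ms round-trip time is an assumed value chosen to reproduce the "about 5 MB"
figure quoted for 10 Gb ethernet:

```python
# Bandwidth-delay product (BDP) behind the proposed rmem_max/wmem_max.
link_bps = 10e9      # 10 Gb ethernet line rate, in bits per second
rtt_s = 0.004        # assumed effective round-trip time (4 ms)

# BDP = bandwidth (bytes/s) * delay (s)
bdp_bytes = link_bps / 8 * rtt_s
print(bdp_bytes)     # 5,000,000 bytes, i.e. about 5 MB

# The config allows 8 MB (8388608 bytes) to leave headroom over the BDP.
rmem_max = 8388608
assert bdp_bytes < rmem_max
```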
Comment 1 RHEL Product and Program Management 2011-09-22 20:29:24 EDT
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.
