Bug 639801 - Performance Tuning Guide: TRACKING BUG for [Network] [Reasons to (not) Adjust Network Settings]
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: doc-Performance_Tuning_Guide
Version: 6.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laura Bailey
QA Contact: ecs-bugs
URL:
Whiteboard:
Depends On:
Blocks: 639779
 
Reported: 2010-10-04 02:43 UTC by Don Domingo
Modified: 2011-07-04 01:55 UTC (History)
CC List: 1 user

Fixed In Version: 6.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-07-04 01:55:53 UTC
Target Upstream Version:
Embargoed:



Comment 7 Neil Horman 2011-02-15 14:21:44 UTC
Hey Don, for the most part it looks good.  A few thoughts:

* The text notes that "the entire stack is quite sensitive".  While this is true, it kind of reads negatively to me, as though the stack is fickle or otherwise easily broken (although I'll defer to you for the final word on that, I'm no author).  It might be better to say something that emphasizes the fact that, while tuning can be good, it's also possible to incorrectly adjust the stack if you don't fully understand where your resources need to be allocated, and that this can result in degraded performance.

* Regarding the statement on the bufferbloat problem, it's all correct, except for the assertion that mis-tuning the queue depth will cause congestion - it will cause sub-optimal throughput, specifically because it will be impossible to detect congestion.

* If we could add a few bullets to the tools list to include the /proc/net/snmp file, the ethtool utility and the ip utility, that would make that list quite comprehensive in conjunction with what you have there.

* Regarding rmem_default, I verified it and your text is correct.  It might be useful to note that rmem_max is often adjusted in conjunction with rmem_default because the kernel requires that rmem_default <= rmem_max.

* In regards to other examples, there are companion settings for send buffer size (wmem_default/max and SO_SNDBUF), as well as some protocol-specific settings (like TCP_NODELAY, which gives a user the ability to trade off between TCP throughput and low latency).  I think the SO_RCVBUF/SO_SNDBUF settings are sufficient here though; there's a quick sketch of the per-socket options just below.
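
For illustration, here's a rough Python sketch of those per-socket knobs. The 256 KiB request is just an arbitrary example value, not a recommendation:

import socket

# Per-socket counterparts of the sysctls discussed above.
BUF_SIZE = 256 * 1024  # arbitrary example value

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_RCVBUF / SO_SNDBUF override rmem_default / wmem_default for this
# socket, capped by rmem_max / wmem_max (Linux also doubles the
# requested value to account for bookkeeping overhead).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)

# TCP_NODELAY disables Nagle's algorithm: lower latency for small
# writes, at the cost of more (and smaller) segments on the wire.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# getsockopt reports what the kernel actually granted.
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()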

Comment 8 Don Domingo 2011-02-15 23:28:33 UTC
Thanks Neil, comments below:

(In reply to comment #7)
> Hey Don, for the most part it looks good.  A few thoughts:
> 
> * The text notes that "the entire stack is quite sensitive".  While this
> is true, it kind of reads negatively to me, as though the stack is fickle or
> otherwise easily broken (although I'll defer to you for the final word on that,
> I'm no author).  It might be better to say something that emphasizes the fact
> that, while tuning can be good, it's also possible to incorrectly adjust the
> stack if you don't fully understand where your resources need to be allocated,
> and that this can result in degraded performance.
> 
> * Regarding the statement on the bufferbloat problem, it's all correct, except
> for the assertion that mis-tuning the queue depth will cause congestion - it
> will cause sub-optimal throughput, specifically because it will be impossible
> to detect congestion.
>

revised as:

<new>
As mentioned earlier, the network stack is mostly self-optimizing. In addition, effectively tuning the network requires a thorough understanding not just of how the network stack works, but also of the specific system's network resource requirements. Incorrect configuration of network performance settings can actually lead to degraded performance.

For example, consider the bufferbloat problem. Increasing buffer queue depths results in TCP connections that have congestion windows larger than the link would otherwise allow (due to deep buffering). However, those connections also have huge RTT values, since the frames spend so much time in-queue. This, in turn, actually results in sub-optimal throughput, as it becomes impossible to detect congestion.
</new>

 
> * If we could add a few bullets to the tools list to include the /proc/net/snmp
> file, the ethtool utility and the ip utility, that would make that list quite
> comprehensive in conjunction with what you have there.
> 

Added. I don't have a reference for more info on /proc/net/snmp though, since the proc man page doesn't have much on it either. Suggestions?

> * Regarding rmem_default, I verified it and your text is correct.  It might be
> useful to note that rmem_max is often adjusted in conjunction with
> rmem_default because the kernel requires that rmem_default <= rmem_max.
> 

revised:

<new>
Replace N with the desired buffer size, in bytes. To determine the value for this kernel parameter, view /proc/sys/net/core/rmem_default. Bear in mind that the value of rmem_default should be no greater than rmem_max (/proc/sys/net/core/rmem_max); if need be, increase the value of rmem_max. 
</new>
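
As a quick sanity check for that constraint, here's a rough Python sketch, assuming the usual /proc layout:

# Verify the rmem_default <= rmem_max constraint described above.
def read_sysctl(path):
    with open(path) as f:
        return int(f.read().strip())

rmem_default = read_sysctl("/proc/sys/net/core/rmem_default")
rmem_max = read_sysctl("/proc/sys/net/core/rmem_max")

print("rmem_default:", rmem_default)
print("rmem_max:    ", rmem_max)
if rmem_default > rmem_max:
    print("warning: rmem_default exceeds rmem_max; raise rmem_max first")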

> * In regards to other examples, there are companion settings for send buffer size
> (wmem_default/max and SO_SNDBUF), as well as some protocol-specific settings
> (like TCP_NODELAY, which gives a user the ability to trade off between TCP
> throughput and low latency).  I think the SO_RCVBUF/SO_SNDBUF settings are
> sufficient here though.

OK, no further edits.

Please verify these edits and let me know if they're good to go so I can set this bug to MODIFIED. Thanks!

Comment 11 Neil Horman 2011-04-05 12:13:44 UTC
Yep, perfect. ACK on all your changes.

In regards to the SNMP reference, the /proc/net/snmp file exports the values defined in RFC 4293 (as well as a few other stats RFCs).  Basically, it just exports all of the IP-layer MIB stats, which is useful in understanding performance issues.
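
In case it helps as a reference for the guide, here's a rough Python sketch of pulling those counters out of /proc/net/snmp; the file pairs a header line of field names with a line of values for each protocol:

# Parse /proc/net/snmp into {protocol: {counter_name: value}}.
# The file alternates a header line ("Ip: Forwarding DefaultTTL ...")
# with a matching value line ("Ip: 1 64 ...") for each protocol.
def read_proc_net_snmp(path="/proc/net/snmp"):
    stats = {}
    with open(path) as f:
        lines = f.read().splitlines()
    for header, values in zip(lines[0::2], lines[1::2]):
        proto, names = header.split(":", 1)
        _, vals = values.split(":", 1)
        stats[proto] = dict(zip(names.split(), map(int, vals.split())))
    return stats

snmp = read_proc_net_snmp()
ip_stats = snmp["Ip"]
print("InReceives: ", ip_stats["InReceives"])
print("InDiscards: ", ip_stats["InDiscards"])
print("OutRequests:", ip_stats["OutRequests"])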

Comment 12 Laura Bailey 2011-04-06 00:33:46 UTC
Awesome, thanks Neil. Setting to MODIFIED.

Comment 14 Michael Doyle 2011-05-06 03:55:19 UTC
Verified based on comment #11 in Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-6-en-US-1.0-28.

