Bug 640874 - Performance Tuning Guide: TRACKING BUG for [I/O] [I/O Alignment]
Summary: Performance Tuning Guide: TRACKING BUG for [I/O] [I/O Alignment]
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: doc-Performance_Tuning_Guide
Version: 6.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laura Bailey
QA Contact: ecs-bugs
URL:
Whiteboard:
Depends On:
Blocks: 639779
 
Reported: 2010-10-07 01:10 UTC by Don Domingo
Modified: 2011-07-04 01:53 UTC (History)
CC List: 2 users

Fixed In Version: 6.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-07-04 01:53:05 UTC
Target Upstream Version:
Embargoed:


Attachments
pdf build as of Feb 17, 2011 (487.22 KB, application/pdf)
2011-02-16 14:59 UTC, Don Domingo

Comment 8 Sanjay Rao 2011-02-14 17:42:18 UTC
Don

I actually took your paragraph from the guide and added content to it in the Hardware section. Here is the modified section.

Hardware


Every I/O subsystem has limits on how much data it can process per second. If your system hosts an application that needs to perform many small I/Os (ranging from 2 KB to 8 KB), as is typical of transaction processing systems, it is recommended that you choose storage subsystems that have a fair amount of controller cache, high-speed disks (i.e. 15,000 RPM or faster SAS disks, or solid-state disks), and a low-latency connection to storage. Each of these hardware recommendations directly contributes to processing I/Os at a high rate, but they all come at a steep cost. Planning at this stage therefore avoids a lot of trouble: if these components are only identified as bottlenecks once in production, changing or upgrading hardware is a difficult, time-consuming process. Each of these items is explained in detail below.
 
    Controller Cache - This is the cache on the storage controller, similar to physical memory (RAM) on a server, and it is one of the most expensive pieces of the storage controller infrastructure. In most cases the controller cache is battery-backed; in those situations, once data is handed off to the controller cache, the server considers the I/O completed. The amount of cache on the controller can range anywhere from 1 GB to 64 GB or higher, depending on the storage vendor. The amount of cache and the rate at which the controller can process I/O should be the determining factors in deciding which hardware to choose. Most vendors will supply this information under NDA.

    High-speed disks - This is largely self-explanatory: the cost of a disk is directly proportional to its speed.

    Low-latency connection - The most common external storage options are based on Fibre Channel, Ethernet, or direct PCI connections. The rate at which I/Os are processed depends on the interconnect used. For example, network-based storage can have significant latency overhead because each packet needs to be processed, but special network cards can offload the network-layer processing from the OS to the hardware to reduce that latency.

If the I/Os are in the larger block range, typical of data processing (BI) environments, striping across controllers to take advantage of the collective bandwidth is recommended. In data processing environments the storage needs can be quite high (1 TB or more), so a low-latency connection is definitely worth considering, followed by disk speed. As most of these environments are read-based, the controller cache can be scaled back to the lower end.
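
(Editorial aside, illustrative only; not part of the proposed guide text.) When striping across controllers or disks, the kernel exposes the resulting geometry through the block device's sysfs queue attributes, and that is what partition and file system alignment should be based on. The following is a minimal Python sketch assuming the standard /sys/block/<dev>/queue/ interface available on RHEL 6 kernels; the device name "sda" is only a placeholder.

----------
#!/usr/bin/env python
# Minimal sketch: print the I/O topology a block device reports via sysfs.
# The device name is a placeholder; substitute your own disk, MD, or DM device.
import os

def read_queue_attr(dev, attr):
    """Return the integer value of /sys/block/<dev>/queue/<attr>, or None if absent."""
    path = os.path.join("/sys/block", dev, "queue", attr)
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return None

if __name__ == "__main__":
    dev = "sda"  # placeholder device name
    for attr in ("logical_block_size", "physical_block_size",
                 "minimum_io_size", "optimal_io_size"):
        print("%s %s = %s" % (dev, attr, read_queue_attr(dev, attr)))
----------

On a striped MD or DM device, optimal_io_size typically reports the full stripe width and minimum_io_size the chunk size, so these values are a convenient sanity check that the storage is advertising the geometry you configured.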


Under the operating system section,

Not sure what you mean by the comment "characterization=profile?"

Comment 9 Sanjay Rao 2011-02-14 17:48:17 UTC
The comments that I added are for the IO vs Storage section. This section on IO alignment looks good.
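
(Editorial aside, illustrative only.) To complement the I/O alignment section, a misaligned partition can also be detected programmatically from the same sysfs data that partitioning and mkfs tools consult. The sketch below is a Python example assuming the standard /sys/block layout; the disk and partition names are placeholders, and sysfs "start" values are in 512-byte sectors.

----------
#!/usr/bin/env python
# Sketch: check whether a partition's starting offset is aligned to the
# I/O sizes its parent device reports. Names below are placeholders.
import os

SYS_BLOCK = "/sys/block"

def read_int(*parts):
    with open(os.path.join(SYS_BLOCK, *parts)) as f:
        return int(f.read().strip())

def check_alignment(disk, part):
    start_bytes = read_int(disk, part, "start") * 512  # 'start' is in 512-byte sectors
    for attr in ("minimum_io_size", "optimal_io_size"):
        size = read_int(disk, "queue", attr)
        if size:  # optimal_io_size is 0 when the device reports no preference
            status = "aligned" if start_bytes % size == 0 else "MISALIGNED"
            print("%s: start=%d bytes, %s=%d -> %s" % (part, start_bytes, attr, size, status))

if __name__ == "__main__":
    check_alignment("sda", "sda1")  # placeholder names
----------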

Comment 10 Don Domingo 2011-02-16 04:13:55 UTC
Thanks Sanjay, I've revised the text accordingly. Please review:

http://documentation-stage.bne.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/main-io.html#s-io-planning

As for your clarification on "characterization", I was referring to this statement:

<quote>
There are many tools that can be used to do low-level I/O subsystem characterization.
</quote>

By "characterization" did you mean "profiling"?

Comment 11 Sanjay Rao 2011-02-16 11:43:12 UTC
Yes. I did mean profiling.
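
(Editorial aside, illustrative only.) On the "profiling" wording: tools such as iostat, blktrace, and fio are the usual choices for low-level I/O profiling; for simple per-device rates, the kernel's counters in /proc/diskstats (which iostat also reads) are enough. The following Python sketch samples /proc/diskstats twice to derive read/write IOPS and throughput; the device name and interval are placeholders, and sector counts are in 512-byte units as documented for that file.

----------
#!/usr/bin/env python
# Sketch: derive per-device IOPS and throughput by sampling /proc/diskstats.
import time

def diskstats(dev):
    """Return (reads, sectors_read, writes, sectors_written) counters for dev."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[3]), int(fields[5]), int(fields[7]), int(fields[9])
    raise ValueError("device %s not found" % dev)

def profile(dev, interval=5.0):
    r0, rs0, w0, ws0 = diskstats(dev)
    time.sleep(interval)
    r1, rs1, w1, ws1 = diskstats(dev)
    print("%s: read %.1f IOPS, %.1f KB/s; write %.1f IOPS, %.1f KB/s" % (
        dev,
        (r1 - r0) / interval, (rs1 - rs0) * 512 / 1024.0 / interval,
        (w1 - w0) / interval, (ws1 - ws0) * 512 / 1024.0 / interval))

if __name__ == "__main__":
    profile("sda")  # placeholder device
----------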

Comment 12 Sanjay Rao 2011-02-16 11:45:38 UTC
The hardware section in the document is still the old one. Please take a look at the modified section above which has responses to some of your original comments in the document and change it accordingly.

Comment 13 Don Domingo 2011-02-16 14:59:09 UTC
Created attachment 479125 [details]
pdf build as of Feb 17, 2011

I don't know why that is. I'll look into it; in the meantime, I've attached a PDF build to this bug for your perusal.

(In reply to comment #12)
> The hardware section in the document is still the old one. Please take a look
> at the modified section above which has responses to some of your original
> comments in the document and change it accordingly.

Comment 16 Sanjay Rao 2011-03-29 13:51:25 UTC
Laura

I found that a few of the edits I suggested are not in the document. In the following sentence, please replace the word "light" with "small". The word "small" is correct in this context because it refers to the I/O size, not the I/O rate.

----------
Hardware

Every I/O sub-system has limits on how much data it can process per second. If
your system hosts an application that needs to perform many "small" (ranging from
2k to 8k) I/Os, typically in transaction processing system, it is recommended
that you choose storage subsystems that have a fair amount of controller cache,
high-speed disks 
---------------

The first bullet in this section needs to be changed:


------------
 The amount of cache on the controller ranges from 1G to 64G (or higher), depending on the storage vendor. The most important factors to consider when choosing controller cache hardware are:

    *  The amount of cache should be the determining factor in deciding which hardware to choose.
    *  The rate at which the controllers can process I/O transactions 

---------------


Other than these, the document looks good.

Comment 17 Laura Bailey 2011-03-30 00:18:38 UTC
Thanks, Sanjay! I've changed:
 * light --> small
 * The amount of cache should be the determining factor in deciding which hardware to choose. --> The amount of cache, and

Please take one more look so that you have final signoff on this content, and then we're done. :) Thanks!

Comment 18 Sanjay Rao 2011-03-30 12:16:24 UTC
Thanks Laura. 

Those are the only changes I had. We are good to go.

Comment 20 Michael Doyle 2011-05-06 04:53:11 UTC
Verified based on c#18 in Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-6-en-US-1.0-28

