Bug 181452 - Slow multiple reads from Hitachi 9570V
Summary: Slow multiple reads from Hitachi 9570V
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.0
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Tom Coughlan
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2006-02-14 11:04 UTC by Nick Grundy
Modified: 2012-06-20 13:21 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-06-20 13:21:00 UTC
Target Upstream Version:
Embargoed:


Attachments
multipath daemon config (1022 bytes, text/plain)
2006-02-14 11:06 UTC, Nick Grundy
HDS prio_callout script to assign owner path correctly (356 bytes, text/plain)
2006-02-14 11:07 UTC, Nick Grundy
multipath -ll output, dmesg content re qla (1.03 KB, text/plain)
2006-02-14 11:09 UTC, Nick Grundy

Description Nick Grundy 2006-02-14 11:04:34 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1

Description of problem:
During performance and acceptance testing of a Hitachi Data Systems 9570V, issues were found with Red Hat Enterprise Linux read performance when multiple concurrent reads were run against the SAN.

Indicative numbers from Windows and Solaris systems: when testing 1, 2, 3 and 4 concurrent reads using dd, average read performance was approximately 140MB/sec, 137MB/sec, 130MB/sec and 128MB/sec respectively.

When testing on RHEL4 with 1, 2, 3 and 4 concurrent reads, performance was approximately 130MB/sec, 70MB/sec, 78MB/sec and 85MB/sec.

See attachments for the config files used during the testing. Installing a vanilla Linux kernel, i.e. fetching the kernel source (2.6.15-2) from kernel.org and compiling it with the RHEL .config file, resulted in performance figures in line with Solaris and Windows.
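For reference, the comparison build can be reproduced roughly as follows. This is a sketch only: the exact tarball name for "2.6.15-2" and the config file path on the test host are assumptions, not the reporter's commands.

# Sketch of the vanilla-kernel comparison build; the tarball version
# and the /boot config path are assumptions.
cd /usr/src
wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.15.tar.bz2
tar xjf linux-2.6.15.tar.bz2
cd linux-2.6.15
cp /boot/config-2.6.9-*.EL .config   # reuse the running RHEL4 kernel config
make oldconfig                       # accept defaults for options added since 2.6.9
make && make modules_install && make install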

Version-Release number of selected component (if applicable):
All EL-series kernels for RHEL4 released by Red Hat

How reproducible:
Always

Steps to Reproduce:
1. Installed 2x QLA2340 single-port HBAs into a PowerEdge 1850 running RHEL4
2. Set up multipathd as per the attached multipath.conf file
3. Used fdisk on /dev/mapper/1HITACHI_D60091120019 to create a single 100GB partition
4. Ran kpartx -a /dev/mapper/1HITACHI_D60091120019 to expose the partition
5. Mounted the partition as /SAN
6. Ran the following to create a set of test files (repeated for each file): time dd if=/dev/zero of=/SAN/test13 bs=8192k count=250
7. Unmounted /SAN after creating four files
8. Remounted /SAN
9. See test-commands.txt for the tests run (a sketch of read tests of this shape follows below)
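
test-commands.txt is not reproduced here. As a rough sketch only, concurrent dd read tests of this shape typically look like the following; the partition node created by kpartx and the test file names are assumptions, not the reporter's exact commands:

# Sketch, not the attached test-commands.txt. The device node and the
# file names are assumed; /SAN is remounted between tests to drop cached data.
umount /SAN
mount /dev/mapper/1HITACHI_D60091120019p1 /SAN
# one read
time dd if=/SAN/test13 of=/dev/null bs=8192k
# two concurrent reads (extend the pattern for three and four)
umount /SAN && mount /dev/mapper/1HITACHI_D60091120019p1 /SAN
time dd if=/SAN/test13 of=/dev/null bs=8192k &
time dd if=/SAN/test14 of=/dev/null bs=8192k &
wait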
  

Actual Results:  Results with the EL series kernel:
One read: 130MB/sec
Two reads: 70MB/sec
Three reads: 78MB/sec
Four reads: 85MB/sec

Expected Results:  These results were achieved while running 2.6.15-2:
One read: 130MB/sec
Two reads: 142MB/sec
Three reads: 113MB/sec
Four reads: 108MB/sec

Additional info:

Each read test was performed with dd; after each test the /SAN filesystem was unmounted and remounted.

The SAN is a Hitachi 9570V.
The disk used for the test was a 100GB LUSE carved out of a 4D+1P FC-AL 5x300GB RAID group.
CTRL1 was the owner controller.
Both controllers are connected to identical McData 4500 switches.
The server was able to see all four paths from the HDS array.
During testing the array used the correct owner paths.

Comment 1 Nick Grundy 2006-02-14 11:06:08 UTC
Created attachment 124607
multipath daemon config
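
The attached file is authoritative; it is not reproduced in this report. For readers without access to it, a hedged sketch of what an RHEL4 multipath.conf device stanza for an HDS array of this class might contain follows; every value below is an assumption:

# Sketch only, not attachment 124607. Vendor/product inquiry strings,
# the callout path and the policies are all assumptions.
devices {
    device {
        vendor                "HITACHI"
        product               "DF600F"
        path_grouping_policy  group_by_prio
        prio_callout          "/sbin/hds_prio.sh %d"
        path_checker          tur
        failback              immediate
    }
}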

Comment 2 Nick Grundy 2006-02-14 11:07:34 UTC
Created attachment 124608
HDS prio_callout script to assign owner path correctly
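
The attached script is likewise not reproduced here. As an illustration, a prio_callout under RHEL4 multipath-tools is simply an executable that prints a numeric priority for the path it is given; a purely hypothetical sketch:

#!/bin/sh
# Hypothetical sketch, not attachment 124608. hds_owner_check is an
# invented helper name; the real script's logic is not shown here.
DEV=$1
if hds_owner_check "$DEV"; then
    echo 8    # owner-controller path: prefer
else
    echo 1    # non-owner path: deprioritize
fi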

Comment 3 Nick Grundy 2006-02-14 11:09:07 UTC
Created attachment 124609
multipath -ll output, dmesg content re qla

Comment 4 Jiri Pallich 2012-06-20 13:21:00 UTC
Thank you for submitting this issue for consideration in Red Hat Enterprise Linux. The release you requested us to review is now End of Life.
Please see https://access.redhat.com/support/policy/updates/errata/

If you would like Red Hat to reconsider your feature request for an active release, please reopen the request via appropriate support channels and provide additional supporting details about the importance of this issue.

