Bug 48422 - Slow raid and scsi function
Summary: Slow raid and scsi function
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 7.1
Hardware: i586
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Arjan van de Ven
QA Contact: David Lawrence
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-07-10 16:42 UTC by Need Real Name
Modified: 2005-10-31 22:00 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2003-06-06 14:28:03 UTC



Description Need Real Name 2001-07-10 16:42:53 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.72 [en] (X11; U; SunOS 5.6 sun4u)

Description of problem:
Software RAID arrays built from SCSI drives perform far worse than the
individual drives, and larger arrays trigger kernel panics.

How reproducible:
Always

Steps to Reproduce:
0. hdparm -t /dev/sda1
   reports 1.18 MB/s
1. Set up a stock raidtab with 2 SCSI drives, following the Software-RAID
   HOWTO (a sketch of such a raidtab is given below the steps).
2. mkraid /dev/md0
3. mkfs /dev/md0
4. hdparm -t /dev/md0
   reports 600 KB/s
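
For reference, a minimal /etc/raidtab along the lines of the Software-RAID
HOWTO's raid0 example would look roughly like the sketch below; the device
names and the 32 KB chunk size are illustrative assumptions, not values taken
from this report.

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32          # KB per chunk (assumed)
        device                  /dev/sda1   # first SCSI partition (assumed)
        raid-disk               0
        device                  /dev/sdb1   # second SCSI partition (assumed)
        raid-disk               1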

Actual Results:  The raid0 array was slower than the single drive. When I
take the raid0 array to 6 drives, the data rate drops to around 100 KB/s.
When attempting to move data onto the 6-disk raid0 array, the kernel panics.
The effect worsens with raid4 and raid5.

Expected Results:  I would expect a faster transfer rate from a raid0
config with 2 drives, and a much faster rate from a raid0 with 6 drives.

Additional info:

System 1:

  P100
  64 MB RAM
  1.6 GB IDE HD
  AHA1542 SCSI card
  7 IBM 1050 MB SCSI drives

System 2:

  P166
  64 MB RAM
  1.05 GB IDE HD
  AHA1542 SCSI card
  7 IBM 1050 MB SCSI drives

I have been doing RAID testing on RH 7.1 and have yet to see it work worth a
flip. Cards tested: 2 AHA1542, AHA2920, and 2 FDomain 18xxx cards. I have
also tested with the 7 IBM drives, 4 Quantum, 5 Seagate, and 3 Conner drives
(all SCSI).

Comment 1 Elliot Lee 2001-08-26 20:40:38 UTC
Get faster hardware? :)

The problem is with your method of benchmarking. I too get a slightly slower
hdparm -t time on a raid0 array (six devices) than I do from an individual disk
in the array. I attribute this to the overhead that the RAID code has compared
to direct disk access.

However, if I do something slightly more realistic (timing a 'dd' of a 256 MB
file from the same raid0 array compared to one from a single hard disk), the
raid0 array wins.
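
As a rough sketch of that kind of comparison (the mount points, file name,
and block size here are assumptions, not the exact commands used):

    # write a 256 MB test file to the raid0 filesystem, then time reading it back
    dd if=/dev/zero of=/mnt/md0/testfile bs=1024k count=256
    time dd if=/mnt/md0/testfile of=/dev/null bs=1024k
    # repeat against a filesystem on a single member disk (e.g. /mnt/sda1) and compare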

The kernel panic is more serious, and a more specific problem than "it's too
slow" - I am reassigning this bug to the kernel component in the hope that
you will provide the maintainers with details of that panic.

Comment 2 Arjan van de Ven 2001-08-26 20:47:51 UTC
A few questions about the performance first:
* what stride (chunk) size did you set the raid0 to? (see the sketch after
  this list)
* could you use the tiobench program (http://sourceforge.net/projects/tiobench)
  instead of hdparm? It tests FILE access rather than raw I/O access
  (the kernel does things like readahead on files, not on devices).
* what kernel version did you try? 2.4.2-2 or 2.4.3-12?
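
Not part of the original questions, but as an illustration of how the chunk
size and the filesystem interact: with an assumed 32 KB chunk and 4 KB ext2
blocks, the Software-RAID HOWTO-era way to tell mke2fs about the layout, and
to check the running kernel, would be roughly:

    uname -r                               # confirm the errata kernel (2.4.2-2 vs 2.4.3-12)
    mke2fs -b 4096 -R stride=8 /dev/md0    # stride = 32 KB chunk / 4 KB block (assumed values)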

You mention a "panic". Any chance of getting any info on that so we can try
to find the bug?
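
(For illustration only, not part of the original comment: one period-typical
way to capture a panic that scrolls off the screen is a serial console plus
ksymoops; the device, baud rate, and file names below are assumptions.)

    # boot with a serial console, e.g. append to the kernel section in lilo.conf:
    #   append="console=ttyS0,9600 console=tty0"
    # log the output on a second machine over a null-modem cable, then decode
    # the oops text against the matching System.map:
    ksymoops -m /boot/System.map-2.4.3-12 < oops.txt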

Comment 3 Alan Cox 2003-06-06 14:28:03 UTC
Closing idle bug


