Bug 48422 - Slow raid and scsi function
Product: Red Hat Linux
Classification: Retired
Component: kernel
Hardware: i586   OS: Linux
Priority: low    Severity: high
Assigned To: Arjan van de Ven
QA Contact: David Lawrence
Depends On:
Reported: 2001-07-10 12:42 EDT by Need Real Name
Modified: 2005-10-31 17:00 EST

Doc Type: Bug Fix
Last Closed: 2003-06-06 10:28:03 EDT

Attachments: None

Description Need Real Name 2001-07-10 12:42:53 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.72 [en] (X11; U; SunOS 5.6 sun4u)

Description of problem:
Software RAID (md) arrays built on SCSI drives perform much worse than the
individual drives they are built from, and larger arrays can panic the kernel.

How reproducible:

Steps to Reproduce:
0. hdparm -t /dev/sda1
   -> 1.18 MB/s
1. Set up a stock raidtab with 2 SCSI drives from the Software RAID HOWTO
   (a sample raidtab follows these steps).
2. mkraid /dev/md0
3. mkfs /dev/md0
4. hdparm -t /dev/md0
   -> 600 KB/s
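
For reference, a minimal /etc/raidtab for the two-drive setup in step 1 might
look something like the following; the device names and chunk-size are
illustrative assumptions, not values taken from this report:

    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1

The array is then created, formatted, and benchmarked as in steps 2-4:

    mkraid /dev/md0
    mkfs /dev/md0
    hdparm -t /dev/md0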

Actual Results:  The RAID array was slower than the single drive. When I take the
RAID 0 array to 6 drives, the data rate drops to around 100 KB/s. When attempting
to move data onto the 6-disk RAID 0 device, the kernel panics. The effect
worsens with RAID 4 and RAID 5.

Expected Results:  I would expect a faster transfer rate from a RAID 0
configuration with 2 drives, and a much faster rate from a RAID 0 with 6 drives.

Additional info:

System 1:

  64 MB RAM
  1.6 GB IDE HD
  AHA-1542 SCSI card
  7 IBM 1050M SCSI drives

System 2:

  64 MB RAM
  1.05 GB IDE HD
  AHA-1542 SCSI card
  7 IBM 1050M SCSI drives

I have been doing RAID testing on RH 7.1 and have yet to see it work worth a
flip. Cards tested: 2 AHA-1542, AHA-2920, and 2 FDomain 18xxx cards.
I have also tested with the 7 IBM drives, 4 Quantum, 5 Seagate, and 3 Conner
drives (all SCSI).
Comment 1 Elliot Lee 2001-08-26 16:40:38 EDT
Get faster hardware? :)

The problem is with your method of benchmarking. I too get a slightly slower
hdparm -t time on a raid0 array (six devices) than I do from an individual disk
in the array. I attribute this to the overhead that the RAID code has compared
to direct disk access.

However, if I do something slightly more realistic (timing a 'dd' of a 256M file
from the same raid0 array compared to a single hard disk), the raid0 array wins.
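
A sketch of that kind of timing, assuming a roughly 256 MB test file already
exists on each filesystem (the paths here are made up for illustration):

    # single disk
    time dd if=/mnt/single/bigfile of=/dev/null bs=1024k
    # raid0 array
    time dd if=/mnt/md0/bigfile of=/dev/null bs=1024k

With only 64 MB of RAM in the reporter's machines, a 256 MB file is large enough
that page-cache effects should not dominate the comparison.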

The kernel panic is a more serious and more specific problem than "it's
too slow" - I am reassigning this bug to the kernel component in the hope that
you will provide the maintainers with details of that problem.
Comment 2 Arjan van de Ven 2001-08-26 16:47:51 EDT
A few questions about the performance first:
* What stride-size did you set the raid0 to?
* Could you use the tiobench program (http://sourceforge.net/projects/tiobench )
  instead of hdparm, as that will test FILE access and not raw I/O access?
  (The kernel will do things like readahead on files, not on devices.) A sample
  invocation follows this list.
* What kernel version did you try? 2.4.2-2 or 2.4.3-12?
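
A minimal tiobench run of the kind being suggested might look like this; the
mount point, file size, and thread counts are illustrative assumptions, not
values from this bug:

    # run from an unpacked tiobench source tree, with the array mounted on /mnt/md0
    ./tiobench.pl --dir /mnt/md0 --size 256 --threads 1 --threads 2

Because tiobench reads and writes regular files through the filesystem,
readahead and caching behave as they would for a real workload, unlike
hdparm -t against the raw device.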

You mention a "panic". Any chance of getting any info on that so we can try to
find the bug?
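
If nothing makes it into the logs because the box locks up, one common way to
capture a panic on 2.4 kernels (sketched here as a suggestion, not something
taken from this bug) is a serial console plus ksymoops:

    # lilo.conf: add to the kernel's append line, then rerun lilo
    append="console=ttyS0,9600 console=tty0"

    # capture the output on a second machine connected to the serial port,
    # then decode the oops against the matching System.map:
    ksymoops -m /boot/System.map < captured-oops.txt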
Comment 3 Alan Cox 2003-06-06 10:28:03 EDT
Closing idle bug
