Bug 65560 - RFE: Raid rebuild IO capacity needs improving, also no rebuild on mkraid
Product: Red Hat Linux
Classification: Retired
Component: kernel
Platform: i386 Linux
Priority: high  Severity: medium
Assigned To: Arjan van de Ven
QA Contact: Brian Brock
Keywords: FutureFeature
Duplicates: 65597
Reported: 2002-05-27 12:45 EDT by Cristian Gafton
Modified: 2007-03-26 23:53 EDT (History)
CC: 3 users

Doc Type: Enhancement
Last Closed: 2003-06-07 14:34:47 EDT

Attachments: None
Description Cristian Gafton 2002-05-27 12:45:51 EDT
During the installation of a new 7.3 system, using software RAID to create a large
RAID5 array (four 100GB IDE drives on a Promise Ultra2 controller), once the
RAID device is created and the resync process starts, the whole install
effectively grinds to a halt until the resync is done.

If I lower speed_limit_max from VC2 to something like 2000, the installer
proceeds along nicely and finishes as expected. Of course this workaround is not
available for kickstart installs...
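For reference, the knob being tuned here is the md driver's /proc interface. A minimal sketch of the manual workaround (the 2000 KB/s value is the one used above; the script shape itself is illustrative, and the write requires root on an md-capable kernel):

```shell
#!/bin/sh
# Cap md resync bandwidth so foreground I/O (e.g. the installer) can
# make progress. 2000 KB/s is the value from the report; adjust to taste.
LIMIT_KB=2000
SYSCTL_FILE=/proc/sys/dev/raid/speed_limit_max
if [ -w "$SYSCTL_FILE" ]; then
    echo "$LIMIT_KB" > "$SYSCTL_FILE"
    echo "capped $SYSCTL_FILE at $LIMIT_KB KB/s"
else
    echo "skipping: $SYSCTL_FILE not writable (need root on an md-capable kernel)"
fi
```

The companion tunable speed_limit_min in the same directory sets the floor the resync thread will try to maintain even under load, so both may need adjusting depending on which direction the throttling should go.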

Secondly, when the system comes up after the install, because the resync did not
finish, the kernel starts a mad race to resync the big RAID5 array as soon as
the RAID devices are detected, which again grinds the boot process to a halt. On
my test system it took 38 minutes from the moment the kernel booted to reach a
single user mode prompt, where I could lower speed_limit_max again so the system
could go on booting.

I repeated the test on a SCSI LVD system using four 36GB drives. Once the resync
starts the system doesn't quite stop cold as in the IDE case, but it still
took 26 minutes to give me a single user mode prompt.
Comment 1 Arjan van de Ven 2002-05-28 05:10:33 EDT
The short term fix is to reduce the max speed by an order of magnitude; longer
term, the resync speed should be a percentage of IO capacity, or resync IO
should be marked "low priority" so that all other IO gets precedence.
Comment 2 Cristian Gafton 2002-05-28 17:40:14 EDT
*** Bug 65597 has been marked as a duplicate of this bug. ***
Comment 3 Cristian Gafton 2002-05-28 17:43:51 EDT
It would be beneficial if anaconda would "slow down" the raid rebuild during the
installation by writing a lower value into speed_limit_max before the raid
devices are rebuilt.

Also on the "nice to have" list: a way to tell the RAID code that it doesn't
really need to do a full resync of a freshly created array (unless it actually
depends on having an exact bit-for-bit copy on all drives, as opposed to just
syncing the superblocks...)
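For what it's worth, the later mdadm tool grew exactly this knob. A hedged sketch (the device names are illustrative, and note that for RAID5 skipping the initial sync leaves parity unverified until a repair pass is run):

```shell
# Illustrative only -- do not run against disks holding data.
# mdadm's --assume-clean skips the initial resync of a newly created
# array. For RAID5 this means parity is not guaranteed consistent,
# so it is mainly useful for arrays that will be mkfs'ed immediately.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --assume-clean /dev/hd[efgh]1
```

This addresses the "no rebuild on mkraid" half of the RFE title: the old raidtools mkraid offered no equivalent, which is why a freshly created array always went through a full resync.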
Comment 4 Arjan van de Ven 2002-06-03 15:04:16 EDT
The max speed issue is fixed for the next build. Changing this to an RFE bug,
since the remaining points are valid and I'd like to address them.
