During the installation of a new 7.3 system, using software raid to create a large
RAID5 array (four 100GB IDE drives on a Promise Ultra2 controller), once the
raid device is created and the resync process starts, the whole install
effectively grinds to a halt until the resync is done.
If I lower speed_limit_max from VC2 to something like 2000, then the
installer proceeds along nicely and finishes as expected. Of course this
workaround is not available for kickstart installs...
Secondly, when the system comes up after install, because the resync did not
finish, the kernel starts a mad race to resync the big RAID5 array as soon as
the raid devices are detected, which again grinds the boot process to a halt.
On my test system it took 38 minutes from the moment the kernel booted to reach
a single-user-mode prompt where I could limit speed_limit_max again so the
system could go on with booting up.
I repeated the test on a SCSI LVD system using 4x36GB drives. Once the resync
starts, the system doesn't quite stop cold as in the IDE case, but it still
took 26 minutes to give me a single-user-mode prompt.
The short-term fix is to reduce the max speed by an order of magnitude; longer
term, the resync speed should be a percentage of IO capacity, or resync IO
should be marked "low priority" so that all other IO gets precedence.
*** Bug 65597 has been marked as a duplicate of this bug. ***
It would be beneficial if anaconda would "slow down" the raid rebuild during the
installation by writing a lower value to speed_limit_max before the raid
devices are rebuilt.
Also, on the "nice to have" list: a way to tell raid that it doesn't
really need to do a full resync of a freshly created array (unless something is
dependent on having exactly the same bit copy on all drives as opposed to just
consistent parity).
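For what it's worth, mdadm can already skip the initial resync of a new array via its --assume-clean option; a hedged sketch (the device names are examples only, and on RAID5 this leaves parity unverified until a later repair pass):

```shell
# Create a four-disk RAID5 array without triggering the initial resync.
# Safe only if unverified parity is acceptable; it can be rebuilt later.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --assume-clean /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1
```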
The max speed issue is fixed for the next build; changing this to an RFE bug,
since the points are valid and I'd like to address them.