Bug 11045 - Booting RAID 5
Status: CLOSED NOTABUG
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 6.2
Hardware: i386
OS: Linux
Priority: medium
Severity: high
Assigned To: Ingo Molnar
Reported: 2000-04-25 16:09 EDT by Bas Vermeulen
Modified: 2008-05-01 11:37 EDT (History)

Last Closed: 2001-04-16 05:34:32 EDT


Attachments: None
Description Bas Vermeulen 2000-04-25 16:09:55 EDT
Installed 6.2 and recompiled the kernel with RAID built in (not as modules).
/boot is not on any RAID; /, /usr, /usr/local, /var, /var/mail, and /home are
all on separate RAID 5 arrays on a 3-drive SCSI system. The system installs
and boots fine.  We want to test the RAID, so we power the system down,
unplug the 3rd drive, and boot, but the kernel hangs after freeing memory.
Comment 1 markster 2000-06-29 20:52:56 EDT
I can confirm your trouble, and I can tell you part -- but not all -- of the
reason why...  

In my case, I'm running RAID on a system with 6 9.1GB Western Digital Enterprise
Ultra160 Drives and two Adaptec 29160 SCSI controllers.

Whenever you first build a RAID-1 or higher, the MD must be rebuilt as if there
had been an improper shutdown in order to verify that the RAID algorithm has
been applied to the entire drive.  When you install RedHat, it does correctly
make and start the RAID drive, and then it begins the process of rebuilding the
RAID while the installation is taking place.  The problem is that the
installation of Linux finishes long before the RAID is completely rebuilt.  To
prove this, look at the drive lights after the install is complete and it's
asking you to create the boot disk.  You should see significant disk activity as
the kernel uses the idle I/O bandwidth.  For more detail, press Ctrl-Alt-F2
and, at the "bash#" prompt, type "cat /proc/mdstat"; you'll see the rebuild's
completion percentage.
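The check above can be scripted; a minimal sketch, assuming a 2.2-era
/proc/mdstat format (the device names and figures in the sample string are
illustrative):

```shell
#!/bin/sh
# Extract the rebuild completion percentage from /proc/mdstat-style
# output.  The sample below is a hypothetical 2.2-era line; on a live
# system, replace the variable with the real file's contents, e.g.
# MDSTAT=$(cat /proc/mdstat).
MDSTAT='md0 : active raid5 sdc1[2] sdb1[1] sda1[0] 1695360 blocks
      [==>..................] recovery = 12.5% (211920/1695360) finish=8.4min'

# Print just the percentage (prints nothing once the rebuild is done).
printf '%s\n' "$MDSTAT" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p'
```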

Anyway, if you reboot at this point, the MD is marked clean and isn't checked
the next time you boot.  Of course, the parity has not yet been calculated,
so if you remove a drive, the array cannot reconstruct your data correctly.

The work-around is to let the RAID continue to rebuild until it is 100% finished
*before* rebooting after the initial installation.  Then you will be able to
remove one drive and have the RAID perform as expected in degraded mode.  The
trouble is that even after having done this, I find that my RAID partition,
after removing a single drive, is so full of errors that it requires a
re-installation anyway -- and this is after removing a drive while the system
was powered down and the filesystem was clean.
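The work-around above amounts to polling until no rebuild is running before
that first reboot; a minimal sketch (the helper name and the 2.2-era mdstat
wording are assumptions):

```shell
#!/bin/sh
# Return success (0) while the given mdstat text reports a rebuild.
# The 'resync'/'recovery' keywords are assumed from the 2.2-era md driver.
rebuild_in_progress() {
    printf '%s\n' "$1" | grep -qE 'resync|recovery'
}

# On a live system one would wait it out before rebooting, e.g.:
#   while rebuild_in_progress "$(cat /proc/mdstat)"; do sleep 60; done
```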

In short, my tests show that (at least with my configuration) RedHat 6.2 RAID
Level 5 is COMPLETELY BROKEN and provides NO significant reliability beyond RAID
level 0.

Also please note that these tests were performed with RedHat Kernel 2.2.16-3 and
the updated RAID installation disks.
Comment 2 Ingo Molnar 2001-04-16 05:34:28 EDT
are you using persistent RAID-superblocks? (the "persistent-superblock" option
in /etc/raidtab)
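For reference, a hypothetical /etc/raidtab fragment for a 3-disk RAID-5 set
with persistent superblocks enabled (device names and chunk size are
illustrative). With persistent-superblock 1, the md driver stores array state
on each member disk, so an unclean array is detected and re-synced at the
next start:

```
# Hypothetical raidtools configuration; adjust devices to your system.
raiddev /dev/md0
    raid-level            5
    nr-raid-disks         3
    persistent-superblock 1
    chunk-size            64
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
    device                /dev/sdc1
    raid-disk             2
```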
