Bug 11045
| Summary: | Booting RAID 5 | | |
|---|---|---|---|
| Product: | [Retired] Red Hat Linux | Reporter: | Bas Vermeulen <bvermeul> |
| Component: | kernel | Assignee: | Ingo Molnar <mingo> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.2 | | |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2001-04-16 09:34:32 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Bas Vermeulen
2000-04-25 20:09:55 UTC
I can confirm your trouble, and I can tell you part -- but not all -- of the reason why... In my case, I'm running RAID on a system with 6 9.1GB Western Digital Enterprise Ultra160 drives and two Adaptec 29160 SCSI controllers.

Whenever you first build a RAID-1 or higher, the MD must be rebuilt as if there had been an improper shutdown, in order to verify that the RAID algorithm has been applied to the entire drive. When you install RedHat, it does correctly make and start the RAID drive, and then it begins rebuilding the RAID while the installation is taking place. The problem is that the installation of Linux finishes long before the RAID is completely rebuilt. To prove this, look at the drive lights after the install is complete and it's asking you to create the boot disk. You should see significant disk activity as the kernel uses the idle I/O bandwidth. For more detail, do ctrl-alt-f2 and at the "bash#" prompt, type "cat /proc/mdstat" and you'll see the percentage complete of the rebuild.

Anyway, if you reboot at this point, the MD is marked clean, and then isn't checked the next time you reboot. However, of course, the checksums have not yet been calculated, so if you remove a drive, it still hasn't got the backup correct. The work-around is to let the RAID continue to rebuild until it is 100% finished *before* rebooting after the initial installation. Then you will be able to remove one drive and have the RAID perform as expected in degraded mode.

The trouble is that even after having done this, I find that my RAID partition, after removing a single drive, is so full of errors that it requires a re-installation anyway -- and this is after removing a drive while the system was powered down and the filesystem was clean. In short, my tests show that (at least with my configuration) RedHat 6.2 RAID Level 5 is COMPLETELY BROKEN and provides NO significant reliability beyond RAID level 0.
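The check described above ("cat /proc/mdstat" from the installer's second console) can be scripted. This is a minimal sketch; the sample mdstat text below is illustrative only, since the exact /proc/mdstat layout varies between kernel versions -- on a live system you would read /proc/mdstat directly instead of the here-string.

```shell
#!/bin/sh
# Sketch: report whether an md array rebuild (resync) is still in progress.
# Sample mdstat-style output (illustrative; real layout varies by kernel):
MDSTAT='Personalities : [raid5]
md0 : active raid5 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0] 44403200 blocks level 5, 32k chunk, algorithm 2 [6/6] [UUUUUU] resync=37% finish=42.3min'
# On a real system: MDSTAT=$(cat /proc/mdstat)

# Pull out the "resync=NN%" figure, if any.
progress=$(printf '%s\n' "$MDSTAT" | grep -o 'resync=[0-9]*%' | cut -d= -f2)
if [ -n "$progress" ]; then
    echo "RAID rebuild in progress: $progress complete -- do not reboot yet"
else
    echo "No rebuild in progress (array is clean)"
fi
```

Rebooting is only safe once no resync line appears, per the work-around described above.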
Also please note that these tests were performed with RedHat kernel 2.2.16-3 and the updated RAID installation disks.

Are you using persistent RAID superblocks? (the "persistent-superblock" option in /etc/raidtab)
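For reference, a persistent superblock is enabled per array in /etc/raidtab with the raidtools of that era. The fragment below is a sketch for a six-disk RAID-5 matching the reporter's setup; the device names and chunk size are assumptions, not taken from the report (remaining device/raid-disk pairs elided for brevity).

```
# /etc/raidtab -- sketch for a 6-disk RAID-5 (device names assumed)
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           6
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
    # ... remaining four device/raid-disk pairs ...
```

With persistent-superblock set to 1, the kernel stores array state on each member disk, which is what lets it detect an unclean (not fully resynced) array at boot.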