Description of problem:
I've seen this behaviour for a long time, but a recent thread plus a failed disk in an array last week made me check for a bug, which I didn't find. This might also be better assigned to grub; I'll leave that to the maintainers to decide.

If you create a RAID 1 array backing /boot, the MBR will only be installed on the first drive in the array. This means that, should that disk fail, the user is left with an unbootable system. The fix is relatively simple: boot from a rescue/live DVD, run the grub shell, do 'root (hdX,Y)' followed by 'setup (hdX)', and the system will boot again. But it requires the knowledge to do so, an optical drive (or PXE server), and the appropriate media. Non-trivial for some admins.

Version-Release number of selected component (if applicable):
mdadm-3.2.2-9.el6.x86_64
grub-0.97-75.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a RAID 1 array for /boot
2. Fail the first drive (i.e. /dev/sda)
3. Reboot

Actual results:
System fails to boot; no MBR on the surviving RAID member.

Expected results:
System remains bootable; grub's MBR should be installed on every member of the /boot array.

Additional info:
This behaviour goes back at least to RHEL 4 (which I know is EOL).
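For anyone hitting this, here is a sketch of the recovery from the grub shell after booting rescue media, plus the preventive variant that can be run before a disk fails. The device and partition numbers ((hd0,0), /dev/sdb) are assumptions; check them with 'find /grub/stage1' and adjust for your layout:

    # Recovery: after the failed first disk is removed, the surviving
    # disk is enumerated by the BIOS as hd0.
    grub> find /grub/stage1      # locate the partition(s) holding /boot
    grub> root (hd0,0)           # point grub at the surviving /boot partition
    grub> setup (hd0)            # write the MBR to that disk
    grub> quit

    # Prevention: on a running system, map the second disk (assumed here
    # to be /dev/sdb) to hd0 so the MBR written to it references itself
    # once it actually becomes the boot disk.
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)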
Anaconda is the package responsible for installing grub on both hard drives during installation. This issue has been fixed in the past (in Fedora), but I don't know when, and it's possible the fix happened after the RHEL 6 anaconda was forked from Fedora.
Thanks for reassigning this. It does not seem to have been addressed in RHEL 6 yet, but I will be happy to test next week when I return to my office to confirm.
Since the RHEL 6.3 External Beta has begun and this bug remains unresolved, it has been rejected because it was not proposed as an exception or a blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, for the next release of Red Hat Enterprise Linux.
Alright, we're going to need more data to figure out what's happening here. Please post the various logs from anaconda. Additionally, if you could zero the MBRs, do a RAID 1 install with /boot on the array, and then dd the first 512 bytes of each disk and attach them, that would be quite helpful.
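For example, something along these lines (the disk names /dev/sda and /dev/sdb are assumptions; substitute the actual array members):

    # Capture the first 512 bytes (the MBR) of each RAID member disk
    dd if=/dev/sda of=sda-mbr.bin bs=512 count=1
    dd if=/dev/sdb of=sdb-mbr.bin bs=512 count=1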
Moving to CLOSED INSUFFICIENT_DATA. If you can provide the information requested in the previous comment, please reopen the bug and we will resume the investigation of the issue.