Red Hat Bugzilla – Bug 617504
Installing to MD RAID1 /boot results in an unbootable system
Last modified: 2010-07-28 14:12:33 EDT
Description of problem:
It would seem that anaconda creates MD RAID devices with v1.x metadata by default. When this is applied to a RAID1 device used as the /boot volume, booting fails because v1.x metadata sits at the front of the disk, which makes grub fail. With the version of grub used, only metadata=0.90 MD devices can be used for /boot. Anaconda needs to be aware of this and act accordingly until grub is updated to work with metadata=1.x devices.
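For reference, the metadata version determines where mdadm writes its superblock on each member. A minimal sketch of creating a /boot mirror with 0.90 metadata by hand, assuming /dev/sda1 and /dev/sdb1 are the intended members (example device names, not from this report):

```shell
# 0.90 (and 1.0) superblocks live at the END of each member, so the
# filesystem starts at the beginning of the partition where this era's
# grub expects it; 1.1/1.2 superblocks sit at the START and break that.
# Example member partitions -- adjust to your own layout.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1
```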
Version-Release number of selected component (if applicable):
RHEL6 beta 2
Steps to Reproduce:
During the install, create an MD RAID1 device and set it to be mounted under /boot. When the installation is complete, the system will fail to boot.
Are you sure you are seeing this with a separate /boot? There is a known (documented and fixed) issue in beta2 where, if you have / on an mdraid mirror without a separate /boot, it will use 1.1 metadata for / (and thus for /boot). But if you have a separate /boot, beta2 will use 1.0 metadata, which, just like 0.90 metadata, lives at the end of the partition and thus is not a problem.
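To confirm which metadata version the installer actually used, one can inspect the assembled array or a member partition (device names below are examples):

```shell
# Report the metadata version of the assembled array.
mdadm --detail /dev/md0 | grep -i version

# Or examine a member partition directly; this also reveals any
# stale superblock left over from a previous array on the disk.
mdadm --examine /dev/sda1 | grep -i version
```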
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.
** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
I am positively sure it's a separate /boot, since the root was RAID5.
It occurs to me, however, that the disks I was installing onto were previously used in a RAID stripe, so it is possible some leftover metadata on them confused things. To prevent that being a problem, perhaps the partitions/devices should be cleared with --zero-superblock before the RAID devices are created, just to make sure?
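The clearing step suggested above would look roughly like this (a sketch with example device names; the partitions must not be part of an active array):

```shell
# Stop the array first if it was auto-assembled from stale metadata.
mdadm --stop /dev/md0

# Erase any old MD superblock on each former member partition.
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1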
(In reply to comment #4)
> I am positively sure it's a separate /boot, since the root was RAID5.
> It occurs to me, however, that the disks I was installing onto were used in a
> RAID stripe before, so it is possible there was some leftover metadata on there
> that confused things. To prevent that being a problem, perhaps the
> partitions/devices should be cleared with --zero-superblock before the RAID
> devices are created just to make sure?
We already clear the first and last 5 MB of a partition before using it, so that should not be a problem.
Can you try to reproduce this, and at the end of the installation (before rebooting), switch to the shell on tty2 (Ctrl+Alt+F2), collect all the /tmp/*log files (for example with scp), and then attach those log files here?
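Copying the logs off the installer from the tty2 shell could be done along these lines (the destination user, host, and directory are placeholders):

```shell
# Run from the tty2 shell at the end of the install, before rebooting.
# user@host and the target directory are hypothetical examples.
scp /tmp/*log user@host:/tmp/install-logs/
```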
Thanks & Regards,
Feel free to reopen the bug if you can provide the information requested in comment #6.