Bug 617504

Summary: Installing to MD RAID1 /boot results in an unbootable system
Product: Red Hat Enterprise Linux 6
Reporter: Gordan Bobic <gordan>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Release Test Team <release-test-team-automation>
Severity: medium
Priority: low
Version: 6.0
CC: hdegoede
Target Milestone: rc
Keywords: RHELNAK
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2010-07-28 18:12:33 UTC

Description Gordan Bobic 2010-07-23 09:46:00 UTC
Description of problem:
It would seem that anaconda creates MD RAID devices with v1.x metadata by default. When such a RAID1 device is used as the /boot volume, the system becomes unbootable, because v1.x metadata sits at the front of the disk, which grub cannot handle. With the version of grub shipped, only metadata=0.90 MD devices can be used for /boot. Anaconda needs to be aware of this and act accordingly until grub is updated to work with metadata=1.x devices.
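For reference, the usual workaround on an affected system is to build the /boot mirror by hand with 0.90 metadata. A sketch follows; the device names (/dev/md0, /dev/sda1, /dev/sdb1) are placeholders, not taken from this report:

```shell
# Illustrative only -- substitute your real partitions; this is destructive.
# Create the /boot mirror with 0.90 metadata, which lives at the END of the
# partition and is therefore invisible to this version of grub:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1

# Confirm which metadata version the resulting array actually carries:
mdadm --detail /dev/md0 | grep Version
```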

Version-Release number of selected component (if applicable):
RHEL6 beta 2

How reproducible:
Every time.

Steps to Reproduce:
1. During the install, create an MD RAID1 device and set it to be mounted under /boot.
2. Complete the installation and reboot; the system will fail to boot.

Comment 2 Hans de Goede 2010-07-23 10:02:37 UTC
Hi,

Are you sure you are seeing this with a separate /boot? There is a known (documented and fixed) issue in beta2 where, if you have / on an mdraid mirror without a separate /boot, it will use 1.1 metadata for / (and thus for /boot). But if you have a separate /boot, beta2 will use 1.0 metadata, which, just like 0.9 metadata, lives at the end of the partition and thus is not a problem.

Regards,

Hans

Comment 3 RHEL Program Management 2010-07-23 10:17:43 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 4 Gordan Bobic 2010-07-23 10:20:26 UTC
I am positively sure it's a separate /boot, since the root was RAID5.

It occurs to me, however, that the disks I was installing onto were used in a RAID stripe before, so it is possible there was some leftover metadata on there that confused things. To prevent that being a problem, perhaps the partitions/devices should be cleared with --zero-superblock before the RAID devices are created just to make sure?
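For reference, clearing stale md metadata by hand before reinstalling might look like the following sketch; the device names are placeholders and the commands are destructive to any RAID metadata on those partitions:

```shell
# Illustrative only -- substitute your real devices.
# Stop any array that auto-assembled from the stale superblocks:
mdadm --stop /dev/md0

# Zero the md superblock on each former member partition:
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
```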

Comment 5 RHEL Program Management 2010-07-23 10:37:40 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 6 Hans de Goede 2010-07-23 11:00:23 UTC
Hi,

(In reply to comment #4)
> I am positively sure it's a separate /boot, since the root was RAID5.

Ok.

> It occurs to me, however, that the disks I was installing onto were used in a
> RAID stripe before, so it is possible there was some leftover metadata on there
> that confused things. To prevent that being a problem, perhaps the
> partitions/devices should be cleared with --zero-superblock before the RAID
> devices are created just to make sure?    

We already clear the first and last 5 MB of a partition before using it, so that should not be a problem.

Can you try to reproduce this, and at the end of the installation (so before rebooting), switch to the shell at tty2 (ctrl + alt + F2), and collect all the /tmp/*log files (you can for example use scp), and then attach those log files here?
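For reference, collecting the logs from the installer shell might look like this sketch; the destination host and user are placeholders:

```shell
# From the shell on tty2 (Ctrl+Alt+F2), after the install completes
# but before rebooting:
ls /tmp/*log                       # anaconda writes its logs under /tmp
scp /tmp/*log user@example.com:    # copy them off the machine over ssh
```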

Thanks & Regards,

Hans

Comment 7 David Cantrell 2010-07-28 18:12:33 UTC
Feel free to reopen the bug if you can provide the information requested in comment #6.