Bug 1118580 - installer shouldn't permit the creation of md raid1 EFI System partitions
Summary: installer shouldn't permit the creation of md raid1 EFI System partitions
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: python-blivet
Version: 21
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Peter Jones
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2014-07-11 05:08 UTC by Chris Murphy
Modified: 2014-12-01 17:27 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1168641 (view as bug list)
Environment:
Last Closed: 2014-12-01 17:27:42 UTC




Links
Red Hat Bugzilla 788313 (Priority: None, Status: None, Summary: None, Last Updated: Never)

Internal Links: 788313

Description Chris Murphy 2014-07-11 05:08:07 UTC
Description of problem:
A new feature in anaconda (via blivet) in Fedora 21 is the ability to set the /boot/efi mount point's Device Type to RAID, raid1. (It actually looks like it would let me make it LVM Thin Provisioning too.)

Due to crash bug 1118150 I can't tell what the final result is, but one md member partition has its partition type GUID set to Linux RAID, while on the other disk it's set to Linux data.

Either way, the problem is that the firmware is entirely within its rights to ignore the partition, because its type doesn't declare it an EFI System partition. No other OS would see it as an EFI System partition either, so one will be created elsewhere, and the firmware will likely prefer that one since it has the correct type code.

If the type code is set to EFI System partition, then there are several problems:

- it's not really an EFI System partition; it's an md raid member first, and only once assembled is it something mountable at /boot/efi
- if we use metadata 1.1 or 1.2 (recommended), the firmware definitely won't use it, since those superblocks sit at or near the start of the partition and hide the FAT filesystem
- if we use metadata 1.0 to trick the firmware into using it, we run the risk of the firmware, a bootloader, another OS, or some other software updating one of the ESPs as if it were a standalone FAT filesystem. The md raid metadata isn't updated, so once md is running it has no idea the two members are out of sync. A scrub check will reveal errors; a scrub repair resyncs in an arbitrary direction, with no assurance the stale ESP won't be copied over the fresh one.

So I'm not sure how this is intended to work, but none of the ways I can imagine it working will actually work. If we had a separate drive with an ESP containing an mdadm.efi driver to teach the firmware's pre-boot environment how to assemble md raid sets, this could maybe work, but it would be an all-or-nothing approach for the entire drive: all raid1 or all raid5 or whatever, with no per-volume control. We'd be better off with an LVM EFI driver...
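The silent-divergence failure mode described above can be observed through md's sysfs scrub interface. A sketch, assuming a hypothetical array name md127 backing /boot/efi (requires root; substitute your own array):

```shell
# Trigger a consistency check of the mirror.
echo check > /sys/block/md127/md/sync_action

# After the check completes, read the mismatch counter.
cat /sys/block/md127/md/mismatch_cnt
# A nonzero count means the two ESP copies have silently diverged.
# "echo repair > .../sync_action" will resync them, but in an arbitrary
# direction -- there is no guarantee the stale copy isn't the source.
```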


Version-Release number of selected component (if applicable):
anaconda-21.46
python-blivet-0.60

How reproducible:
Always

Comment 1 mulhern 2014-07-11 13:00:53 UTC
This was a feature request for RHEL7 (bz#788313).

I might as well pick this one up, since I pushed the code to master.

Comment 2 Chris Murphy 2014-07-11 16:53:46 UTC
In my opinion the solution for this bug is RFE bug 1048999: the installer creates ESPs with identical content, and the ESP grub.cfg is never modified again. Instead, kernel updates modify /boot/grub2/grub.cfg just as always, and that file can be located on raid n.

Comment 3 Brian Lane 2014-07-11 20:14:39 UTC
I don't think that is possible. You have to have the grub.cfg alongside the EFI executable.

Comment 4 Chris Murphy 2014-07-11 20:53:04 UTC
(In reply to bcl@redhat.com from comment #3)
Yes, and one is still there, but it's a minimalist one that forwards to /boot/grub2/grub.cfg using the GRUB configfile command.

I've been using this and trying to break it for months. It works regardless of which device is disconnected. It works in the face of NVRAM entries being deleted or moving the drive(s) to different hardware.

Basically it's just like a BIOS installation, including the proper /etc/ symlink for grubby, so that grubby modifies /boot/grub2/grub.cfg instead of /boot/efi/EFI/fedora/grub.cfg.

See bug 1048999 comment 13 for an example static ESP "minimalist" grub.cfg for a single disk. Ubuntu does something similar but without the hints, which I find are typically bogus anyway. For multiple disks, the search command also includes the mduuid as well as the volume UUID, so that GRUB knows to assemble the array before looking for the boot volume.
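For illustration only, a forwarding grub.cfg of the kind described might look like the following sketch. The filesystem UUID here is a hypothetical placeholder; bug 1048999 comment 13 has the actual example, and the mdraid1x module line applies only when /boot lives on an md raid1 array.

```
# Minimalist static ESP grub.cfg: locate the /boot filesystem (assembling
# the md array first if needed) and hand off to the real config that
# grubby maintains at /boot/grub2/grub.cfg.
insmod mdraid1x
search --no-floppy --fs-uuid --set=root 1111aaaa-2222-bbbb-3333-cccc4444dddd
configfile /grub2/grub.cfg
```

Because this file never changes after installation, identical copies can sit on every ESP without ever going stale.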

Comment 5 mulhern 2014-09-02 15:21:53 UTC
Hi Peter,

I think that this should be yours, too.

Comment 6 Peter Jones 2014-12-01 17:27:42 UTC
I don't think you're necessarily wrong that bug 1048999 is the kind of setup we should be moving to in the future for this. That said, currently we allow the user to manually create a v1.0 md RAID 1 for /boot/efi. It's something you have to configure manually, and there are definitely caveats about doing so, but it does provide bootloader redundancy with a minimum of risk.
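The manual setup Peter describes could be sketched as follows (device names are hypothetical; v1.0 metadata puts the md superblock at the end of each member, so the firmware still sees what looks like a plain FAT filesystem at the start of the partition):

```shell
# Create a two-member raid1 array for the ESP with v1.0 (end-of-device)
# metadata, keeping the start of each member readable as FAT.
mdadm --create /dev/md/esp --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda1 /dev/sdb1

# Format the assembled array as FAT32 and mount it at /boot/efi.
mkfs.fat -F 32 /dev/md/esp
mount /dev/md/esp /boot/efi
```

The caveats from the bug description still apply: anything that writes to one member directly as FAT bypasses md and silently desynchronizes the mirror.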


Note You need to log in before you can comment on or make changes to this bug.