Bug 1168777 - Custom part: cannot place bootloader stage1 partition on any disk but the first (UEFI, omapARM...) without messy workaround
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: anaconda
Version: 7.0
Hardware: All
OS: All
Priority: medium
Severity: high
Target Milestone: rc
Assignee: Anaconda Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On: 1303217
Blocks:
 
Reported: 2014-11-28 01:14 UTC by Adam Williamson
Modified: 2020-12-15 07:32 UTC (History)
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1303217 (view as bug list)
Environment:
Last Closed: 2020-12-15 07:32:05 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Bugzilla | ID: 1168118 | Private: no | Priority: unspecified | Status: CLOSED | Summary: Custom UEFI layout with /boot/efi on second disk fails: is_valid_stage1_device not called on disks after the first | Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1168118

Description Adam Williamson 2014-11-28 01:14:18 UTC
I've been investigating https://bugzilla.redhat.com/show_bug.cgi?id=1168118 all day. I think I nailed it down reasonably well. I just tested, and it can also be reproduced on RHEL 7 (well, I had a CentOS 7.0 image lying around so I used that, but same difference). I expect it's in 7.1 and 7.2 as well, since it's never been fixed in Fedora.

The reproducer, as described in #1168118, is:

* Boot a UEFI system with two empty disks attached (Xda and Xdb)
* Select both disks and go to custom partitioning
* Create a layout of simple partitions with /boot and /boot/efi on Xdb, / and swap on Xda, all correct (sufficient size, /boot/efi as ESP filesystem)
* Try to complete custom partitioning

Actually, I think it doesn't matter where / and swap are; the only important thing is that more than one disk is selected and /boot/efi is on a disk other than the first.

What's going on is that the Bootloader class's execute() method in pyanaconda/kickstart.py always sets storage.bootloader.stage1_disk to the first disk (modulo a couple of checks we don't really need to worry about here), unless its bootDrive variable has been set to something, which AFAICS only happens through user interaction on the 'Full disk summary and bootloader' screen or from a kickstart. If the stage1 target mount point (/boot/efi, in the UEFI case) is on any other disk, everything falls apart: only the partitions on the first disk are considered as possible stage1 target devices, none of them qualifies, and the installer believes there's no valid target.
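To make the failure mode concrete, here is a minimal sketch of the flow described above (hypothetical names and structure, not the actual pyanaconda code): the boot drive silently falls back to the first selected disk, and stage1 candidates are then drawn only from that disk.

# Simplified, illustrative sketch of the behavior described above; not the
# real pyanaconda implementation.
def execute(storage, bootloader_data, selected_disks):
    if bootloader_data.bootDrive:
        # Only set via the "Full disk summary and bootloader" screen or a kickstart.
        stage1_disk = next(d for d in selected_disks
                           if d.name == bootloader_data.bootDrive)
    else:
        # Fallback: always the first selected disk.
        stage1_disk = selected_disks[0]
    storage.bootloader.stage1_disk = stage1_disk

def find_stage1_device(storage):
    # Only partitions on stage1_disk are considered, so a /boot/efi created
    # on the second disk is never checked and no valid target is found.
    candidates = [p for p in storage.partitions
                  if p.disk == storage.bootloader.stage1_disk]
    for part in candidates:
        if storage.bootloader.is_valid_stage1_device(part):
            return part
    return None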

I wondered whether it at least works if you go into the "Full disk summary" page and set the appropriate disk as the 'boot disk' before going into custom partitioning, but it doesn't: between doing that and entering custom partitioning, _doExecute() in pyanaconda/ui/gui/spokes/storage.py runs, decides the choice was invalid (there's no /boot/efi partition yet, so no valid stage1 device), and resets bootDrive to "".
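Roughly, the reset happens along these lines (again a hedged, simplified sketch with made-up helper names, not the actual _doExecute body): validation fails because /boot/efi does not exist yet, and the error handler throws away the user's boot-disk choice.

class BootLoaderError(Exception):
    # Stand-in for the installer's bootloader validation error.
    pass

def _do_execute(storage, data):
    try:
        # The custom layout does not exist yet, so there is no /boot/efi
        # partition anywhere and stage1 validation necessarily fails.
        if find_stage1_device(storage) is None:  # helper from the sketch above
            raise BootLoaderError("no valid stage1 target device")
    except BootLoaderError:
        # The user's explicit boot-disk choice from the "Full disk summary"
        # screen is discarded here, so it never reaches custom partitioning.
        data.bootloader.bootDrive = ""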

I posted a patch to #1168118 which seems to fix this for me, and doesn't seem to break various other UEFI and BIOS paths I tested (but this is complex and sensitive code and it's *entirely* possible I missed something or there's a better way to fix it - but it's a start, at least). I strongly suspect this affects omapARM as well, because it really affects anything that uses a mounted partition as a stage1 target.

I *suspect* there may be similar issues with biosboot (BIOS + GPT) and prepboot (PPC) as well, but I don't have PPC hardware and the 'gpt' parameter which should force a gpt-on-BIOS install doesn't seem to work, so I can't easily test those cases right now.

Comment 2 David Lehman 2015-07-08 15:36:36 UTC
Step 1 is definitely to fix the storage spoke so it does not unset anything on BootLoaderError in _doExecute when preparing to enter the custom storage spoke. Until then it is impossible for users to set a boot disk for use with a custom layout.

There are probably some platforms for which there is no notion of a boot disk. Even for these platforms we will need to restrict the stage1 device to a locally attached disk. I'm not sure if we do this now or not.
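A hedged sketch of the direction this comment suggests (illustrative code only, not the eventual patch): keep the user's boot-disk choice when entering the custom spoke and re-validate once the layout exists, while still limiting stage1 to locally attached disks.

def _do_execute_fixed(storage, data, entering_custom_spoke=False):
    try:
        if find_stage1_device(storage) is None:
            raise BootLoaderError("no valid stage1 target device")
    except BootLoaderError:
        if entering_custom_spoke:
            # The user may be about to create the stage1 partition
            # (/boot/efi, biosboot, prepboot) in the custom spoke, so keep
            # their boot-disk choice and re-validate once the layout exists.
            return
        data.bootloader.bootDrive = ""

def stage1_disk_candidates(storage):
    # Even on platforms with no real "boot disk" concept, restrict stage1
    # to locally attached disks.
    return [disk for disk in storage.disks if disk.is_local]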

Comment 3 David Cantrell 2015-07-08 18:07:06 UTC
Upon further investigation, the proper fix for this bug is too untested and too invasive for RHEL 7.2.  Moving this to the 7.3 planning list for now.

Comment 6 Jiri Konecny 2019-05-23 19:24:25 UTC
Hi Adam,

Is this issue still valid? Could you please retest it?

Comment 7 Adam Williamson 2019-05-23 19:28:54 UTC
Per discussion on the Fedora original - https://bugzilla.redhat.com/show_bug.cgi?id=1168118 - yeah, I believe it is. IIRC, this is quite awkward to fix because it would involve kinda rejigging some core assumptions of the bootloader/partitioning code. https://bugzilla.redhat.com/show_bug.cgi?id=1168118#c59 is a good starting point, I think.

Comment 8 Robert Shannon Jr. 2019-06-18 18:32:53 UTC
Situation: When performing a new installation of Fedora 30 using Anaconda on a system with an SSD and an NVMe PCIe drive, the "No valid bootloader target device found" error occurs if some Linux partitions are placed on one drive and some on the other. I have repeatedly confirmed this. /boot/efi was specified to be created on the second (NVMe) device.

My workaround: when all the Linux installation partitions were created on the second (NVMe) device and the first device was left completely unused, with no partitions created on it, the installation went without a hitch.

Environment:
Dell M7510 Mobile Precision Workstation with a Samsung 860 2.5" SSD and an Intel 660p 2280 NVMe PCIe drive, attempting a new installation of either Fedora 30 Workstation or a Fedora 30 Spin using Anaconda from a .iso image burned to DVD. This occurs whether using the Custom or the Advanced (Blivet) install path.

Comment 10 RHEL Program Management 2020-12-15 07:32:05 UTC
After evaluating this issue, we have concluded that there are no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

