Bug 2152167

Summary: [RFE] Allow to create a softraid on whole disks ("MDRAID set") when using the GUI
Product: Red Hat Enterprise Linux 8
Reporter: Renaud Métrich <rmetrich>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED MIGRATED
QA Contact: Release Test Team <release-test-team-automation>
Severity: medium
Docs Contact:
Priority: medium
Version: 8.7
CC: jstodola, sbarcomb
Target Milestone: rc
Keywords: FutureFeature, MigratedToJIRA
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-18 15:59:01 UTC
Type: Story
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- Rescan just after creating the Raid (FAIL: no disk found) (flags: none)
- Rescan after restarting the Raid again (WORKS: "MDRaid set" seen) (flags: none)

Description Renaud Métrich 2022-12-09 14:54:28 UTC
Created attachment 1931359 [details]
Rescan just after creating the Raid (FAIL: no disk found)

Description of problem:

This is in some ways a continuation of BZ #2152092.

We have customers trying to install a SoftRaid built from whole disks.
On UEFI, this requires using 1.0 metadata so that the firmware sees the UEFI partition created on top of the SoftRaid.
In BIOS mode there is no special requirement, except on PPC64le, where 1.0 metadata must also be used so that the firmware sees the PReP partition.

The problem is that there is no way to tell the GUI to create such a SoftRaid.
The only option is the command line, e.g.:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# mdadm --create /dev/md0 --level raid1 --metadata=1.0 --raid-disks=2 /dev/vda /dev/vdb
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

When doing so, a rescan of the disks in the GUI shows the array, and partitions can be created on top of it [1].

In practice it is more complicated, because for some reason a rescan alone is not enough.
Once the SoftRaid has been created, it must be torn DOWN then brought UP again; otherwise Anaconda sees neither the SoftRaid nor any disk at all.

See screenshots attached:

1. "Rescan just after creating the Raid.png" (no disk seen anymore)

At this step, I restart the Raid:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

2. "Rescan after restarting the Raid again.png" (MDRAID set (mirror) seen)


[1] On PPC64le this doesn't work, because Anaconda complains that the PReP partition is not valid for the bootloader ("No usable boot drive was found"), which is not actually true: the system boots fine when the partition is created on the command line before configuring with the GUI.
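For reference, the manual workaround above (create the array, then stop and reassemble it so Anaconda picks it up) can be sketched as a single shell function. This is only an illustration: the function name and the block-device guard are additions, and the member devices are passed in by the caller (the commands above used /dev/vda and /dev/vdb):

```shell
# Sketch of the whole-disk MDRAID workaround, e.g. run from tty2 during install.
# Usage: make_mdraid_set /dev/vda /dev/vdb
make_mdraid_set() {
    local disk
    for disk in "$@"; do
        if [ ! -b "$disk" ]; then
            echo "ERROR: $disk is not a block device" >&2
            return 1
        fi
    done

    # 1.0 metadata sits at the END of each member device, so the firmware
    # still sees an ordinary partition table at the start of the disk.
    mdadm --create /dev/md0 --level raid1 --metadata=1.0 \
          --raid-disks="$#" "$@" || return 1

    # A plain rescan in the GUI is not enough: stop the array and
    # reassemble it so Anaconda sees it (see the attached screenshots).
    mdadm --stop /dev/md0 || return 1
    mdadm --assemble --scan
}
```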

Version-Release number of selected component (if applicable):

anaconda-33.16.7.12-1.el8

How reproducible:

Always

Steps to Reproduce:

1. Create a VM with 2 disks

2. Start the installer

3. Try to create an MDRAID set spanning the whole disks from the GUI

Actual results:

There is no way to create an "MDRAID set" from the GUI

Expected results:

Some easy way to create an "MDRAID set" from the GUI

Comment 1 Renaud Métrich 2022-12-09 14:55:32 UTC
Created attachment 1931360 [details]
Rescan after restarting the Raid again (WORKS: "MDRaid set" seen)

Comment 2 RHEL Program Management 2023-09-18 15:57:49 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 3 RHEL Program Management 2023-09-18 15:59:01 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.