Since the last release, cockpit-storaged, when run in Anaconda mode, defaults to using metadata version 1.0 for MDRAID: https://bugzilla.redhat.com/show_bug.cgi?id=2352953. This was done to allow bootloaders on top of RAID, since bootloaders are limited to metadata versions 0.9 and 1.0. It seems that a RAID1 MDRAID device with metadata 1.0, which is what cockpit-storage currently uses when creating MDRAID devices, is incorrectly detected as broken by parted once we put a GPT table on it. That causes blivet to ask parted to fix it before proceeding with the installation, which crashes the installer.

anaconda-core-42.29.7
anaconda-webui-30
cockpit-storaged-336

Reproducible: Always

Steps to Reproduce:
1. Choose two disks, vda and vdb
2. Go to cockpit-storage and create RAID1 directly on the disks
3. Create a GPT partition table on the RAID device
4. Create biosboot, /boot and / partitions
5. Exit the installer and proceed to the review screen
6. Start the installation

Actual Results:
The installer crashes either at step 5, before the installation starts, or right after starting the installation at step 6. See the attached screencast for the exact reproducer.

Unless we can expect a very fast fix of https://bugzilla.redhat.com/show_bug.cgi?id=2355323 for fedora-42, I propose to revert the fix for https://bugzilla.redhat.com/show_bug.cgi?id=2352953, so that cockpit-storage defaults to metadata 1.2 again and disallows bootloaders on RAID for now. This will allow users to at least use RAID for their rootfs. Putting the stage1 and stage2 bootloader devices on RAID is not as important, and is also not required by the release criteria.
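For reference, a minimal sketch of the same setup outside Anaconda that should hit the misdetection. The device names follow the reproducer above; the exact parted behavior described in the comments is my assumption, not captured output:

# mdadm --create md0 --run --level=1 --metadata=1.0 --raid-devices=2 /dev/vda /dev/vdb
# parted /dev/md/md0 -s mktable gpt
## Probing the array device is fine. Probing a member directly sees the
## GPT from inside the array, but with the backup header not at the end
## of the member (the 1.0 superblock sits there), which parted is
## expected to flag as broken and offer to "fix" (assumed, not verified):
# parted /dev/vda -s print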
I sent https://github.com/cockpit-project/cockpit/pull/21793 to revert the previous "fix" for bug 2352953, as requested by Katerina.
The problem here is that parted is being run on a mdraid member. This should not be done. blkid/udev can correctly distinguish mdraid on GPT from GPT on mdraid, and consequently neither UDisks2 nor Cockpit gets confused about this. Whatever code is calling parted on a mdraid member should be fixed.

Check this out. First, mdraid on bare disks, with GPT inside the raid:

# mdadm --create bare --run --level=1 --metadata=1.0 --raid-devices=2 /dev/sda /dev/sdb
mdadm: array /dev/md/bare started.
# parted /dev/md/bare -s mktable gpt
# blkid -p /dev/sda
/dev/sda: UUID="280bac09-2eed-f646-75ca-be09d1268b36" UUID_SUB="1b635ad0-b889-4d26-1c60-3344a98f1aa4" LABEL="dev:bare" VERSION="1.0" TYPE="linux_raid_member" USAGE="raid"
# blkid -p /dev/md/bare
/dev/md/bare: PTUUID="b193ae51-35ac-4d00-8937-91110cfe84a2" PTTYPE="gpt"

Now mdraid on partitions:

# parted /dev/sdc -s mktable gpt
# parted /dev/sdc -s mkpart primary ext2 1M 2G
# parted /dev/sdd -s mktable gpt
# parted /dev/sdd -s mkpart primary ext2 1M 2G
# mdadm --create parts --run --level=1 --metadata=1.0 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: array /dev/md/parts started.
# blkid -p /dev/sdc
/dev/sdc: PTUUID="0b64a21c-a125-4637-ab10-16c0736575b2" PTTYPE="gpt"

In the first case, running parted on /dev/sda would be a mistake: blkid reports TYPE="linux_raid_member" for it, and we know how tricky mdraid metadata 1.0 is (the superblock sits at the end of the member, so the member starts with the same bytes as the array contents, including the array's GPT).
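As a sketch of the kind of guard the parted caller could use (illustrative shell only; the real fix belongs in whatever blivet code probes the members):

dev=/dev/sda
# Refuse to run raw partitioning tools on mdraid members;
# blkid's low-level probe already reports their TYPE reliably.
if [ "$(blkid -p -o value -s TYPE "$dev")" = "linux_raid_member" ]; then
    echo "$dev is a mdraid member, not probing it with parted" >&2
else
    parted -s "$dev" print
fi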