Description of problem:
On Fedora 32, mdadm udev rules are installed in /usr/lib/udev/rules.d/rules.d/, so the rules for create and assembly are ignored by udev.
$ rpm -ql mdadm
One of the issues is that the "/dev/md/<name>" symlinks are not being created:
$ sudo mdadm --assemble --scan
mdadm: /dev/md/blivet has been started with 2 drives.
mdadm: timeout waiting for /dev/md/blivet
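To make the failure mode concrete, here is a minimal shell illustration (the rule filename is assumed from upstream mdadm, not taken from this report): udev reads rules only directly from /usr/lib/udev/rules.d/, so a file in the doubled rules.d/rules.d/ directory is never loaded.

```shell
# Illustration only: why a rule in the doubled directory is ignored.
# The filename 63-md-raid-arrays.rules is assumed from upstream mdadm.
installed="/usr/lib/udev/rules.d/rules.d/63-md-raid-arrays.rules"
expected_dir="/usr/lib/udev/rules.d"
case "$(dirname "$installed")" in
  "$expected_dir") echo "rule would be loaded" ;;
  *)               echo "rule is ignored by udev" ;;
esac
# → prints "rule is ignored by udev"
```

On an affected system, `rpm -ql mdadm | grep rules.d` should show the rules under the doubled path.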
Related upstream issue for UDisks: https://github.com/storaged-project/udisks/issues/735
And I believe the installation blockers for F32 https://bugzilla.redhat.com/show_bug.cgi?id=1804080 and https://bugzilla.redhat.com/show_bug.cgi?id=1798792 are also related to this, but I'm still debugging these.
Proposed as a Blocker and Freeze Exception for 32-beta by Fedora user dmach using the blocker tracking app because:
LVM logical volume does not activate when it's on top of md raid.
If such a volume is in fstab, the system may not be able to mount it and the boot then fails. I have this problem with a data volume mounted under /mnt. It is possible that / and /usr work fine because they are mounted in the initramfs.
I've created a PR with a fix for mdadm spec file https://src.fedoraproject.org/rpms/mdadm/pull-request/7
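For context, here is a hypothetical sketch of the kind of spec change involved (this is NOT the actual diff from the PR, and the make invocation is an assumption): mdadm's build appends rules.d to the udev directory it is given, so the spec needs to pass the udev base directory rather than the rules.d directory.

```
# Hypothetical sketch only -- see the linked PR for the real change.
-make install ... UDEVDIR=%{_udevrulesdir}   # /usr/lib/udev/rules.d -> rules end up in rules.d/rules.d
+make install ... UDEVDIR=%{_udevdir}        # /usr/lib/udev        -> rules land in rules.d
```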
Could this bug prevent installation of Fedora 32 to a system with an existing, valid LVM-on-mdadm raid? If yes, that would violate one or both of:
- Correctly interpret, and modify as described below, any disk with a valid ms-dos or gpt disk label and partition table containing ext4 partitions, LVM and/or btrfs volumes, and/or software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions
- Create mount points backed by ext4 partitions, LVM volumes or btrfs volumes, or software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions
Or does post-install startup fail? In that case it probably violates both of these Beta criteria:
- A system installed without a graphical package set must boot to a working login prompt without any unintended user intervention, and all virtual consoles intended to provide a working login prompt must do so.
- No part of any release-blocking desktop's panel (or equivalent) configuration may crash on startup or be entirely non-functional.
*** Bug 1811155 has been marked as a duplicate of this bug. ***
If you read the criterion carefully, this situation is excluded:
"Create mount points backed by ext4 partitions, LVM volumes or btrfs volumes, or software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions"
Note that it's an 'or' sentence and there's no wording about combinations. The intent is that the following things are covered:
* Mount points backed by ext4 partitions
* Mount points backed by LVM volumes
* Mount points backed by btrfs volumes
* Mount points backed by software RAID arrays containing plain ext4 partitions
But *not* combinations of those. So, LVM-on-software-RAID is out (under this criterion).
However, it would be covered under the Final criterion: "The installer must be able to create and install to any workable partition layout using any file system and/or container format combination offered in a default installer configuration."
since that covers just about anything. :P
We do have one case of this breaking an upgrade: bug 1811155, whose reporter described it thus on devel@: "[after upgrade] /dev/disk/by-uuid wasn't and still isn't populated with soft raid volumes, I had to change fstab to point to /dev/mdXXXpX to mount my home and other needed folders. From cockpit it also says 'Unrecognized Data' for all my mdraid partitions."
That probably still doesn't quite hit the Beta criteria (which only cover upgrades of 'default installs'), but it's worth noting. I'd definitely be +1 Beta FE, +1 Final blocker at least on this.
(In reply to Adam Williamson from comment #6)
> If you read the criterion carefully, this situation is excluded
Yep, I remember, but only once you explained it (again).
> "Create mount points backed by ext4 partitions, LVM volumes or btrfs
> volumes, or software RAID arrays at RAID levels 0, 1 and 5 containing ext4
What do you think about
's/, or/; or'
Fast forward: this happens again, you explain it again, and it'll be an 'oh yeah' moment, i.e. nothing changes. But there might be a chance. And/or just copy/paste your latest explanation into an 'expand' note in the criterion.
I'm +1 beta FE, +1 final blocker.
+1 for beta blocker, because a) the fix is simple, b) the bug is grave enough for affected people. I don't think we should release a Beta which is missing udev rules for a class of supported storage hardware.
+1 for beta blocker, because c) it's not obvious where the problem is -- I've just sunk about 20 hours into debugging this and getting here.
FEDORA-2020-0274f17e66 has been submitted as an update to Fedora 32. https://bodhi.fedoraproject.org/updates/FEDORA-2020-0274f17e66
I haven't figured out whether or how this bug is different from bug 1804080, which is already a beta blocker. If mdadm raid always fails in the same way for the same reason as in that bug, then I think this bug is probably a dup.
(In reply to Chris Murphy from comment #4)
> Could this bug prevent installation of Fedora 32 to a system with an
> existing, valid, LVM on mdadm raid? If yes that would violate one or both:
I can confirm that my existing mdadm raid partition can't be read under F32 out of the box.
I have a multi-boot system with F30, F31, RHEL 7, and F32.
Only F32 has the problem of the missing /dev/md/ directory.
I had to add the root partition's array to /etc/mdadm.conf to fix booting F32, but this normally isn't needed.
And UIs like gnome-disk-utility or blivet-gui show me the mdadm devices, but without a filesystem!
Not sure if this is related but only an older kernel (5.4.0) does boot.
Every kernel > 5.4.0 in f32 failed to boot.
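For reference, the /etc/mdadm.conf workaround mentioned above amounts to adding explicit ARRAY lines so assembly no longer depends on the misplaced udev rules. A hypothetical sketch (the name and UUID below are placeholders; real lines come from `mdadm --detail --scan`):

```
# /etc/mdadm.conf -- hypothetical example line; generate real ones with:
#   mdadm --detail --scan
ARRAY /dev/md/blivet metadata=1.2 name=host:blivet UUID=<array-uuid>
```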
(In reply to Wolfgang Ulbrich from comment #12)
> Not sure if this is related but only an older kernel (5.4.0) does boot.
> Every kernel > 5.4.0 in f32 failed to boot.
This is because the bug only shows for initramfs images that were built with the affected mdadm package release. If you recreated the image using dracut now, even your old pre-F32 kernel wouldn't boot.
OK, the update fixes the problem with booting/reading existing mdadm partitions on F32.
But this showed that all installed initramfs images have missing or broken mdadm support, and I had to regenerate the initramfs for the latest kernel to get things working.
[rave@mother ~]$ uname -r
My older 5.4.0 kernel was installed before the culprit mdadm release, which is why I could still boot into it.
That means we need at least one new kernel build for our installer images after mdadm-4.1-4.fc32 reaches stable.
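A minimal sketch of the regeneration step described above (assumes dracut and the default Fedora /boot layout; the dracut line itself must be run as root, so it is only printed here):

```shell
# Rebuild the initramfs for the running kernel so the fixed mdadm
# udev rules get picked up (sketch; run the dracut command as root).
kver="$(uname -r)"
img="/boot/initramfs-${kver}.img"
echo "would rebuild: $img with: dracut --force $img $kver"
```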
I know my vote doesn't count :)
+1 for beta blocker
mdadm-4.1-4.fc32 has been pushed to the Fedora 32 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-0274f17e66
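For convenience, a sketch of pulling this update from updates-testing before it reaches stable (assumes Fedora 32 with dnf; the command must be run as root, so it is only printed here; `--advisory` restricts the upgrade to this specific Bodhi update):

```shell
# Sketch: install the fixed mdadm from updates-testing (run as root).
advisory="FEDORA-2020-0274f17e66"
echo "dnf upgrade --enablerepo=updates-testing --advisory=${advisory}"
```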
Wolfgang: the initramfs images are generated fresh in each compose, of course. The 'doesn't happen until a kernel update unless you regenerate it manually' behaviour applies to installed systems.
Discussed during the 2020-03-09 blocker review meeting: 
This bug was classified as an "AcceptedFreezeException", with the blocker decision delayed: this is clearly a bad bug and we'd like it fixed for Beta, but a couple of outstanding questions prevent us from deciding blocker status yet. The bug may be the same as 1804080, which is already a blocker; we will test to confirm this, and close the bug as a dupe if they turn out to be the same.
mdadm-4.1-4.fc32 has been pushed to the Fedora 32 stable repository. If problems still persist, please make note of it in this bug report.