Red Hat Bugzilla – Bug 1281535
mdadm 3.3.4 - boot failure - problem with initramfs
Last modified: 2016-01-11 12:55:09 EST
Description of problem:
I think this might be related to the mdadm update; note that I am still able to boot with kernel 4.2.5.
Essentially, the system starts to boot but hangs almost immediately. If I drop into the busybox initramfs prompt, I can see that there are no partitions available for my LVM system/home/swap volumes, and there are no md### entries for the firmware RAID drives.
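To confirm that symptom from the emergency shell, something like the following can be run there (a sketch; it assumes the usual dracut/busybox tools are present in the initramfs, and each command is guarded so it just reports when a tool or file is absent):

```shell
# Diagnostics for the dracut emergency shell: do any md arrays exist,
# are there /dev/md* nodes, and can LVM see its physical volumes?
cat /proc/mdstat 2>/dev/null || echo "no md arrays assembled"
ls /dev/md* 2>/dev/null     || echo "no /dev/md* device nodes"
lvm pvscan 2>/dev/null      || echo "lvm unavailable or no PVs found"
```

On the failing boot described above, all three checks would come back empty, which points at device detection rather than at mounting.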
Version-Release number of selected component (if applicable):
mdadm-3.3.4

How reproducible:
Every time this system boots. (I have another system that boots fine, which uses a true hardware RAID controller.)
Steps to Reproduce:
1. Update to kernel 4.2.6 on my firmware RAID system
2. Try to boot
System hangs at target 'Basic System' if I remember correctly. It's the third target.
I can't grab a journalctl log because the system can't write the files, but the issue is definitely that it's unable to mount my partitions.
Same thing with kernel 4.2.5-201 under FC22. LVM volumes cannot be found. Works fine under kernel 4.1.7-200.
So kernel 4.2.5 works with F23 but not with F22, and kernel 4.2.6 does not work with either?
4.2.5-200 does work under FC22; -201 is where lvm2 breaks.
That clears things up a lot. There were no changes between the 200 and 201 builds, 201 was a rebuild because the 200 kernels were signed with the wrong key for secureboot. Considering mdadm updated to mdadm-3.3.4-2.fc22 roughly the same time as the 201 update was submitted, and the F23 update for mdadm was just before the 4.2.6 kernel release, it is looking like that is the culprit. Reassigning.
Correction, 4.2.3-200 is the last FC22 kernel to work for me.
The 4.2.6-301 kernel still does not work, while the 4.2.5 does (on 23). I do not believe the mdadm version matters here.
I can try downgrading it "just in case" soon, but it makes no sense to me that the new version would work on older kernels but not new kernels while the old version would.
I'm not even sure this is an lvm issue, as I do not see any firmware RAID drives under /dev when the boot drops into the dracut shell. There's a problem here with detecting drives.
It makes perfect sense because you didn't recreate your initramfs when you upgraded mdadm. Your old kernels still use the old version in their initramfs images; any new kernel installed after the update would use the new one. As I said, there was zero difference between 200 and 201; the bump and rebuild was only to get the correct signature for secure boot. That clearly points to it not being a kernel change that caused your issue.
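The fix implied here is to regenerate the initramfs after the mdadm upgrade so the image contains the new binaries. A minimal sketch, assuming Fedora's default image paths and dracut, run as root on the affected machine:

```shell
# Rebuild the initramfs for the currently running kernel so it picks up
# the updated mdadm. -f overwrites the existing image; the trailing
# guard just reports if dracut is unavailable where this is run.
kver=$(uname -r)
dracut -f "/boot/initramfs-${kver}.img" "$kver" || echo "dracut failed or not present"

# Alternatively, rebuild the images for every installed kernel:
# dracut -f --regenerate-all
```

Package updates to mdadm/lvm2 normally trigger this rebuild themselves; running it by hand is only needed when the images and the installed tools have fallen out of sync, as in this bug.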
I wasn't getting the initramfs link. Now I understand the problem. I've tested this theory out and it appears to be correct. It seems that the boot process is broken in the newest mdadm, though the commands appear to run fine once booted.
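One way to test that theory is to compare the installed mdadm with the copy baked into each initramfs image (a sketch; lsinitrd ships with dracut, and the image paths are Fedora's defaults):

```shell
# Show the installed mdadm package, then list any mdadm files inside
# each initramfs image. Old images keep whatever mdadm was installed
# when they were generated.
rpm -q mdadm 2>/dev/null || echo "rpm query unavailable here"
for img in /boot/initramfs-*.img; do
    [ -e "$img" ] || continue          # glob did not match anything
    echo "== $img"
    lsinitrd "$img" 2>/dev/null | grep mdadm || echo "   (no mdadm listed)"
done
```

An image built before the update listing the old mdadm, while `rpm -q` reports the new one, would match the behavior seen here: old kernels boot, freshly installed ones do not.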
*** This bug has been marked as a duplicate of bug 912735 ***