Created attachment 1014723 [details]
f22-beta-tc2-previous-system-recognition.jpg

Description of problem:
I tried to reinstall F22 Beta TC2 and the partitioning tool displayed all the RAID partitions as Unknown, whilst it had no problem recognising previous LVM volumes created directly on the physical drives. That makes it impossible to reuse old LVM volumes created on top of RAID devices.

Version-Release number of selected component (if applicable):
Fedora-Server-DVD-ppc64-22-Beta-TC2
Please attach the log files from /tmp
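A minimal way to gather them into a single attachment, from a terminal in the installer environment (the archive name is just a suggestion):

  tar czf /tmp/anaconda-logs.tar.gz /tmp/*.log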
I also experience this on the F22 KDE spin Beta, x86_64 (the arch of the original bug was ppc64). Adjusting bug details. Logs will be attached.
Created attachment 1019025 [details]
logs from Fedora-Live-KDE-x86_64-22_Beta-3

1) booted from Fedora-Live-KDE-x86_64-22_Beta-3.iso

2) selected sda, sdb and sdc

they contain
[liveuser@karhu tmp]$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid1 sdc1[3] sdb1[1]
      511988 blocks super 1.0 [3/2] [_UU]

md127 : active (auto-read-only) raid5 sda2[4] sdb2[1] sdc2[3]
      1952494592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

although F21 has them as md0 and md1
- the RAID1 was purposely broken under F21 to test F22 Beta
- the RAID5 contains my LVM PV
- I created a new LV under F21, planning to install F22 Beta into it

3) selected manual partitioning

actual results:
LVM Logical Volumes not available as install target in anaconda

expected results:
RAIDs assembled
LVM activated
LVs available as install targets

additional info:
before launching the installer, when booted in the Live system:
[liveuser@karhu tmp]$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid1 sdc1[3] sdb1[1]
      511988 blocks super 1.0 [3/2] [_UU]

md127 : active (auto-read-only) raid5 sda2[4] sdb2[1] sdc2[3]
      1952494592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

after the installer probed (this took long enough, without visual feedback in the GUI, that a normal user may believe nothing is happening; progress can be followed with "journalctl -f --full"):
[liveuser@karhu tmp]$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10]
unused devices: <none>
(In reply to Patrick C. F. Ernzer from comment #3)
> - the RAID1 was purposely broken under F21 to test F22 Beta

Installation to degraded RAID is not supported.

*** This bug has been marked as a duplicate of bug 129306 ***
(In reply to David Shea from comment #4)
> (In reply to Patrick C. F. Ernzer from comment #3)
> > - the RAID1 was purposely broken under F21 to test F22 Beta
>
> Installation to degraded RAID is not supported.

That does not explain why the RAID5 was not assembled.
The RAID1 has meanwhile been healed. pcfe to reproduce with a non-degraded RAID1 /boot, to show that the main issue here is that the RAID5 with my LVM PV was not usable by anaconda.
Created attachment 1024034 [details]
logs from Fedora-Live-KDE-x86_64-22_Beta-3

Fresh logs collected, this time from an installation attempt where the RAID was not broken.
(In reply to David Shea from comment #4)
> (In reply to Patrick C. F. Ernzer from comment #3)
> > - the RAID1 was purposely broken under F21 to test F22 Beta
>
> Installation to degraded RAID is not supported.

I have recreated the problem, this time with the RAID1 I use for /boot whole. Same problem as before: my pre-existing RAID1 (/boot) and RAID5 (PV for LVM) are not recognised by anaconda.
David, in my case it was not a degraded RAID. It was a RAID1 created with Anaconda, and I just tried to reinstall Fedora with a newer Beta. You can collect your own logs; just try to reproduce it.
This is still a problem with Fedora 22 TC4. I created a very specific RAID configuration in the terminal, since I have 2 SSDs and 4 HDDs with UEFI. Fedora 21 was able to see the existing RAID volumes fine and install without a problem. Fedora 22 TC4 (I tried the released Beta as well) doesn't even show the RAID volumes.

One hack I found: once I'm in the partition assignment screen, if I reassemble the RAID volumes from the terminal and rescan, anaconda will find them (a sketch of the reassembly is below). But after performing all the actions I would like to perform and saving my settings, it stops all the RAID arrays. I tried with them assembled and without them assembled, and either way I get a crash right away while anaconda is setting up the environment; the backtrace seems to indicate it happens during the formatting stage, very likely because it can't find /dev/md/xxx_0 once the array has been stopped.
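For reference, a minimal sketch of the manual reassembly meant above, run from a terminal while anaconda is open, before using its rescan option (the device names are illustrative placeholders, not my actual layout):

  # assemble every array mdadm can find by scanning member superblocks
  mdadm --assemble --scan

  # or assemble one array explicitly from its member partitions
  mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2

  # confirm the arrays came up
  cat /proc/mdstat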
I was trying to recreate the issue to add more details to this bug report, but thought maybe I should first set the homehost on the arrays so mdadm doesn't append _0 to /dev/md/xxx. I guess since the hostname is Erwin (from DHCP) and I set the homehost to erwin, the device became /dev/md/erwin:xxx instead of /dev/md/xxx, as I understand it. This seems to have done the trick, because now the installation finishes. My guess would be that anaconda doesn't like underscores in the name, or expects the colon that comes from having homehost set on the array.
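In case it helps anyone reproducing this: the homehost can be written into an existing array's superblock at assembly time. A sketch, with the array name (xxx) and member partitions as placeholders:

  # stop the array first if it is already running
  mdadm --stop /dev/md127

  # reassemble, updating the homehost recorded in the superblock
  mdadm --assemble --update=homehost --homehost=erwin /dev/md/xxx /dev/sd[abc]2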
I still needed to reassemble the arrays manually and rescan the disks before I could use them during partitioning. I'm not sure if I needed to, but I reassembled the array manually again before I began the installation as well.
Sorry for the comment spam, but one last comment: the initial grub.cfg file had the device path (/dev/md126), not the UUID, so the system failed to fully boot. Strangely, the rescue entry did have the UUID. After editing the grub.cfg file and copying the root=UUID=... line to the main kernel entry, the system booted up fine.
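To illustrate the shape of the edit (the kernel lines and UUID below are placeholders, not my actual entries; the real UUID can be taken from the rescue stanza or from "blkid /dev/md126"):

  # broken entry, as generated:
  linux16 /vmlinuz-... root=/dev/md126 ro rhgb quiet
  # fixed entry, with the root= argument copied from the rescue stanza:
  linux16 /vmlinuz-... root=UUID=<uuid-of-root-fs> ro rhgb quiet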
Since the only logs available on this bug are from live installs, I'm going to assume that this issue is the same as, or at least very similar to, the issue in bug 1219264, in that whatever should be assembling the arrays in the live environment is not doing so.

*** This bug has been marked as a duplicate of bug 1219264 ***