Bug 1212036 - Previous Fedora installation not recognized when installed on RAID
Summary: Previous Fedora installation not recognized when installed on RAID
Keywords:
Status: CLOSED DUPLICATE of bug 1219264
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 22
Hardware: All
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-15 12:53 UTC by Jaromír Cápík
Modified: 2016-02-02 20:58 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-02 20:58:16 UTC
Type: Bug
Embargoed:


Attachments
f22-beta-tc2-previous-system-recognition.jpg (100.41 KB, image/jpeg), 2015-04-15 12:53 UTC, Jaromír Cápík
logs from Fedora-Live-KDE-x86_64-22_Beta-3 (63.28 KB, application/x-bzip), 2015-04-26 14:03 UTC, Patrick C. F. Ernzer
logs from Fedora-Live-KDE-x86_64-22_Beta-3 (66.63 KB, application/x-bzip), 2015-05-10 19:47 UTC, Patrick C. F. Ernzer


Links
Red Hat Bugzilla 1219264 (CLOSED): Intel firmware RAID set does not appear in INSTALLATION DESTINATION in live installer. Last Updated: 2023-09-14 02:58:58 UTC

Internal Links: 1219264

Description Jaromír Cápík 2015-04-15 12:53:04 UTC
Created attachment 1014723 [details]
f22-beta-tc2-previous-system-recognition.jpg

Description of problem:
I tried to reinstall F22 Beta TC2 and the partitioning tool displayed all the RAID partitions as Unknown, while it had no problem recognising previous LVM volumes created directly on the physical drives. That makes it impossible to reuse old LVM volumes created on top of RAID devices.

Version-Release number of selected component (if applicable):
Fedora-Server-DVD-ppc64-22-Beta-TC2
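A quick way to see how the live environment views the disks before anaconda probes them is to compare the block device list with the MD superblocks on disk; a minimal sketch (the actual device and array names depend on the machine, not on the attached logs):

# list block devices with their type and any filesystem/RAID signatures
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT
# list the MD superblocks found on the member partitions
sudo mdadm --examine --scan
# show which arrays the live system has actually assembled
cat /proc/mdstat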

Comment 1 David Shea 2015-04-15 13:25:51 UTC
Please attach the log files from /tmp

Comment 2 Patrick C. F. Ernzer 2015-04-26 13:49:59 UTC
Also experiencing this on the F22 KDE spin Beta, x86_64 (the arch of the original bug was ppc64). Adjusting bug details; logs will be attached.

Comment 3 Patrick C. F. Ernzer 2015-04-26 14:03:12 UTC
Created attachment 1019025 [details]
logs from Fedora-Live-KDE-x86_64-22_Beta-3

1) booted from Fedora-Live-KDE-x86_64-22_Beta-3.iso
2) selected sda, sdb and sdc
   they contain
[liveuser@karhu tmp]$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md126 : active (auto-read-only) raid1 sdc1[3] sdb1[1]
      511988 blocks super 1.0 [3/2] [_UU]
      
md127 : active (auto-read-only) raid5 sda2[4] sdb2[1] sdc2[3]
      1952494592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
   although F21 had them as md0 and md1

- the RAID1 was purposely broken under F21 to test F22 Beta
- the RAID5 contains my LVM PV
- I created a new LV under F21, planning to install F22 Beta into it
3) selected manual partitioning

actual results:
LVM Logical Volumes not available as install target in anaconda

expected result:
RAIDs assembled
LVM activated
LVs available as install targets

additional info:
Before launching the installer, when booted into the Live system:
[liveuser@karhu tmp]$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md126 : active (auto-read-only) raid1 sdc1[3] sdb1[1]
      511988 blocks super 1.0 [3/2] [_UU]
      
md127 : active (auto-read-only) raid5 sda2[4] sdb2[1] sdc2[3]
      1952494592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

After the installer probed the disks (this took long enough, with no visual feedback in the GUI, that a normal user might believe nothing is happening; progress can be followed with "journalctl -f --full"):
[liveuser@karhu tmp]$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10] 
unused devices: <none>
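Even after anaconda's probe has stopped the arrays, the MD superblocks remain on disk, so the state described above can be recovered by hand from the live terminal; a rough sketch, not an officially supported workflow:

# the superblocks are still there even though /proc/mdstat is now empty
sudo mdadm --examine --scan
# reassemble everything that can be found from those superblocks
sudo mdadm --assemble --scan
# activate any LVM volume groups sitting on top of the arrays
sudo vgchange -ay
cat /proc/mdstat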

Comment 4 David Shea 2015-05-06 14:03:18 UTC
(In reply to Patrick C. F. Ernzer from comment #3)
> - the RAID1 was purposely broken under F21 to test F22 Beta

Installation to degraded RAID is not supported.

*** This bug has been marked as a duplicate of bug 129306 ***

Comment 5 Patrick C. F. Ernzer 2015-05-07 15:39:17 UTC
(In reply to David Shea from comment #4)
> (In reply to Patrick C. F. Ernzer from comment #3)
> > - the RAID1 was purposely broken under F21 to test F22 Beta
> 
> Installation to degraded RAID is not supported.

That does not explain why the RAID5 was not assembled.

Comment 6 Patrick C. F. Ernzer 2015-05-07 15:41:01 UTC
The RAID1 has been healed in the meantime.

pcfe to reproduce with a non-degraded RAID1 /boot, to show that the main issue here is that the RAID5 holding my LVM PV was not usable by anaconda

Comment 7 Patrick C. F. Ernzer 2015-05-10 19:47:31 UTC
Created attachment 1024034 [details]
logs from Fedora-Live-KDE-x86_64-22_Beta-3

Fresh logs collected, this time from an installation attempt where the RAID was not degraded.

Comment 8 Patrick C. F. Ernzer 2015-05-10 19:49:31 UTC
(In reply to David Shea from comment #4)
> (In reply to Patrick C. F. Ernzer from comment #3)
> > - the RAID1 was purposely broken under F21 to test F22 Beta
> 
> Installation to degraded RAID is not supported.

I have recreated the problem; this time the RAID1 I use for /boot was whole.

Same problem as before, my pre-existing RAID1 (/boot) and RAID5 (PV for LVM) are not recognised by anaconda.

Comment 9 Jaromír Cápík 2015-05-11 12:31:23 UTC
David, in my case it was not a degraded RAID. It was a RAID1 created with Anaconda, and I just tried to reinstall Fedora with a newer Beta. You can collect your own logs; just try to reproduce it.

Comment 10 Eric Work 2015-05-16 20:08:59 UTC
This is still a problem with Fedora 22 TC4. I created a very specific RAID configuration in the terminal, since I have 2 SSDs and 4 HDDs with UEFI. Fedora 21 was able to see the existing RAID volumes fine and install without a problem. Fedora 22 TC4 (I tried the released Beta as well) doesn't even show the RAID volumes.

One hack I found: once I'm in the partition assignment screen, if I reassemble the RAID volumes from the terminal and rescan, it will find them. However, after performing all the actions I want and saving my settings, it stops all the RAID arrays. I tried with them assembled and without them assembled, and either way I get a crash right away while anaconda sets up the environment; the backtrace seems to indicate it happens during the formatting stage. Very likely because it can't find /dev/md/xxx_0, since it has been stopped.
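For reference, the reassemble-and-rescan hack described above comes down to roughly the following in a terminal of the live session (device names are whatever mdadm finds; the rescan itself is then triggered from anaconda's storage screen, as described):

# reassemble the pre-existing arrays that the installer stopped
sudo mdadm --assemble --scan
# confirm they are back before going to the partitioning screen
cat /proc/mdstat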

Comment 11 Eric Work 2015-05-16 20:29:57 UTC
I was trying to recreate the issue to add more details to this bug report, but thought I should first try setting the homehost on the arrays so that _0 does not get appended to /dev/md/xxx. Since the hostname from DHCP is Erwin and I set the homehost to erwin, the device became /dev/md/erwin:xxx instead of /dev/md/xxx, as I understand it. This seems to have done the trick, because the installation now finishes. My guess is that anaconda either doesn't like underscores in the name or expects the colon that comes from having homehost set on the array.
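The homehost change described above can be made with mdadm at assembly time; a sketch, using 'xxx' and 'erwin' as placeholder names from the comment and placeholder member partitions:

# stop the auto-assembled array (it showed up here as /dev/md/xxx_0)
sudo mdadm --stop /dev/md/xxx_0
# reassemble while rewriting the homehost recorded in the superblock
sudo mdadm --assemble --update=homehost --homehost=erwin /dev/md/xxx /dev/sdX1 /dev/sdY1 /dev/sdZ1
# verify: the Name field should now read erwin:xxx
sudo mdadm --detail /dev/md/xxx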

Comment 12 Eric Work 2015-05-16 20:53:31 UTC
I still needed to reassemble the arrays manually and rescan the disks before I could use them during partitioning. I'm not sure whether it was necessary, but I also reassembled the arrays manually again before starting the installation.

Comment 13 Eric Work 2015-05-17 20:15:14 UTC
Sorry for the comment spam, but one last note: the initial grub.cfg had the device path (/dev/md126) rather than the UUID, so the system failed to boot fully. Strangely, the rescue entry did have the UUID. After editing grub.cfg and copying the root=UUID... line into the main kernel entry, the system booted up fine.
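The fix amounts to pointing root= at the filesystem UUID instead of the /dev/md126 path, either by editing grub.cfg by hand or by regenerating it; a sketch, with the config path depending on whether the install is BIOS or UEFI:

# look up the UUID of the root filesystem on the array
sudo blkid /dev/md126
# then change root=/dev/md126 to root=UUID=<that value> in grub.cfg,
# or regenerate the configuration instead of editing it by hand
sudo grub2-mkconfig -o /boot/grub2/grub.cfg             # BIOS install
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg    # UEFI install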

Comment 15 David Shea 2016-02-02 20:58:16 UTC
Since the only logs available on this bug are from live installs, I'm going to assume that this issue is the same as, or at least very similar to, the issue in bug 1219264, in that whatever should be assembling the arrays in the live environment is not doing so.

*** This bug has been marked as a duplicate of bug 1219264 ***

