Bug 489148 - F-10 / F11 dmraid issues master bug
Summary: F-10 / F11 dmraid issues master bug
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 11
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Hans de Goede
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 409931 467904 470543 471737 488961
Depends On: 498544
Blocks: F11Target AnacondaStorage
 
Reported: 2009-03-08 08:07 UTC by Hans de Goede
Modified: 2013-01-09 00:51 UTC
CC List: 31 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-07-14 09:02:35 UTC
Type: ---
Embargoed:


Attachments
'dmraid -ay -vvv -ddd' on FC9 system with motherboard raid (3.42 KB, text/plain)
2009-03-27 14:16 UTC, Bob Gustafson
Content of /var/log of fedora 11 beta installation (28.65 KB, application/octet-stream)
2009-04-06 10:18 UTC, Winfrid Tschiedel

Description Hans de Goede 2009-03-08 08:07:35 UTC
Description of problem:
anaconda in F-9, F-10 and F-11 alpha has trouble recognizing a variety of dmraid
arrays. Lately we've done lots of debugging of this and we believe all these issues have the same underlying cause (a bug in pyblock), except for jbod configurations, which are caused by a different pyblock bug.

Both issues are fixed in pyblock in rawhide, but the state of the installer
in rawhide in general currently makes testing this quite hard. The fixes have been tested by a variety of people, who have all confirmed that they resolve the problem. Still, I'm leaving this bug open for now until rawhide is in good enough shape to test and we can get confirmation from other reporters of the same problem.

Comment 1 Hans de Goede 2009-03-08 08:19:53 UTC
*** Bug 471737 has been marked as a duplicate of this bug. ***

Comment 2 Hans de Goede 2009-03-08 08:25:55 UTC
*** Bug 488961 has been marked as a duplicate of this bug. ***

Comment 3 Joel Andres Granados 2009-03-13 09:43:03 UTC
*** Bug 467904 has been marked as a duplicate of this bug. ***

Comment 4 Joel Andres Granados 2009-03-13 09:46:43 UTC
ATM the anaconda code is moving rather fast.  With this in mind, any issues that you encounter while using the new code (F11, rawhide) might already be fixed or might be easy to fix.  We would appreciate any dmraid issues being discussed in #anaconda (Freenode) first; if we can't easily fix them there, we will document them here.

I'm just trying to avoid this bug having a gazillion entries :)

Comment 5 Joel Andres Granados 2009-03-13 12:56:26 UTC
*** Bug 470543 has been marked as a duplicate of this bug. ***

Comment 6 Joel Andres Granados 2009-03-13 13:15:00 UTC
*** Bug 409931 has been marked as a duplicate of this bug. ***

Comment 7 Chad Roberts 2009-03-13 18:58:53 UTC
Joel - 
Thanks for the update.
I can verify the following based on some recent testing with the current FC10 release version -
 -  in the current FC10 release, Anaconda / dmraid no longer hang when detecting Intel ICH10R (and, I believe, 9R/8R) RAID disk partitions.
 -  although it doesn't hang, it still doesn't pick up the info for the "fakeRaid" partitions that are already created on the drive.

Bottom line - you're halfway there with the current FC10 release (it doesn't hang/crash any more); it would really be nice if dmraid / mdadm recognized the already-there Intel fakeRaid info and used it to determine how the disks / partitions are already set up...

I was able to set up a dual boot with Windows (using Intel Matrix FakeRaid) and Linux FC10 release on the same drives by doing the following this past weekend:
 - BIOS setting is RAID for ICH10R
 - Have 2 drives assigned for the OS - in a RAID 1 configuration
 - Windows is installed on the 2nd partition of the RAID array (has been for a while - I obviously backed up this partition before the testing below...)
 - First partition is 1G for Linux boot - 3rd partition is for the Linux / install
 - I installed FC10 doing the following - 
   - Leave the BIOS SATA ports in RAID mode (so Windows still sees the raid), boot the FC10 installer
   - Anaconda detects 2 actual HDs instead of 1 RAID drive (but does see the partitions as being exactly equal on both HDs, as would be expected since they are mirrored)
   - Set up RAID 1 using the software raid format for partitions 1 and 3 on BOTH drives in exactly the same way in anaconda
   - Assigned md0 (part 1 from both drives) to /boot, assigned md1 (part 3 from both drives) to / - both ext3 fs (the equivalent mdadm commands are sketched after this list)
   - Proceeded to install FC10
   - Note - had to install GRUB on the MBR / boot sector of md0 (partition 1) instead of on the master HD MBRs to get this to work correctly...
   - Booted to FC10 - verified using qparted that it sees 2 HDs and all partitions, and verified with mdadm that md0 / md1 are set up correctly
   - Booted to Windows - verified Windows still sees 1 RAID volume in Matrix Storage Manager, and sees 2 new partitions on that RAID volume (equivalent to md0 / md1) that are both ext3 / linux software raid
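
For illustration only, a minimal sketch of what the two software RAID1 sets described above amount to if created by hand with mdadm; the sda/sdb partition names are hypothetical - substitute the real member partitions:

   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
   mkfs.ext3 /dev/md0
   mkfs.ext3 /dev/md1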

Now the issues - 
 - As mdadm / dmraid don't recognize the ICH10R RAID array volume, I'm pretty sure that I was just lucky in that creating the 2 software raid partitions in Linux didn't overwrite my Intel fakeRaid volume info sector (towards the end of the hard drive)
   - I'll be dumping sectors from the end of the disks to find the Intel RAID info and will manually shrink the last partition in Linux on the HDs to ensure they can't overwrite this data
 - I installed the free ext2fs driver for Windows - I need to validate whether I can mount / access the Linux partitions in Windows without corrupting them (should be able to)
 - I have to set up FC10 to ignore the Windows partitions, as it sees them as 2 separate partitions and writing data to either has "undefined" results (i.e. if the exact same changes aren't made on both drives, then which data will you get when Windows reads the mirrored array?)
   - I'm going to try to set mdadm to see the Windows partitions as a RAID 1 array - I think I'll have to shrink the Win partitions so mdadm has a place to write the raid info at the end, but then Linux will "think" the partitions are in RAID1 and I can access data off them...

The point of the initial bug wasn't just to have dmraid / mdadm work in Anaconda for installing Linux - it's to be able to dual boot Windows and Linux using the same drives / Raid array setup (using Intel fakeRaid or whatever other raid card w/ fake raid you have...)

This was working in FC<8 then was broken... sounds like you guys are on the right track for making it work again - many thanks!!

I'd be happy to test out any updates you have for this going forward (time permitting of course) - just let me know.  I only have Intel ICH8/9/10R fakeRaid boards to test with in general (maybe a Promise / SiL fakeRaid card as well).

Chad

Comment 8 Joel Andres Granados 2009-03-14 11:28:22 UTC
dmraid specific test day:

Date : 18-Mar-2009
Time : 0900 UTC - 1700 UTC
place : #anaconda (Freenode)
Additional info: https://fedoraproject.org/wiki/Anaconda/testing-dmraid
Process: The idea for this event is to get as many issues as possible solved and to file bugzillas for the ones we can't easily close.  The prerequisite for all this is to have a working dmraid setup in your box.
People: jgranado, jlaska and hdegoede
wiki: http://fedoraproject.org/wiki/Anaconda/testing-dmraid

Comment 9 David Krings 2009-03-14 14:08:22 UTC
First of all, the nVidia FakeRAID is problematic as well. I wonder why this is still a problem under Fedora, because others such as OpenSuSE get it working fine using dmraid. Assuming that the code there is open source as well, it should be at least straightforward, if not easy, to get this working under Fedora.
That said, I'll see that I try it out again.

Comment 10 Panagiotis Kalogiratos 2009-03-15 03:48:36 UTC
David:
Take a moment to read what Joel has written before posting please.

 It is working in Fedora. The problems were in anaconda (python-pyblock specifically), which is the Fedora/Red Hat installer and has nothing to do with other distros. Also, the team has fixed these underlying bugs in pyblock and we have tested the fixes with success (I personally have). This event is for final tests using rawhide, because the state of anaconda in rawhide was a mess and it was not possible to have a clear picture, plus they have rewritten the partitioning code in anaconda.

Joel:
 I will be there for the event but probably during the last couple of hours due to work schedule. I do hope that now the partitioning stage can also parse /dev/mapper for existing nodes but maybe I'm asking for too much:)

Comment 11 David Krings 2009-03-15 13:32:17 UTC
Sorry if I missed that, but I just searched for the original comment in this issue and could not find it. Maybe the good news was in one of the many duplicates?

Comment 12 Panagiotis Kalogiratos 2009-03-15 17:05:20 UTC
All the duplicates were the originally reported bugs on anaconda and dmraid and yes they contain a lot of information inside. You may want to check them out. For example bug 467904 (the one I had reported) deals with nvraid 0+1 sets and the last tests were successful :)

Comment 13 Joel Andres Granados 2009-03-19 13:57:11 UTC
I would encourage everyone who was not at the test day to give F11 beta a test with their dmraid setups (remember dmraid != mdraid).  There is one outstanding issue that was fixed in pyblock but might not get tagged for the beta compose: dmraid installations will fail with raid10 sets that have predefined partitions.
To test these setups you might want to grab :
1. http://jgranado.fedorapeople.org/storage/testing/19-03-2009-1452-x86_64.img
2. http://jgranado.fedorapeople.org/storage/testing/19-03-2009-1452-i586.img
These images have the fix in them.

I'm planning to keep this bug alive until the second batch of anaconda testing.  I'm thinking of closing this master issue if we do not find any more issues that day.

Comment 14 Bob Gustafson 2009-03-19 14:56:29 UTC
Sorry I missed the big test day, but..

I have 3 separate raid systems; two are software raid, which works fine, and are running under F10 now.

The 3rd system is my central firewall / mail server / router. All the traffic from my other systems and my wife's goes through this 3rd box. This is the one with the motherboard ICH9R RAID array which does not boot under the original(?) FC10 release, nor under the first Fedora Unity respin of FC10 (as documented in my previous bug comments).

Because this is a critical machine in my setup, an upgrade to FC11 must either work perfectly, or fail in a benign way (as did the FC10 upgrade - fortunately).

My reading of Comment #7 gives me some concern. Closing out this bug before these issues are completely resolved seems a little premature.

Comment 15 Joel Andres Granados 2009-03-19 18:35:11 UTC
(In reply to comment #14)
> Sorry I missed the big test day, but..
...
> My reading of Comment #7 gives me some concern. Closing out this bug before
> these issues are completely resolved seems a little premature.  

The comments are not relevant to the current status of the installer for 2 reasons:
1. They mention F10.  We have rewritten the partitioning code.  This means that none of the previous behavior (F10 and before) is relevant.
2. It's talking about something that was already addressed previously: https://bugzilla.redhat.com/show_bug.cgi?id=471737.

It's not premature because I have yet to see a report against the new code that describes misbehavior.  Also, I'm doing over 20 installs a day with my raid10 set and have seen no dmraid related issues.

Comment 16 Bob Gustafson 2009-03-19 22:29:46 UTC
I'm happy to hear that you are having good results with RAID on your current system(s). I could ask whether these 20 installs a day are on different systems, or if any have motherboard/hardware RAID?

I only mention my concerns because I am on record as having problems with anaconda install on RAID in FC5, FC6, FC7, FC8, FC9, and FC10 (see Bug #186182, Bug #186312, Bug #188314, Bug #189971, Bug #192542, etc.)

Comment 17 David Krings 2009-03-19 23:17:21 UTC
OK, I grabbed the FC10 iso and installed without problems on a SATA FakeRAID using the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe). Install worked almost flawlessly except for some network issues. Works for me. YAY! Thanks for fixing.

Comment 18 Bob Gustafson 2009-03-20 02:35:38 UTC
(In reply to comment #17)
> OK, I grabbed the FC10 iso and installed without problems on a SATA FakeRAID
> using the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe).

Sounds like the NVIDIA controller on that board works fine. Did you try the Silicon Image controller too? You could give us tests on two different motherboard/hardware raid controllers - with the same board and disks.

How many disks did you use in your test?

Which FC10 iso did you grab? The original 25-Nov-2008 DVD?

Comment 19 Bob Gustafson 2009-03-20 02:59:25 UTC
Also, did you do an upgrade from FC9, or a virgin install of FC10?

Comment 20 Joel Andres Granados 2009-03-20 10:30:32 UTC
(In reply to comment #16)
> I'm happy to hear that you are having good results with RAID on your current
> system(s). I could ask whether these 20 installs a day are on different
> systems,
Same system.
> or if any have motherboard/hardware RAID?
BIOS RAID.
And this is why we need you.  We need your tests because you have stuff that I don't have.  But if you test with FC5, FC6, FC7, FC8, FC9, FC10 and/or F11 Alpha you are not testing what is going to end up in F11.  Please understand that F{7,8,9,10,11alpha} are no longer relevant when addressing dmraid.  So when you say "I have had problems before", I have no other option but to keep insisting that you test with F11 post-alpha.

> 
> I only mention my concerns because I am on record as having problems with
> anaconda install on RAID in FC5, FC6, FC7, FC8, FC9, and FC10 (see Bug #186182,
> Bug #186312, Bug #188314, Bug #189971, Bug #192542, etc.)
I'm sorry, but these bugs/tests are all irrelevant to the current code.

Comment 21 Joel Andres Granados 2009-03-20 10:34:33 UTC
(In reply to comment #17)
> OK, I grabbed the FC10 iso and installed without problems on a SATA FakeRAID
> using the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe). Install
> worked almost flawlessly except for some network issues. Works for me. YAY!
> Thanks for fixing.  

This is great to hear. But there is a possibility that it will not work for F11 because of the code differences between F10 and the current state of F11.  If you could grab the latest rawhide and test with that, it would be a tremendous help.  And by latest I mean a rawhide from this week :)

Comment 22 Bob Gustafson 2009-03-20 14:08:51 UTC
(In reply to comment #20)

> And this is why we need you.  We need your tests because you have stuff that I
> don't have.  But if you test with FC5, FC6, FC7, FC8, FC9, FC10 and/or F11 Alpha
> you are not testing what is going to end up in F11.  Please understand that
> F{7,8,9,10,11alpha} are no longer relevant when addressing dmraid.  So when you
> say "I have had problems before", I have no other option but to keep insisting
> that you test with F11 post-alpha.
> 

I think your current system is similar to my ICH9 motherboard raid system. The only question you haven't answered is whether you did an upgrade or a virgin install. I am interested in an upgrade.

Check out the last part of Comment #7 - Chad mentioned that the upgrade was not picking up the info from raid partitions already on the disks. This works for software only raid, but for fakeRAID/motherboard/BIOSraid, the same information may not be in the same place. This is critical for a successful upgrade. It would be a disaster if the upgraded system appeared to be running successfully, but in fact was running on only one disk in the mirror.

I have already 'tested' the previous releases of Anaconda/Raid. Of course I don't want to do those exercises again. In all cases, except for FC10 motherboard-RAID, with the help of many others on bugzilla, I have managed to upgrade my systems successfully.

I applaud your current efforts to get Anaconda/RAID working ahead of time for the FC11 release. If it were placed on a separate timetrack/project from the regular release schedule, it would be even better.

Anaconda represents Fedora's first face to the users. It is unique to Fedora. If it does not install Fedora reliably, it reflects badly on Fedora. I have been surprised that it has had problems in the past.

I agree with you that it does need an aggressive testing program on many different systems (and testing upgrade mode too). As a 'one-time-run' program for most users (with no possibility of a second try by waiting for a respun disk), it doesn't get much testing.

I priced a new homebrew system for this purpose - around $700 for a nice board with a virtual capable Intel chip, 4GB and dual 1TB drives, but at the moment I just don't have the time to spare to put it together.

Comment 23 David Krings 2009-03-20 23:58:30 UTC
What I did with the nVidia system is a fresh install over OpenSuSE. I recall that the SiL controller and the one drive attached to it were detected fine, but I do not have disks to connect and try out RAID on the SiL. Same for my PCI-X SiL 3124. Sorry.

I had the FC10 system down for a few days and upon start I now get only the grub prompt. I guess a kernel update put things out of reach? Grub is really stupid in a way, although the real blame goes to the BIOS, I guess. *sigh*
I downloaded 'an' alpha of FC11 and, since the system is broken anyway, I tried it out. How do I know if it is a recent alpha build? It seemed to be not from this week, but from February (the DVD iso torrent from the website). In any case, it didn't work. It told me "Could not stat device/mapper/nvidia_ebdgeefb - No such file or directory." It did see the two individual disks and the one on the SiL.
I'll dig around to see how to obtain "a rawhide from this week" and report back.

Comment 24 Bob Gustafson 2009-03-21 01:12:01 UTC
If it just says GRUB at the top left of the screen, you have probably whacked the master boot record.

Just rewrite the MBR with the install disk in rescue mode - it happened to me recently and it is not the end of the world. But be careful - and write the MBR onto both disks if you have a mirror RAID (roughly as in the sketch below).
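
A minimal sketch of doing that from the rescue shell with legacy GRUB, assuming /boot is the first partition and the two mirror members show up as /dev/sda and /dev/sdb (hypothetical names - adjust to your layout):

   grub --device-map=/dev/null
   grub> device (hd0) /dev/sda
   grub> root (hd0,0)
   grub> setup (hd0)
   grub> device (hd0) /dev/sdb
   grub> root (hd0,0)
   grub> setup (hd0)
   grub> quit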

Comment 25 Bob Gustafson 2009-03-21 15:28:25 UTC
It seems as though the latest Anaconda is not being tested.

See fedora-test-list entry below

"Clyde E. Kunkel" <clydekunkel7734>
For testers of Fedora Core development releases <fedora-test-list>
Subject: 	Does daily install image contain newest anaconda?
Date: 	Sat, 21 Mar 2009 10:35:06 -0400 (09:35 CDT)

I ask because the mirrors I looked at have the latest anaconda (11.5.0.35-1) in the package dir and the install.img file has the current date, but on trying to install using the boot.iso cd and askmethod, anaconda reports that it is version -33. This has occurred with both the 20090320 (anaconda -34) and 20090321 rawhides.

Comment 26 Alexandru Constantin Minoiu 2009-03-27 00:54:57 UTC
Hi,

I have tested F11 rawhide with Anaconda version 11.5.0.38 and
it only shows one RAID volume. I have 2 volumes using the same 2 HDDs.
The first one is RAID 0 with a small stripe size (16k I think) and the
second one is also RAID 0 with a bigger stripe size. In Anaconda only
the second one shows up.
I ran dmraid -ay after anaconda started up and was at the first setup
screen (so before the disk configuration), and it activates all Intel
raid volumes and the partitions it finds, in /dev/mapper. After switching back
to the Anaconda graphical setup, when it gets to partition setup it only shows
the second RAID volume, although I had previously activated all volumes
with dmraid.

I hope this helps in solving the F11 dmraid Anaconda problem.

Comment 27 Hans de Goede 2009-03-27 07:38:56 UTC
(In reply to comment #26)
> Hi,
> 
> I have tested F11 rawhide with Anaconda version 11.5.0.38 and
> it only shows one RAID volume. I have 2 volumes using the same 2 HDDs.
> The first one is RAID 0 with a small stripe size (16k I think) and the
> second one is also RAID 0 with a bigger stripe size. In Anaconda only
> the second one shows up.

Alexandru, thanks for reporting this. This bug is mainly meant as a place
to dup all the pre-existing (F-10) dmraid bugs, which we believe we've all fixed.

Your bug is a new issue; can you please open a new bug for it? And in that new
bug, can you please tell us what kind of dmraid you are using
(intel / nvidia / ....)? If you don't know what type you've got, please
include the output of "ls /dev/mapper" after doing "dmraid -a y".

Thanks & Regards,

Hans

Comment 28 Joel Andres Granados 2009-03-27 09:28:52 UTC
Executing `dmraid -ay -vvv -ddd` gives us useful info too.
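
Putting comments 27 and 28 together, the requested diagnostics would look roughly like this when run as root (the output file name is arbitrary):

   dmraid -ay
   ls /dev/mapper
   dmraid -ay -vvv -ddd > dmraid-debug.txt 2>&1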

Comment 29 Bob Gustafson 2009-03-27 14:16:29 UTC
Created attachment 337014 [details]
'dmraid -ay -vvv -ddd' on FC9 system with motherboard raid

This output is from an example of a successful motherboard/raid upgraded system (was upgraded from FC8 to FC9..)

Comment 30 Alexandru Constantin Minoiu 2009-03-27 15:19:20 UTC
Hi,
I've created a new bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=492584
It contains the output of dmraid -ay -vvv -ddd and the listing of /dev/mapper
My RAID is the onboard ICH9R.

Thank you for the quick replies!

Comment 31 David Krings 2009-03-29 00:43:33 UTC
After figuring out how to get to the FC11 rawhide image (another d'oh moment) I gave the one from March 26th a try. It uses Anaconda 11.5.0.38 and detected the SATA FakeRAID on the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe) just fine. I didn't go ahead with the installation, but the point here is that it either detects the FakeRAID or it doesn't. Is there any interest for fixing this bug to go ahead with the full installation?

While browsing the comments here another question I can answer:
How many drives do you use?
I use two drives in one mirror RAID, dual-booting XP 64 bit (same as Server 2003) and Linux (currently OpenSuSE; when FC11 is released I'll switch).
Since the Windope partition is a "production" platform I won't connect the drives to the SiI controller, as that would destroy the RAID array. Sorry, won't do that.

OT: I mentioned a boot problem earlier, which indeed is just a grub issue. Grub decided not to write the MBR on the FakeRAID drive, but on the non-RAID drive on the SiI controller, although I explicitly specified otherwise. It applies to both FC10 and OpenSuSE => I will complain to the grub folks and have switched the boot order in the BIOS for now. I am sure it will work fine when I take the non-RAID drive out of the equation.

Comment 32 Bob Gustafson 2009-03-29 02:01:50 UTC
(In reply to comment #31)

> OT: I mentioned a boot problem earlier, which indeed is just a grub issue. Grub
> decided not to write the MBR on the FakeRAID drive, but on the non-RAID drive
> on the SiI controller, although I explicitly specified otherwise. It applies to
> both FC10 and OpenSuSE => I will complain to the grub folks and have switched
> the boot order in the BIOS for now. I am sure it will work fine when I take the
> non-RAID drive out of the equation.

The MBR should probably be written on both drives if it is a RAID1.

Comment 33 Phil 2009-03-29 21:26:19 UTC
1. Bob, he probably has the same issue as I do, where my mobo can be told to boot from internal drives first when there is a disk plugged in via eSATA, but as soon as you reconnect or plug another disk in (eSATA) it resets the boot order to external disks first.

2. Apologies for not being able to try the alpha, but I've not been in a position to blow away my machine - I plan on using the beta that's released tomorrow ;).

3. It seems that you guys have nailed the bug for single array setups but how's the stats for people like myself who have 2 arrays in the one system (raid0 for system and jbod or mirror for data disks)?

4. I had major issues with grub being able to boot directly to fakeraid0 - it seemed OK with raid1 and jbod.  This should probably go into another bug, but I have a relatively quick workaround for anyone with this same issue:
   # cfdisk /dev/mapper/nvidia_abcdefgh (get Cylinder-Head-Sector info)
   # grub --device-map=/dev/null
   # device (hd0) /dev/mapper/nvidia-abcdefgh
   # geometry (hd0) C H S
   # root (hd0,3)
   # setup (hd0,3)
   # setup (hd0)

Comment 34 Hans de Goede 2009-03-30 08:24:48 UTC
(In reply to comment #31)
> After figuring out how to get to the FC11 rawhide image (another d'oh moment) I
> gave the one from March 26th a try. It uses Anaconda 11.5.0.38 and detected the
> SATA FakeRAID on the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe)
> just fine. I didn't go ahead with the installation, but the point here is that
> it either detects the FakeRAID or it doesn't. Is there any interest for fixing
> this bug to go ahead with the full installation?
> 

I do not understand what you are trying to say / ask here. You just said it detects the raid array and then you say: "it either detects the FakeRAID or it doesn't", which is pretty much correct; we used to have a number of bugs
causing us to not detect certain raid arrays, and those are fixed now. At least I assume
that you mean that *older* versions did not detect the array when you say:
"it either detects the FakeRAID or it doesn't".

I guess what you mean with "Is there any interest for fixing
this bug to go ahead with the full installation?"

is whether it is useful for testing purposes to do a full install. Yes it is; there
is more to fakeraid support than just finding the set, such as finding it
again from the initrd when the installed system boots (assuming you put your / on the raid set).

Comment 35 Hans de Goede 2009-03-30 08:27:14 UTC
(In reply to comment #33)
> 3. It seems that you guys have nailed the bug for single array setups but how's
> the stats for people like myself who have 2 arrays in the one system (raid0 for
> system and jbod or mirror for data disks)?
> 

That should work too, with the exception of having multiple sets using the same disks, something which isw can do. If you have 2 disks and then 2 mirrored sets, each using part of the 2 disks, that will not work *in the beta*. In the meantime we've fixed this, but we do not yet have a build out which contains the fix.

Using multiple sets which each use their own set of disks should work fine.


> 4. I had major issues with grub being able to boot directly to fakeraid0 - it
> seemed OK with raid1 and jbod.  This should probably go into another bug, but I
> have a relatively quick workaround for anyone with this same issue:
>    # cfdisk /dev/mapper/nvidia_abcdefgh (get Cylinder-Head-Sector info)
>    # grub --device-map=/dev/null
>    # device (hd0) /dev/mapper/nvidia-abcdefgh
>    # geometry (hd0) C H S
>    # root (hd0,3)
>    # setup (hd0,3)
>    # setup (hd0)  

Weird, this should work fine from the beta installer, if not please let us know.

Comment 36 Phil 2009-04-03 11:21:06 UTC
I think my grub issue was just a throwback from my previous F8/F9 installs/upgrades; it all works fine now, though.

Given the resolution to bug #493293 and to me currently waiting for the F11 Beta rpms to finish installing I'm happy to sign off on this bug as being fixed with anaconda 11.5.0.39 and matching pyblock updates.

My thanks to all involved for working through this and I've now got more ammunition against some co-workers who are ubuntu die hards :p.  seriously though, thanks a million people.

Regards,

Phil

Comment 37 Dimi Paun 2009-04-03 14:19:00 UTC
I've just finished installing F11 Beta, and this part of the installation worked well. AFAIAC, this bug is fixed -- thanks!

Comment 38 Phil 2009-04-04 00:53:48 UTC
It seems I spoke too quickly.

issue summary:
the new jbod array isn't initialised on boot, potentially due to previous mdraid info on the discs.

How to replicate on an nvidia 680i system:
* 2*discs in raid 0 (/dev/mapper/nvidia_abcdefgh)
* 2*discs as separate discs (sdc and sdd)
* fdisk sdc and sdd with the following:
  sdc: /md0
       /data1
       swap
  sdd: /md0
       /data2
       swap
  md0: (raid1) LABEL=/boot
  (ensure all partitions are appropriately formatted)
* add sdc and sdd as raid discs through the BIOS
* through the MediaShield F10 setup screen, create a JBOD with sdc and sdd
* boot the system from the latest boot.iso (anaconda >11.5.0.39)
* after anaconda has scanned the discs, accept the "do you want to initialise /dev/mapper/nvidia_hgfedcba" prompt (for the new jbod array)
* onto the original raid0 set put /boot, / and swap (I put / and swap into LVM) and create a partition for the Windows install (I made a 100GB swap partition because there was no NTFS format option and I couldn't leave it "unformatted")
* onto the new jbod create a massive ext3 partition and some swap space (I used ext3 because I need Windows to read it with the ext2ifs driver and am unsure about ext4 functionality)
* install the system with default packages and boot your system

Hopefully you get the same issue as me, where the jbod set was not initialised because sdd was originally part of an mdraid set. Despite anaconda seeming to format the entire disc during installation, the partitions on sdd remained intact.  I'm guessing from this that the physical data no longer exists, but the MBR of the disc still says it does, and because of how early mdraid arrays are rebuilt during boot it's interfering. This is purely speculation though :p

I hope you have the same problems when you try this :)

Comment 39 David Krings 2009-04-04 19:21:27 UTC
Sorry for the late response, but I went ahead and used the FC11 beta to do the installation test. While it again detected the partitions nicely, the installer crashed after selecting a partition to be formatted as ext3 and mounted as /boot. Rather than losing the info, I submitted a new bug using the bug reporter (the network card now works fine). See bug 494124.
Sorry folks, it doesn't work for me on SATA FakeRAID on the nVidia nForce 590 (MCP55) chipset (Asus M2N32-SLI Deluxe).

Comment 40 Hans de Goede 2009-04-05 09:44:31 UTC
(In reply to comment #38)
> It seems I spoke too quickly.
> 
> issue summary:
> the new jbod array isn't initialised on boot, potentially due to previous mdraid
> info on the discs.
> 

<snip>

Thanks for the detailed bug report. Here is what I believe has happened:
sdc + sdd both had a partition which was part of an mdraid set.

Normally, if you had chosen to re-cycle the 2 mdraid partitions in
anaconda, we would have wiped the mdraid metadata. Instead you made
the 2 disks part of a dmraid set using the BIOS setup, and the BIOS then
wiped the partition table clean. Anaconda sees a wiped partition table,
asks whether or not to initialize it, and then we create an empty partition table.
Anaconda does not do any wiping of the mdraid metadata, because we do not see
it: we check for things like lvm / mdraid metadata on the basis of
the partition table, which says there are no partitions, so we do not
check for any metadata.

Then we create and format new partitions as asked; however, creating
a filesystem only touches certain parts of the disk (where the
inode tables, journal, etc. will live), so we happen not to write over
the part where the mdraid metadata lives.

The system reboots, scans for mdraid metadata and somehow finds the mdraid
metadata (perhaps some of your new partitions live in the same location
as the old ones?).

Fixing this is going to be pretty hard, if not damn near impossible. Nothing
short of doing a full wipe of the disks (slow) is going to get rid of all
metadata. Long story short, by letting the BIOS wipe the partition table before
first manually removing the lvm and / or mdraid metadata, you sort of shot yourself
in the foot. Or you could call this a BIOS bug, as it did not properly
wipe the disks before making a raid array out of them.
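
For anyone hitting the same thing, a minimal sketch of the manual cleanup described above, run before letting the BIOS create the new set; the partition names are hypothetical - substitute the old mdraid member partitions, and back up first:

   mdadm --examine /dev/sdc1 /dev/sdd1
   mdadm --zero-superblock /dev/sdc1 /dev/sdd1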

Anyways please file a new bug including all the excellent details from comment #38 and my analysis of this, thanks.

Comment 41 Phil 2009-04-06 01:19:31 UTC
(In reply to comment #40)
> Anyways please file a new bug including all the excellent details from comment
> #38 and my analysis of this, thanks.  

I've opened bug #494254 for this.

Comment 42 Winfrid Tschiedel 2009-04-06 09:28:28 UTC
My experience with the new anaconda is not very positive -
I tried to install Fedora 11 beta (x86_64) from DVD on different platforms,
but I never succeeded - unfortunately, reporting the error
also does not work (access to bugzilla, save via scp or to local disk -
no device found).

After this I rebooted my system with rhel 5.3 and tried to access the 
disk on which I installed fedora ( fedora 11 beta should be on /dev/sdb )

[root@rx220a ~]# dmraid -b
/dev/sda:    312581808 total, "4MT0G97A"
/dev/sdb:    156301488 total, "5JVEY7CL"
[root@rx220a ~]# dmraid -r
/dev/sda: pdc, "pdc_bjfeeibeeb", stripe, ok, 312368928 sectors, data@ 0
/dev/sdb: pdc, "pdc_cbecidbgbj", stripe, ok, 156118928 sectors, data@ 0
[root@rx220a ~]# dmraid -s -v -v -v
WARN: locking /var/lock/dmraid/.lock
NOTICE: skipping removable device /dev/hda
NOTICE: /dev/sda: asr     discovering
NOTICE: /dev/sda: ddf1    discovering
NOTICE: /dev/sda: hpt37x  discovering
NOTICE: /dev/sda: hpt45x  discovering
NOTICE: /dev/sda: isw     discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi     discovering
NOTICE: /dev/sda: nvidia  discovering
NOTICE: /dev/sda: pdc     discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil     discovering
NOTICE: /dev/sda: via     discovering
NOTICE: /dev/sdb: asr     discovering
NOTICE: /dev/sdb: ddf1    discovering
NOTICE: /dev/sdb: hpt37x  discovering
NOTICE: /dev/sdb: hpt45x  discovering
NOTICE: /dev/sdb: isw     discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi     discovering
NOTICE: /dev/sdb: nvidia  discovering
NOTICE: /dev/sdb: pdc     discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil     discovering
NOTICE: /dev/sdb: via     discovering
NOTICE: added /dev/sda to RAID set "pdc_bjfeeibeeb"
NOTICE: added /dev/sdb to RAID set "pdc_cbecidbgbj"
*** Active Set
name   : pdc_bjfeeibeeb
size   : 312368896
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 1
spares : 0
*** Active Set
name   : pdc_cbecidbgbj
size   : 156118912
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 1
spares : 0
WARN: unlocking /var/lock/dmraid/.lock
[root@rx220a ~]# 

But I cannot access any partition on /dev/sdb (parted shows
3 partitions on that disk):

[root@rx220a ~]# ls /dev/mapper
control            pdc_bjfeeibeebp11  pdc_bjfeeibeebp6  pdc_cbecidbgbj
pdc_bjfeeibeeb     pdc_bjfeeibeebp2   pdc_bjfeeibeebp7
pdc_bjfeeibeebp1   pdc_bjfeeibeebp3   pdc_bjfeeibeebp8
pdc_bjfeeibeebp10  pdc_bjfeeibeebp5   pdc_bjfeeibeebp9
[root@rx220a ~]# parted /dev/mapper/pdc_cbecidbgbj
GNU Parted 1.8.1
Using /dev/mapper/pdc_cbecidbgbj
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p

Model: Linux device-mapper (dm)
Disk /dev/mapper/pdc_cbecidbgbj: 79.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  210MB   210MB   primary  ext3         boot
 2      210MB   4505MB  4295MB  primary  linux-swap
 3      4505MB  26.0GB  21.5GB  primary  ext3

Comment 43 Winfrid Tschiedel 2009-04-06 10:18:58 UTC
Created attachment 338311 [details]
Content of /var/log of fedora 11 beta installation

The attached data belongs to comment #42.

Winfrid

Comment 44 Joel Andres Granados 2009-04-07 17:26:30 UTC
Winfrid:
I see the partitions were detected by anaconda in the storage.log that you sent.  Can you please be more specific?  You say your installations were not successful, but you did not specify the reason.  Where exactly did it fail, and why do you think it is related to the dmraid detection?  A lot of work has been done since beta.  Can you please test rawhide and see if you can reproduce your issue?
Additionally, if parted can see the partitions, then anaconda most probably will be able to see them as well, as we use libparted indirectly.

Comment 45 Jeffrey R. Evans 2009-04-08 00:39:44 UTC
I tested this using the F11 RAWHIDE 64 bit edition.   I get an error indicating unable to initialize storage device.

Gigabyte P45-DS3R w/Intel ICH10R


Note that F9 Works properly on this system with the existing RAID0 on 2 WDC5000AAKS drives.

Comment 46 Jeffrey R. Evans 2009-04-08 00:49:52 UTC
For comment #45 EXACT error message is:

"Filesystem error detected, cannot continue."  


This message is in a modal window with an International "Do not enter" sign and a button labeled "ok".

Comment 47 Joel Andres Granados 2009-04-08 08:51:02 UTC
Jeffrey:
Thanks for the info, but this actually happens after we detect the dmraid set.  It occurs when we can't mount the filesystem on top of the device.  A patch for this went in yesterday and anaconda was built at the end of the day yesterday.  Can you please test with a new rawhide (making sure that you have anaconda version anaconda-11.5.0.41-1)?  If your bug persists, please open another bugzilla to track it, being careful to check whether the bug already exists.

Comment 48 Jeffrey R. Evans 2009-04-08 22:31:39 UTC
I'll test with today's bits and provide my results.

Comment 49 Jeffrey R. Evans 2009-04-09 00:43:17 UTC
This is the Link that I used to download the Rawhide bits updated 07APR2009 at 05:31.

http://mirror.unl.edu/fedora/linux/development/x86_64/os/images/boot.iso

That build comes with Anaconda 11.5.0.40

Please advise if I should be testing with a different build and where that build may be downloaded from via URL.

Comment 50 Hans de Goede 2009-04-09 06:42:43 UTC
As Joel has stated, you need at least 11.5.0.41 for the "Filesystem error detected, cannot continue." fix. So try again with today's rawhide boot.iso, thanks!

Comment 51 Jeffrey R. Evans 2009-04-09 12:22:22 UTC
Dear Joel, Hans and my other distinguished Friends:

If any one of you could kindly paste a uniform resource locator to the boot.iso or some other .iso file that contains Anaconda 11.5.0.41 or later, I would appreciate it.  I might even appreciate it enough to test it.  So far, when I check the public mirrors for RAWHIDE F11 x86_64, there are no .iso files more current than the one I have already documented in comment #49.


Good luck Guys!

Comment 52 Phil 2009-04-09 21:30:00 UTC
The List:
http://mirrors.fedoraproject.org/publiclist/Fedora/11-Beta/

The Repo:
http://mirrors.nl.eu.kernel.org/fedora/development/x86_64/os/

I sometimes have the same issue with my ISP, in that its mirror is a day or 2 out of sync with the master.  I've found kernel.org to be very close with their repo updates.  Good luck mate :)

Comment 53 Jeffrey R. Evans 2009-04-09 22:57:24 UTC
Thank you very Much Phil.  I appreciate it.  Downloading now.  I will test and provide results ASAP.  I did write a bug related to the software distribution issue. https://bugzilla.redhat.com/show_bug.cgi?id=495048

Comment 54 Jeffrey R. Evans 2009-04-09 23:32:55 UTC
Tested, Fails.  See new bug https://bugzilla.redhat.com/show_bug.cgi?id=495156.

Comment 55 David Krings 2009-04-13 00:01:50 UTC
Tried it again with rawhide boot.iso from April 12th, but didn't even get anywhere close because of bug 495424. Will try again after 495424 is fixed.

Comment 56 LukasHetzi 2009-05-01 00:02:35 UTC
Intel ICH9R RAID still doesn't work with dmraid :-(
[root@localhost ~]# dmraid -ay
ERROR: device-mapper target type "raid45" is not in the kernel
RAID set "isw_deaadhfccd_Volume" was not activated

Bug report: https://bugzilla.redhat.com/show_bug.cgi?id=498544

Comment 57 Jeffrey R. Evans 2009-05-18 12:13:28 UTC
Tested with the rawhide boot.iso from 17May2009.   Failed.

Comment 58 Rehan Khan 2009-06-05 17:23:39 UTC
A report from the latest boot.iso I could find (from the above link - dated 15/05/2009). The setup is a Biostar Nforce2 motherboard with a SIL 3124 4port Raid Card pci card with 2 250Gb hard drives attached. The drives are configured with one 'legacy (bootable)' 100GB mirrored drive and a 132Gb mirrored drive (this is using the language used in the Sil Windows utility). The 100Gb mirrored drive has a 30Gb primary partition with Windows XP installed.

For comparison I booted the F10 release dvd and it did not see either of the pre-configured fakeraid mirror drives on the SIL card. I then booted from the May boot.iso and got the same result. Both boots saw the two physical drives on the SIL card and the windows partition on both drives but no SIL mirroring configuration.

I would be happy to provide any other information if I can get some guidance.

Cheers

Comment 59 Hans de Goede 2009-06-06 10:28:05 UTC
(In reply to comment #58)
> A report from the latest boot.iso I could find (from the above link - dated
> 15/05/2009). The setup is a Biostar Nforce2 motherboard with a SIL 3124 4port
> Raid Card pci card with 2 250Gb hard drives attached. The drives are configured
> with one 'legacy (bootable)' 100GB mirrored drive and a 132Gb mirrored drive
> (this is using the language used in the Sil Windows utility). The 100Gb
> mirrored drive has a 30Gb primary partition with Windows XP installed.
> 
> For comparison I booted the F10 release dvd and it did not see either of the
> pre-configured fakeraid mirror drives on the SIL card. I then booted from the
> May boot.iso and got the same result. Both boots saw the two physical drives on
> the SIL card and the windows partition on both drives but no SIL mirroring
> configuration.
> 
> I would be happy to provide any other information if I can get some guidance.
>

Can you run that boot.iso again, start the installer and then, when at the partitioning screen, switch to the shell on tty2 (ctrl+alt+f2) and run:
dmraid -a y -vvv > log

Can you then please file a new bug for this and attach the resulting log file, /tmp/anaconda.log and /tmp/storage.log to that new bug?
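
For illustration, assuming a USB stick is available to carry the logs off the machine (the device name and mount point below are hypothetical), the whole sequence on tty2 would look roughly like:

   dmraid -a y -vvv > /tmp/dmraid.log 2>&1
   mkdir -p /mnt/usbkey
   mount /dev/sdX1 /mnt/usbkey
   cp /tmp/dmraid.log /tmp/anaconda.log /tmp/storage.log /mnt/usbkey/
   umount /mnt/usbkey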

Thanks.

Comment 60 Rehan Khan 2009-06-06 22:37:58 UTC
Thanks Hans, I created bug 504428 for the above report and I also created 504429 for another motherboard using the nforce4 mediashield fake raid. Please let me know if I can provide any further information.

cheers
Rehan

Comment 61 Bug Zapper 2009-06-09 12:00:56 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 62 fade 2009-06-17 21:56:09 UTC
Hello, I have four disk drives and anaconda could not deal with any drive which had a logical partition.

I also have an ICH10R, but I've only played with the raid feature and then destroyed it. With or without the ICH10R, anaconda crashed. The only way was to remove all logical partitions.

Comment 63 Joe Christy 2009-07-12 17:07:35 UTC
I, too, am having dmraid issues.

My laptop is a ThinkPad W700 with an Intel Corporation ICH9M/M-E SATA AHCI Controller, which came from the factory set to RAID1 (mirroring) mode in the BIOS.

I installed Fedora 10 from the x86_64 DVD distribution disc, which failed to recognize the controller. After wiping the discs, re-trying, and then trying the BIOS "Compatibility" mode, I set the controller to AHCI in the BIOS, which got it recognized. I then proceeded to set up a 250MB /boot partition mirrored across one disc and the other, created a single large md RAID 1 volume with the rest of the space, layered a single LVM physical volume over it and partitioned it with several logical volumes.

Now I want to do a fresh install of F11 - I have backups of everything and want to wipe the discs again and use ext4 partitions.

Sadly, when I try to do an install from the F11 x86_64 DVD distribution disc, anaconda fails after getting confused about dmraid info from the controller (still in AHCI mode). Cf. bug 510772: https://bugzilla.redhat.com/show_bug.cgi?id=510772 .

Is there any way to install F11 with any sort of RAID? anaconda seems to be over-thinking about my HDDs.

Comment 64 Hans de Goede 2009-07-14 09:02:35 UTC
(In reply to comment #63)
> I, too, am having dmraid issues.
> 

Hi, I've commented on your issue in bug 510772. As for this bug: all the dmraid issues
which this bug was meant to track are fixed now, so I'm closing it. I'm not saying there aren't any issues left, but please file new bugs for those.

Comment 65 Emil Volcheck 2009-10-14 05:06:34 UTC
I just attempted to upgrade to Fedora 11.  The problem is worse than
with Fedora 10, cf. my previous comment https://bugzilla.redhat.com/show_bug.cgi?id=409931#c42 .  I tried to upgrade from DVD first without,
then with the linux=nodmraid kernel option.  In both cases, anaconda
failed to recognize my /dev/md0 partition.  For the first time,
I can't upgrade at all!  Even with Fedora 10, nodmraid worked,
so I'm at a loss for what to do now.  I regret that I am unable to
follow the extensive discussion above.  I have not put this
comment under 510772, because anaconda is not crashing.  It simply
does not see /dev/md0 and wants to treat it as a fresh install,
seemingly similar to the bugs that this master bug was made for.

This problem has gone on for so long that I am trying to use
the voting system.  If anybody else is having this issue,
I encourage you to vote this up as well!

If there's another trick I should try, I'd appreciate pointers.

Thanks,

Emil Volcheck
volcheck

Comment 66 Hans de Goede 2009-10-14 06:58:42 UTC
(In reply to comment #65)
> I just attempted to upgrade to Fedora 11.  The problem is worse than
> with Fedora 10, cf. my previous comment
> https://bugzilla.redhat.com/show_bug.cgi?id=409931#c42 .  I tried to upgrade
> from DVD first without,
> then with the linux=nodmraid kernel option.  In both cases, anaconda
> failed to recognize my /dev/md0 partition.  For the first time,
> I can't upgrade at all!  Even with Fedora 10, nodmraid worked,
> so I'm at a loss for what to do now.  I regret that I am unable to
> follow the extensive discussion above.  I have not put this
> comment under 510772, because anaconda is not crashing.  It simply
> does not see /dev/md0 and wants to treat it as a fresh install,
> seemingly similar to the bugs that this master bug was made for.
> 
> This problem has gone on for so long that I am trying to use
> the voting system.  If anybody else is having this issue,
> I encourage you to vote this up as well!
> 
> If there's another trick I should try, I'd appreciate pointers.
> 

I'm not sure what exactly your setup is, but it sounds like F-10 and F-11 are actually doing the right thing, and F-9 had a bug. You probably have 2 disks which are marked as RAID inside your BIOS; in that case you should not use mdraid. Anaconda should see your disks as one raid set (as configured in the BIOS), and you should use that to install onto. This will require a clean re-install.

Alternatively, you could remove the BIOS RAID metadata from your disks using
"dmraid -x" (back up first!). But this will cause issues if you are also using
another operating system (such as Windows) which is actually using the BIOS RAID.

Comment 67 Hans de Goede 2009-10-14 07:11:19 UTC
Following up on comment #66: if "dmraid -x" does not work, you can also try:
dmraid -rE

Comment 68 Emil Volcheck 2009-10-14 13:35:52 UTC
Reply to comment #65:

Dear Mr. De Goede,

I have Software RAID created under Fedora 4 (or maybe 5).
I've been reporting this bug every release since then.
I apologize if I sound weary.  Using the nodmraid option has been
a workaround, but this is the first Fedora release for which it
does not work.  I think the RAID is not at the BIOS level.  Is there
some command I could run to provide you with more specific information
about my situation?

Thanks,

--Emil

Comment 69 Hans de Goede 2009-10-14 13:47:39 UTC
(In reply to comment #68)
> Reply to comment #65:
> 
> Dear Mr. De Goede,
> 
> I have Software RAID created under Fedora 4 (or maybe 5).
> I've been reporting this bug every release since then.
> I apologize if I sound weary.  Using the nodmraid option has been
> a workaround, but this is the first Fedora release for which it
> does not work.  I think the RAID is not at the BIOS level.  Is there
> some command I could run to provide you with more specific information
> about my situation?
> 

nomdraid is fixed in F-12, so you could wait for the F-12 beta. Another option,
really fixing this once and for all, would be to run:
dmraid -rE

Which will remove stale BIOS RAID metadata from your disks, which I believe
is the real problem. This should be safe to do, but better make backups first!

Comment 70 Bob Gustafson 2009-10-14 17:44:18 UTC
I am running F9 on a system with BIOS hardware RAID (ICH9..) and two disks configured as RAID1.

Hopefully F12 will do the trick, although the 'fix' has been heralded on previous versions..

How much testing is being done with the fixed F12?

Is it possible to get a full cheat sheet on how to proceed, given various starting positions:

1) No BIOS raid, software raid F9,F10,F11

1a) Update? or Full wipe necessary?

2) BIOS RAID, F9

2a) Update? (ha, ha, ha).. Only full wipe install is reasonable, yes

Comment 71 Emil Volcheck 2009-10-14 18:00:28 UTC
Reply to comment #69:

Mr. De Goede,

Is that "nomdraid" or "nodmraid"?  I was using the latter.

I'm conservative about what I do to my installation.  I don't
grasp why Fedora software RAID would be affected by BIOS metadata
since recognizing the two partitions and combining them to
/dev/md0 happens after the BIOS phase of start-up, so I would
like to get some pointers explaining why "dmraid -rE" would
make a difference.  I'm trying to avoid having to wipe everything
and start over from scratch.

If Fedora 12 does not fix this bug, would you agree to
reopen this nodmraid master bug?

--Emil

Comment 72 Hans de Goede 2009-10-14 18:44:58 UTC
There are a number of questions people need to answer here:
1) Is there BIOS RAID metadata on the disks?
2) Are you actually using BIOS RAID (iow, is the option ROM
   handling BIOS RAID on your motherboard enabled)?
3) If 1 and 2 are yes: was your existing installation done at a time when
   Fedora correctly identified your BIOS RAID set, and did you install
   to the set? Or did Fedora see 2 separate disks (which it should not)
   and did you set up software raid on that (also known as mdraid)?

1) Yes, 2) No -> remove the metadata (use dmraid -rE); this should be safe,
   but back up first.

1) No, 2) * -> then you are not using BIOS RAID; the option ROM won't do
   anything until you create a set in it.

1) Yes, 2) Yes -> see 3): if you did an install to the RAID set, and not some
   hack where you defined a soft raid set yourself, you can update. If you
   did the do-softraid-yourself thing (because anaconda had a bug and
   did not recognize the set) you will need to re-install. If anaconda
   still does not recognize the set, file a bug, mail me, whatever -
   anaconda should see the set and not the separate disks.

And yes, I mean nodmraid, not nomdraid, sorry. Note that with F-12 nodmraid is
an alternate, safer solution to "1) Yes, 2) No", and can also be used if under 3) you did the do-softraid-yourself thing, to continue with your existing setup (you should be able to update using nodmraid then).
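
For illustration, question 1 (and the mdraid part of question 3) can usually be answered from a rescue shell or a running system with commands along these lines; back up before erasing anything:

   dmraid -r                (lists disks carrying BIOS RAID metadata - question 1)
   mdadm --examine --scan   (lists any mdraid / software RAID superblocks found on the disks)
   dmraid -rE               (erases stale BIOS RAID metadata - only for the "1) Yes, 2) No" case)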

Comment 73 Bob Gustafson 2009-10-28 10:09:16 UTC
Congratulations. I installed the Fedora-12-BETA and it went more or less flawlessly.

My system is brand new, no previous files on the disks, an ASUS P5Q Pro Turbo board which has a ICH10R on board RAID handler with two 500G SATA disks.

At the initial boot, anaconda found my BIOS configured RAID system, so I knew things would go well.

I noticed that the plain vanilla install wanted to set up a 200MB /boot partition and a 9G swap partition. I read somewhere in the latest install instructions that 500MB is now recommended for /boot, and I thought swap should be 2x memory, so I went into the custom partitioning, deleted what anaconda had initially done and created a 500M /boot. Then I created an LVM out of almost all of the rest and, within that, a 16G swap and a / partition. (Maybe the recommended 9G would be enough for swap, but the board will go to 16G.. eventually, perhaps.)

After the boot up (I chose server configuration), I did a System->Software Update to get all of the latest fixes. Only one dependency problem, so I just deselected that package from the GUI - then it updated successfully.

So far it is running fine:

[root@hoho6 user1]# uptime
 05:03:00 up 2 days,  4:41,  3 users,  load average: 0.00, 0.01, 0.00
[root@hoho6 user1]# 

I enabled KVM virtual and did an initial try at installing F12 as a virtual machine, but I guess the BETA disk doesn't yet have a /images/boot.iso file. Still learning here - my first KVM machine.

Thanks again Hans - good job

Comment 74 Emil Volcheck 2009-11-26 01:04:38 UTC
Hans,

I just attempted to upgrade to F-12 with no success.
I tried with and without "linux=nodmraid", and in both
cases, it did not recognize my previous installation.

I'll try to answer your questions above.

1) I don't think so, because I never used the BIOS.
This was Fedora 4 Software RAID, so it was all done
during the disk partitioning.  If Fedora wrote some
metadata, I suppose this is possible.

2) No, this is not BIOS RAID.

3) N/A

Is there something else I can try?

Thanks,

Emil Volcheck

Comment 75 Hans de Goede 2009-11-26 08:10:12 UTC
Emil,

You should not add linux=nodmraid to the cmdline, but just "nodmraid":

1) Boot from the install DVD
2) press a key to stop the boot menu countdown (if there is one)
3) highlight the install / upgrade option (should be the default)
4) press tab
5) leave the pre-filled command line as it is and add " nodmraid" (without the quotes;
   note there is a space at the front - see the example after this list)
6) press enter
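
For illustration only (the pre-filled part of the line varies by image and is hypothetical here; only the appended " nodmraid" matters), the edited line would end up looking something like:

   vmlinuz initrd=initrd.img nodmraid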

Hopefully this helps,

Regards,

Hans

