Bug 468649

Summary: dmraid/isw cannot access ICH10R RAID metadata on member disks
Product: Fedora
Reporter: Marcus Haebler <haebler>
Component: anaconda
Assignee: Hans de Goede <hdegoede>
Status: CLOSED RAWHIDE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Priority: medium
Version: rawhide
CC: anaconda-maint-list, atorkhov, dcantrell, dimi, error, freedom2lee, hdegoede, humpf, kernel-maint, kwade, lars, mhcox, mike.reeves, notting, quintela, redhat-bugzilla, shorinji, sjensen, steve.paul, tom.schoonjans, vincew, zyta2002
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Flags: kwade: fedora_requires_release_note+
Hardware: i386
OS: Linux
Doc Type: Bug Fix
Last Closed: 2009-02-04 21:04:56 UTC
Bug Blocks: 151189, 438943

Description Marcus Haebler 2008-10-27 03:36:30 UTC
Description of problem: 
There are three disks in the system. The first two (sda & sdb) are configured as a RAID 1 on an Intel fake RAID (P45/ICH10R). The third disk (sdc) is a single disk and not configured into the RAID in any way.
The kernel/installer on the F10 Snap 3 x86_64 DVD does not recognize the RAID 1 and lists the member disks in the installer (anaconda). Interestingly, when I tried the F10 Beta the RAID was properly recognized, but the system crashed during the install for other reasons.
With Snap 3 I actually managed to install it on sdc. But I did not write GRUB out to one of the member disks for fear of destroying the RAID 1; I simply put GRUB on sdc, hoping to use that to boot Linux from boot.ini on XP.
During the post-package-install phase - probably when writing GRUB to sdc - I saw the following error messages:

ERROR: isw: Could not find disk /dev/sdb in the metadata 
ERROR: isw: Could not find disk /dev/sda in the metadata 

I managed to reproduce this by booting the rescue kernel. The errors appear after mounting the installed partitions on sdc.

Version-Release number of selected component (if applicable):
F10 Snap 3 x86_64 DVD; the kernel reports version 2.6.27.3-34.rc1.fc10.x86_64, dmraid (library) has version 1.0.0.rc15, device-mapper is 4.14.0

How reproducible:
Reliably reproducible by running the rescue kernel; the errors appear after the disks are mounted, or simply by calling "dmraid -r" from the command line. Interestingly, it lists /dev/sdb before /dev/sda.

Steps to Reproduce:
1. Boot the rescue kernel from the DVD: "linux rescue"
2. Let it mount the partitions on sdc
3. The errors will appear after the mount, or by calling "dmraid -r" (see the sketch below)
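
For triage, the same information can be pulled directly from the rescue shell (a sketch; all three are standard dmraid invocations):

# list RAID metadata found on each member disk
dmraid -r
# show the RAID sets dmraid has discovered
dmraid -s
# dry run: print the device-mapper tables activation would create
dmraid -ay -t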
  
Actual results:
ICH10R RAID1 not recognized by kernel

Expected results:
ICH10R RAID1 recognized and shown via device mapper.

Additional info:
The RAID1 is formatted as NTFS. All three disks are the same size (1TB). RAID1 status was reported as healthy ("Normal") by the BIOS before booting into Linux.
Verified RAID1 is healthy in XP as well. Using Intel Storage Manager 8.6. Write-Back cache is enabled.

Comment 1 Marcus Haebler 2008-10-27 04:43:53 UTC
Bug 364901 might be related.

Comment 2 Karsten Wade 2008-10-29 19:37:37 UTC
I'd prefer to see the bug fixed rather than document a workaround.

If a workaround is needed, please add it to this report, or put it directly in the appropriate section of wiki/Docs/Beats (either Boot or Installer).

Comment 3 shorinji 2008-11-05 20:03:05 UTC
Same problem here with an ICH9R raid.

The installer of Fedora-10-beta worked; Fedora-10-preview fails and offers only the separate disks for installation.

Comment 4 Steve Paul 2008-11-06 19:09:57 UTC
Same problem on multiple Intel ICHxR RAID volumes here. FC10 BETA properly identifies and uses all tried Intel RAID volumes.

FC10 Snap3 and now Preview do not; only the member disks are visible.

Behavior of the FC10 BETA would be preferable.

Comment 5 Karsten Wade 2008-11-07 12:03:56 UTC
We did not receive a clear answer or content to insert in the release notes, so this missed the deadline for translation and inclusion in the F10 GA release notes.

If there is a bug or behavior that needs to be noted, post that content here. We'll keep the flag enabled for now. Notes that arrive after this deadline appear on release day in a web-only version of the release notes, which then gets rolled into any future updates to the 'fedora-release-notes' package.

Comment 6 Marcus Haebler 2008-11-07 20:33:18 UTC
How is this for a bullet item in the web-only version:

During installation ICHxR RAID volumes may not properly recognized. All member disks are shown as separate entries instead. This behavior has been confirmed on ICH9R and ICH10R but may affect older ICHxR RAID volumes as well. 
[see Bug 468449]


I will get the Preview release over the weekend and double-check Steve's finding.

Maybe the other people watching this bug can help narrow down whether this only happens on certain types of RAID volumes. I can only try this on an ICH10R RAID1 volume.

Comment 7 Marcus Haebler 2008-11-07 20:48:06 UTC
OK, some typo corrections:

During installation ICHxR RAID volumes may not ne properly recognized. All member
disks are shown as separate entries instead. This behavior has been confirmed on ICH9R and ICH10R but may affect older ICHxR RAID volumes as well.
[see Bug 468649]

Comment 8 Marcus Haebler 2008-11-07 20:54:58 UTC
really sticky fingers today:

During installation ICHxR RAID volumes may not be properly recognized. All member disks are shown as separate entries instead. This behavior has been confirmed on ICH9R and ICH10R but may affect older ICHxR RAID volumes as well.
[see Bug 468649]

Comment 9 Robert Scheck 2008-11-10 20:40:26 UTC
We should address that before F10 to avoid breakage for the masses.

Comment 10 Andy 2008-11-11 18:34:31 UTC
The problem on this "bug":
You don't have a chance to get a clean F10 install (because you see both disks of the raid). You could install on sda2 or sdb2 (e.g.).

Also, the upgrade path is broken... you can badly destroy your F9 installation using preupgrade (happened to me). /dev/dm-XX is not found by GRUB using F10 kernels, and preupgrade wipes out the old kernels...

The main problem on this topic: F9 supported those fakeraids w/o any probs... so, there are (probably) a LOT of installations out there using the ICHxR driver...

Comment 11 Steve Paul 2008-11-12 05:33:22 UTC
Other reports that appear related include: 470634 & 470543

Comment 12 Bill Nottingham 2008-11-19 02:04:36 UTC
*** Bug 470634 has been marked as a duplicate of this bug. ***

Comment 13 Bill Nottingham 2008-11-19 02:06:24 UTC
Please test with dmraid-1.0.0.rc15-2.fc10.b

Comment 14 Jesse Keating 2008-11-19 03:34:05 UTC
This dmraid is tagged for final; closing. Please reopen if you have this exact bug, or file a new one if you have a different bug.

Comment 15 Li Qi 2008-11-20 01:31:22 UTC
ICH8 also has this problem, but it worked perfectly in Fedora 9.

The Fedora 10 Preview installer can't recognize ICH8 RAID volumes correctly. It always shows an ICH8 RAID0/RAID1/RAID5 volume as individual drives. Intel ICH8 RAID was supported perfectly in Fedora 9.

Steps to Reproduce:
1. Set up a RAID0 volume with two SATA disks on a motherboard with an Intel ICH8
2. Try to install Fedora 10

Actual results:
The RAID0 volume is recognized as two single disks, sda and sdb.

Expected results:
Fedora should support Intel ICH8 RAID as Fedora 9 did: the RAID0 volume should be recognized as /dev/mapper/xxxxxx

Comment 16 Jesse Keating 2008-11-20 01:44:23 UTC
Please read the comments in the bug.  This is fixed in rawhide.  Boot a boot.iso from rawhide and see for yourself.

Comment 17 Andy 2008-11-26 14:16:54 UTC
@Jesse:

F10 is NOT installable on ICH9R using the official Live/Full DVD images!

I was able to activate the dmraid on the F10 Live CD (dmraid -ay), but I'm not able to install anything on dmraids (tried F10 DVD and F10 Live!)

Bad Bad Bad start into F10 for me :-(

Luckily I'm not on an ICH9R array @ home... (but sadly @ work)

as already said:
The kernel/installer on the F10 x86_64 DVD does not recognize the RAID 1 and lists the member disks in the installer (anaconda).

QUO VADIS ?
Reopen PLS ! >:-(

Btw: Seems to be an anaconda-related bug... with the dmraid update, dmraid detects the array (manually activated on the Live CD), but THIS particular bug is NOT SOLVED !!!

Comment 18 Robert Scheck 2008-11-26 15:26:16 UTC
Reopening as requested.

Comment 19 Steve Paul 2008-11-26 18:27:27 UTC
I can also verify that the behavior in the final F10 release is the same as in the preview.

At the very minimum, release notes should have an emergency addendum to warn F9 upgrade users with ICH8R/9R/10R volumes to avoid F10 until a fix can be applied.

Comment 20 Bill Nottingham 2008-11-26 19:07:24 UTC
Given the new symptom (works post-boot, but not in anaconda), moving to anaconda.

Steve: from what I understand, a yum upgrade to F10 on such a box would work fine.

Comment 21 Steve Paul 2008-11-26 22:01:34 UTC
Hi Bill: Correct, yum upgrade will not be an issue.  The concern would be the media & dvd upgrade paths.

Comment 22 Karsten Wade 2008-11-26 22:55:41 UTC
(In reply to comment #19)
> I can also verify behavior is same as preview in the final F10 release.
> 
> At the very minimum, release notes should have an emergency addendum to warn F9
> upgrade users with ICH8R/9R/10R volumes to avoid F10 until a fix can be
> applied.

It is sufficient and far easier at this time to add a note (with link to this bug report) to the common bugs page:

https://fedoraproject.org/wiki/Bugs/Common

This is linked from the 'Common Bugs' section in the release notes.

I flipped the release notes flag to + and left this as blocking the release notes tracker bug; we'll review the status again as we prepare the next web-only update for the notes, which would become an update to the 'fedora-release-notes' package.  That could be as soon as the coming weekend (29, 30 Nov.)

Comment 23 Andy 2008-11-27 10:36:58 UTC
@Steve Paul:
Yes... yum upgrade should work. What's also not working is preupgrade. I think this is also related to anaconda.

After I tried the preupgrade way, I just installed it manually:

had to manually remove some 3rd party rpms (nvidia), and then
rpm -Uvh /var/cache/yum/preupgrade/*.rpm

after a reboot (and relabeling SELinux /), and some package-cleanup work, I'm running a clean F10 now on ICH9R :)

Means: You're only able to use ICH9R on F10 when you install F9 first and then upgrade the manual rpm/yum way.

Only applicable for non-beginners.
Linux beginners will mess up their disks using the usual F10 media and the standard way :-/
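
Pieced together, the manual path above looks roughly like this (a sketch; the nvidia package names are placeholders, and it assumes preupgrade has already downloaded the F10 packages):

# remove conflicting 3rd-party rpms first (actual names vary per system)
rpm -e kmod-nvidia xorg-x11-drv-nvidia
# upgrade from the packages preupgrade cached
rpm -Uvh /var/cache/yum/preupgrade/*.rpm
# force an SELinux relabel of / on the next boot, then reboot
touch /.autorelabel
reboot
# afterwards, tidy up leftover packages (package-cleanup is in yum-utils)
package-cleanup --orphans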

Additional info on my dmraid devices:
/dev/sdb: isw, "isw_hbhjgidgh", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_hbhjgidgh", GROUP, ok, 488397166 sectors, data@ 0

*** Group superset isw_hbhjgidgh
--> Subset
name   : isw_hbhjgidgh_RAIDVOL0
size   : 488390656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

Comment 24 Dimi Paun 2008-11-28 05:09:30 UTC
I have the same problem: ICH8R not being recognized by dmraid.
I am currently running F9, and it's a mixed story: the 1st set of drives (RAID 1) were recognized OK by F9 and worked from the beginning with no problems.

Recently I've installed (on the same ICH8R) another pair of drives (also RAID 1).
Well, these are _not_ recognized automatically by F9; however, if I call dmraid manually they initialize just fine.

Unfortunately anaconda in F10 doesn't recognize any of them.

Here are the details of my drives:

[root@dimi ~]# dmraid -r
/dev/sdd: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdc: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdb: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0


(The old ones that work fine are "isw_bhdbadjhie", the new ones are "isw_bejeabajig".)

Comment 25 Hans de Goede 2009-01-16 22:05:15 UTC
I've managed to reproduce and, I believe, fix this using a system with isw raid.
I've provided updates.img files for this here:
http://people.atrpms.net/~hdegoede/updates474399-i386.img
http://people.atrpms.net/~hdegoede/updates474399-x86_64.img

To use this with an i386 install using isw "hardware" raid, type the following at the installer boot screen (press <tab> to get to the cmdline editor):
updates=http://people.atrpms.net/~hdegoede/updates474399-i386.img

For an x86_64 install use:
updates=http://people.atrpms.net/~hdegoede/updates474399-x86_64.img

Please let me know if this resolves the issue for you.

Comment 26 lars@bistromatic.de 2009-01-17 17:05:30 UTC
Using http://people.atrpms.net/~hdegoede/updates474399-x86_64.img works. At least anaconda recognizes my dmraid device isw_cgaadiajid_Volume0 (ich9r mirror), so I can select the F9 installation on that device and run an F10 update.

Unfortunately after the update the dmraid device is not started.
At least lvscan gives my errors like:

# lvscan -v
    Finding all logical volumes
  Found duplicate PV PsbFBIQPS4cr4VIGKDWkp1WMuWdDc8cj: using /dev/sdb7 not /dev/sda7
  ACTIVE            '/dev/VolGroup00/LogVol00' [97.66 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [120.66 GB] inherit

Calling "/sbin/dmsetup table" during rc.sysinit reports "No Devices found".
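
A few quick checks from the running system (a sketch; the volume name matches the device named above) to confirm whether the set was ever activated:

# should list isw_cgaadiajid_Volume0 as an active set
dmraid -s
# an empty table here means device-mapper was never set up at boot
dmsetup table
# manual activation as a stopgap
dmraid -ay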

Comment 27 lars@bistromatic.de 2009-01-18 09:31:29 UTC
The rescue part of the F10 install CD is of little use, too. Since that system doesn't recognize any dmraid volumes, it fails to mount sysimage, complaining about duplicate UUIDs.
Had to use the F9 CD to fix the boot/grub installation.

Comment 28 Hans de Goede 2009-01-19 14:00:46 UTC
(In reply to comment #27)
> The rescue part of the F10 install CD is of little use too. Since that system
> doesn't recognize any dmraid volumes, it fails to mount sysimage, complaining
> about double UUIDs.

You can use an updates.img in rescue mode too.

> Had to use the F9 CD to fix the boot/grub installation.

What did you do to fix it?

Comment 29 lars@bistromatic.de 2009-01-23 07:39:49 UTC
Actually I failed miserably.

Right after the update, booting was OK. The system was using /dev/sd* devices instead of /dev/mapper/*, but it worked.
At some point I changed /boot/grub/menu.lst, deleting the 'rhgb' kernel option. Afterwards my grub menu went missing; I got a grub shell only. I was able to 'cat /grub/menu.lst', but I got no boot menu. So I tried to change /boot/grub/device.map - using /dev/sda instead of /dev/mapper/isw_cgaadiajid_Volume0 - and did a grub-install from my F10 system.
That made things only worse, because now grub didn't boot at all.
So finally I used the F9 rescue CD, changed device.map back and ran grub-install. Now I'm back at the grub shell, but at least I'm able to boot.
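
For anyone hitting the same thing: with grub legacy on a fakeraid set, the usual approach is to keep device.map pointing at the mapper device and reinstall from the grub shell (a sketch; the (hd0,0) root assumes /boot is the first partition):

# /boot/grub/device.map should map hd0 to the whole raid set, e.g.:
# (hd0) /dev/mapper/isw_cgaadiajid_Volume0
grub> root (hd0,0)
grub> setup (hd0)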

Comment 30 Vince Worthington 2009-02-04 20:38:35 UTC
Hello,

just to throw a little more into the fray... :(

Using the updates= cmdline to install from F10 gold DVD media and apply the x86_64 update .img *DOES* allow anaconda to recognize existing isw fakeraid volumes - I have a large-ish auxiliary ext3 filesystem (/storage) on a RAID0 partition.

When starting up, Anaconda uses the network connection to retrieve the update .img, and if I Ctrl-Alt-F2 to the command-prompt virtual console once the GUI starts, I can see and use all of the partitions on the isw RAID0 device (I have a fat32, the /storage partition, and a couple of NTFS partitions used in Windows XP on it).

I'm not actually installing to the RAID0; it's just used for auxiliary storage - the / and /boot filesystems and the swap partition are on a 3rd hard drive (/dev/sdb). So I used the "create custom layout" option and edited the fat32 and ext3 /storage filesystems to assign mount points for them but not format them.

Install completes successfully but afterwards the partitions on the RAID0 are inaccessible.  The /dev/mapper and /dev/dm-* entries get created but if I try to mount any of them it reports "you must specify filesystem type" -- if I add a -t ext3 (when trying to mount /storage on the isw) it complains about not finding a valid superblock.

I found the BZ where it was mentioned an updated nash, mkinitrd, and (forgotten package name) python device package were built - tried installing these and making a new initrd.img but afterwards the partitions were still not usable.

I tried installing again this morning, adding the fedora 10 updates repo to the installation to have it install the latest mkinitrd/nash/etc before making initrd.img, in case it would help.  Same problem - /dev/mapper/isw* entries are created but you can't mount the partitions.  It occurred to me afterwards that these rawhide packages probably wouldn't be in the updates repo.. but I digress.

I can boot from an F9 live cd (i386, even) and 'dmraid -ay' to activate the RAID0 fakeraid isw - and mount the partitions and get to the contents without problem.

I haven't unpacked the initrd.img file yet on F10 to see what's there or try tinkering with it.

If it helps any, here is output of 'dmsetup table' from this rig under F10:

[root@pharaoh ~]# dmsetup table
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear 253:1 1320976818
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear 253:1 860184423
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 8:0 0 8:32 0
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear 253:1 40981878
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear 253:1 16128
isw_fibdbbjhh_RAID0_1p1: 0 1953504000 linear 253:0 16065

and output of 'dmraid -ay -t':

[root@pharaoh ~]# dmraid -ay -t
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 /dev/sda 0 /dev/sdc 0
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 16128
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 40981878
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 860184423
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 1320976818

The output of 'ls /dev/mapper' is a little interesting, perhaps:

[root@pharaoh ~]# ls /dev/mapper
crw-rw---- 1 root root  10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253,  0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253,  1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1
brw-rw---- 1 root root 253,  2 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p5
brw-rw---- 1 root root 253,  3 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p6
brw-rw---- 1 root root 253,  4 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p7
brw-rw---- 1 root root 253,  5 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p8

Interestingly, when I tried to deactivate them with 'dmraid -an', I still ended up with two entries under /dev/mapper:

crw-rw---- 1 root root  10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253,  0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253,  1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1

and a dmesg line indicating one is currently open:

device-mapper: ioctl: unable to remove open device isw_fibdbbjhh_RAID0_1


Comparing the output of 'dmsetup table' to the output of 'ls /dev/mapper', it almost looks like p5-p8 are "linked" to the wrong device major/minor.  'dmsetup table' shows them linked to major/minor 253:1 (isw_fibdbbjhh_RAID0_1p1) - not 253:0 (isw_fibdbbjhh_RAID0_1).  And this may explain why I saw what I did when trying to deactivate with 'dmraid -an'.

Under F9 live CD, all of them are "linked" to the block major/minor representing the whole device, not partition 1, and I am able to use the devices.
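
One quick way to cross-check this (a sketch; all standard dmsetup/ls invocations):

# major:minor of every mapped node
ls -l /dev/mapper
# name, major, minor and open count in one table
dmsetup info -c
# the linear targets should reference 253:0 (the whole set), not 253:1 (p1)
dmsetup table | sort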

Thought this might be of interest, even if I'm not completely sure how to go about fixing it.

Would this be more appropriate as a separate bug?

Thanks,
Vince

Comment 31 Hans de Goede 2009-02-04 20:59:24 UTC
(In reply to comment #30)
> Hello,
> 
> Would this be more appropriate as a separate bug?
> 

Yes, please file a separate bug for this, with dmraid as the component; this is not mkinitrd-related, as you want to use the raidset only after boot.

Comment 32 Hans de Goede 2009-02-04 21:04:56 UTC
Lars,

I've been working some more on this and there are 2 separate issues:
1) anaconda does not recognize the dmraid set; this is fixed by the updates.img
   provided.

2) mkinitrd creates a non-booting system on some raid setups. I believe what you
   are seeing now is this bug; it is tracked in bug 476818.

As the anaconda part is fixed, I'm closing this bug. If you have time, please check whether the updated mkinitrd in bug 476818 fixes the non-booting issue. To test this:

1) Install with the updates.img from this bug (or use an existing install)
2) Boot into rescue mode using the updates.img (see the boot-line sketch below)
3) Follow the instructions for testing in bug 476818.
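
For steps 1 and 2, the boot line would look something like this (a sketch, combining the "linux rescue" option from the original report with the updates.img URL from comment 25):

linux rescue updates=http://people.atrpms.net/~hdegoede/updates474399-x86_64.img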

Thanks!

Comment 33 lars@bistromatic.de 2009-02-06 13:44:53 UTC
Thanks a lot. The whole fix (bug 468649 + bug 476818) works for me.
Anyway, since the update without the fixed packages from bug 476818 messes with the filesystems, the complete procedure should look like this (a sketch of steps 2-4 follows the list):

1) do the update as described in comment 25
2) right after the update, boot into rescue mode from the CD
3) install the updated packages from bug 476818
4) rebuild the initrd
5) reboot
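
From the rescue shell, steps 2-4 might look roughly like this (a sketch; the kernel version and package file names are placeholders, and the exact package set comes from bug 476818):

# switch into the installed system
chroot /mnt/sysimage
# install the updated packages from bug 476818 (file names are placeholders)
rpm -Uvh mkinitrd-*.rpm nash-*.rpm
# rebuild the initrd for the installed kernel (version is a placeholder)
mkinitrd -f /boot/initrd-<version>.img <version>
exit
reboot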

Comment 34 Mike Reeves 2009-03-09 21:15:50 UTC
Hi - I tried the steps listed above for Fedora 10 x86_64 and it did not work. I have the ICH8R configuration on an Intel DG965WH set to RAID5. I used the following parameters -

ext4 updates=http://people.atrpms.net/~hdegoede/updates474399-x86_64.img 

The updates downloaded, but I got errors reading the raid configuration.

I am going to try the Fedora 11 alpha, because it seems the drop currently out there is the same one I have, which does not work. Hopefully the code has been fixed in that drop.

Comment 35 Hans de Goede 2009-03-10 08:09:36 UTC
(In reply to comment #34)
> Hi - I tried the steps listed above for Fedora 10 x64_64 and it did not work. I
> have the ICH8R configuration on an Intel DG965WH set to RAID5. I used the
> following parameters -
> 

fakeraid (motherboard raid) RAID 5 is not supported, as dmraid RAID 5 support is not yet in the upstream kernel.

Comment 36 Tom Schoonjans 2009-09-01 09:20:08 UTC
Hello,


I would like to install F10 using the instructions outlined here, but it seems the updates474399* files are no longer present at the atrpms.net website.

Could they be put back?

Thanks in advance,

Tom

Comment 37 Hans de Goede 2009-09-01 10:43:04 UTC
(In reply to comment #36)
> Hello,
> 
> 
> I would like to install F10 using the instructions outlined here, but it seems
> the updates474399* files are no longer present at the atrpms.net website.
> 
> Could they be put back?
> 

I'm afraid I no longer have them. Why not just install Fedora 11?

Comment 38 Tom Schoonjans 2009-09-01 11:19:50 UTC
The computer I'm installing would become part of an F10 cluster.
I would like to avoid installing F11 if possible, since the home folder would be mounted from an F10 NFS server.

Comment 39 Hans de Goede 2009-09-01 11:45:51 UTC
(In reply to comment #38)
> The computer I'm installing would become part of a F10 cluster. 
> I would like to avoid having to install F11 if possible to avoid problems since
> the home folder would be mounted from an F10 NFS server  

In that case you are probably best off just disabling the BIOS RAID. Be sure to reset the disks to non-RAID in the OROM before turning off the OROM altogether.

Comment 40 Tom Schoonjans 2009-09-01 12:02:27 UTC
The computer would be dual-boot WinXP64/F10. I already managed to get WinXP64 installed on the fakeraid, so it'd be a shame not to get F10 on it as well.

If you're saying there's no other way, then I'll disable the RAID and reinstall WinXP64.

Comment 41 Hans de Goede 2009-09-01 12:17:09 UTC
(In reply to comment #40)
> The computer would be dual boot WinXP64/F10. I already managed to get WinXP64
> installed on the fakeraid  so it 'd be a shame not to get F10 on it as well.
> 
> If you're saying there's no other way, then I'll disable the RAID and reinstall
> WinXP64.  

If you want F-10 that is the best way forward, yes.

Comment 42 Tom Schoonjans 2009-09-01 12:30:34 UTC
OK, thanks for the advice.

Tom

Comment 43 lars@bistromatic.de 2009-09-01 16:51:51 UTC
If you still need updates474399-x86_64.img, you can get a copy from:
http://bistromatic.de/updates474399-x86_64.img

Comment 44 Tom Schoonjans 2009-09-01 16:55:11 UTC
Thanks, I'll give it a try.