Bug 468649
| Summary: | dmraid/isw cannot access ICH10R RAID metadata on member disks |
|---|---|
| Product: | Fedora |
| Reporter: | Marcus Haebler <haebler> |
| Component: | anaconda |
| Assignee: | Hans de Goede <hdegoede> |
| Status: | CLOSED RAWHIDE |
| QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | high |
| Priority: | medium |
| Version: | rawhide |
| CC: | anaconda-maint-list, atorkhov, dcantrell, dimi, error, freedom2lee, hdegoede, humpf, kernel-maint, kwade, lars, mhcox, mike.reeves, notting, quintela, redhat-bugzilla, shorinji, sjensen, steve.paul, tom.schoonjans, vincew, zyta2002 |
| Keywords: | Reopened |
| Flags: | kwade: fedora_requires_release_note+ |
| Hardware: | i386 |
| OS: | Linux |
| Doc Type: | Bug Fix |
| Last Closed: | 2009-02-04 21:04:56 UTC |
| Bug Blocks: | 151189, 438943 |
Description
Marcus Haebler
2008-10-27 03:36:30 UTC
Bug 364901 might be related.

I'd prefer to see the bug fixed rather than document a workaround. If a workaround is needed, please add it to this report, or put it directly in the appropriate section of wiki/Docs/Beats (either Boot or Installer).

Same problem here with an ICH9R RAID. The installer of Fedora-10-beta worked; Fedora-10-preview fails and offers only the separate disks for installation.

Same problem on multiple Intel ICHxR RAID volumes here. FC10 BETA properly identifies and uses all tried Intel RAID volumes. FC10 Snap3 and now Preview do not; only the member disks are visible. The behavior of the FC10 BETA would be preferable.

We did not receive a clear answer or content to insert in the release notes, so this missed the deadline for translation and inclusion in the F10 GA release notes. If there is a bug or behavior that needs to be noted, post that content here. We'll keep the flag enabled for now. Notes that arrive after this deadline appear on release day in a web-only version of the release notes, which then gets rolled in to any future updates to the 'fedora-release-notes' package.

How is this for a bullet item in the web-only version: During installation, ICHxR RAID volumes may not be properly recognized. All member disks are shown as separate entries instead. This behavior has been confirmed on ICH9R and ICH10R but may affect older ICHxR RAID volumes as well. [see Bug 468649]

I will get the Preview release over the weekend and double-check Steve's finding. Maybe the other people watching this bug can help narrow down whether this only happens on certain types of RAID volumes. I can only try this on an ICH10R RAID1 volume.
We should address that before F10 to avoid breakage for the masses. The problem with this bug: you don't have a chance to get a clean F10 install, because you see both disks of the RAID. You could install on sda2 or sdb2, for example. Also, the upgrade path is broken: you can badly destroy your F9 installation using preupgrade (happened to me). /dev/dm-XX is not found by GRUB using F10 kernels, and preupgrade wipes out the old kernels. The main problem with this topic: F9 supported those ICHxR fakeraids without any problems, so there are probably a LOT of installations out there using them.

Other reports that appear related include 470634 and 470543.

*** Bug 470634 has been marked as a duplicate of this bug. ***

Please test with dmraid-1.0.0.rc15-2.fc10.b. This dmraid is tagged for final, so I'm closing. Please reopen if you have this exact bug, or file a new one if you have a different bug.

ICH8 also has this problem, but it worked perfectly in Fedora 9. The Fedora 10 Preview installer can't recognize ICH8 RAID arrays correctly: it always presents an ICH8 RAID0/RAID1/RAID5 array as separate single drives. Intel ICH8 RAID was supported perfectly in Fedora 9.

Steps to Reproduce:
1. Set up a RAID0 array with two SATA disks on a motherboard with Intel ICH8.
2. Try to install Fedora 10.

Actual results: The RAID0 array was recognized as two single disks, sda and sdb.

Expected results: Fedora should support Intel ICH8 RAID as Fedora 9 did; the RAID0 array should be recognized as /dev/mapper/xxxxxx.

Please read the comments in the bug. This is fixed in rawhide. Boot a boot.iso from rawhide and see for yourself.
@Jesse: F10 is NOT installable on ICH9R using the official Live/Full DVD images! I was able to activate the dmraid on the F10 Live CD (dmraid -ay), but I'm not able to install anything on dmraids (tried F10 DVD and F10 Live). Bad, bad, bad start into F10 for me :-( Luckily I'm not on an ICH9R array at home (but sadly at work). As already said: the kernel/installer on the F10 x86_64 DVD does not recognize the RAID 1 and lists the member disks in the installer (anaconda). QUO VADIS? Please reopen! >:-( Btw: this seems to be an anaconda-related bug. dmraid detects the set with the dmraid update (on the Live CD, using manual activation), but THIS particular bug is NOT SOLVED!!!

Reopening as requested.

I can also verify the behavior is the same as Preview in the final F10 release. At the very minimum, the release notes should have an emergency addendum to warn F9 upgrade users with ICH8R/9R/10R volumes to avoid F10 until a fix can be applied.

Given the new symptom (works post-boot, but not in anaconda), moving to anaconda. Steve: from what I understand, a yum upgrade to F10 on such a box would work fine.

Hi Bill: Correct, a yum upgrade will not be an issue. The concern would be the media and DVD upgrade paths.

(In reply to comment #19)
> I can also verify behavior is same as preview in the final F10 release.
>
> At the very minimum, release notes should have an emergency addendum to warn F9
> upgrade users with ICH8R/9R/10R volumes to avoid F10 until a fix can be
> applied.

It is sufficient and far easier at this time to add a note (with a link to this bug report) to the common bugs page: https://fedoraproject.org/wiki/Bugs/Common. This is linked from the 'Common Bugs' section in the release notes. I flipped the release notes flag to + and left this as blocking the release notes tracker bug; we'll review the status again as we prepare the next web-only update for the notes, which would become an update to the 'fedora-release-notes' package. That could be as soon as the coming weekend (29, 30 Nov.).

@Steve Paul: Yes...
yum upgrade should work. What's also not working is preupgrade; I think this is also related to anaconda. After I tried the preupgrade way, I just installed it manually: I had to manually remove some 3rd-party rpms (nvidia), and then run rpm -Uvh /var/cache/yum/preupgrade/*.rpm. After a reboot (and relabeling SELinux /) and some package-cleanup work, I'm running a clean F10 now on ICH9R :) This means you're only able to use ICH9R on F10 when you install F9 first and then upgrade the manual rpm/yum way. Only applicable for non-beginners; Linux beginners will mess up their disks using the usual F10 media and the standard way :-/

Additional info on my dmraid devices:

/dev/sdb: isw, "isw_hbhjgidgh", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_hbhjgidgh", GROUP, ok, 488397166 sectors, data@ 0

*** Group superset isw_hbhjgidgh
--> Subset
name   : isw_hbhjgidgh_RAIDVOL0
size   : 488390656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

I have the same problem: ICH8R not being recognized by dmraid. I am currently running F9, and it's a mixed story: the 1st set of drives (RAID 1) were recognized OK by F9 and worked from the beginning with no problems. Recently I installed (on the same ICH8R) another pair of drives (also RAID 1). These ones are _not_ recognized automatically by F9, but if I call dmraid manually they initialize just fine. Unfortunately, anaconda in F10 doesn't recognize any of them. Here are the details of my drives:

[root@dimi ~]# dmraid -r
/dev/sdd: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdc: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdb: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0

(The old ones that work fine are "isw_bhdbadjhie"; the new ones are "isw_bejeabajig".)

I've managed to reproduce this, and I believe I have fixed it, using a system with isw raid.
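As an aside, listings in the form of the `dmraid -r` output above can be summarized per RAID set with a small awk sketch. The sample lines are the ones quoted in this report; the script itself is only an illustration, not part of the original thread:

```shell
# Group dmraid -r member disks by isw set name (sample data from this bug).
dmraid_r='/dev/sdd: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdc: isw, "isw_bejeabajig", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdb: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0
/dev/sda: isw, "isw_bhdbadjhie", GROUP, ok, 488397166 sectors, data@ 0'

summary=$(printf '%s\n' "$dmraid_r" | awk -F', ' '{
    dev = $1; sub(/:.*$/, "", dev)    # "/dev/sdd: isw" -> "/dev/sdd"
    set = $2; gsub(/"/, "", set)      # strip quotes from the set name
    members[set] = members[set] " " dev
} END { for (s in members) print s ":" members[s] }' | sort)

printf '%s\n' "$summary"
```

Running the same pipeline against live `dmraid -r` output would show at a glance whether every expected member of each set is present.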
I've provided updates.img files for this here:

http://people.atrpms.net/~hdegoede/updates474399-i386.img
http://people.atrpms.net/~hdegoede/updates474399-x86_64.img

To use this with an i386 install using isw "hardware" raid, type the following at the installer boot screen (press <tab> to get to the cmdline editor):

updates=http://people.atrpms.net/~hdegoede/updates474399-i386.img

For an x86_64 install use:

updates=http://people.atrpms.net/~hdegoede/updates474399-x86_64.img

Please let me know if this resolves the issue for you.

Using http://people.atrpms.net/~hdegoede/updates474399-x86_64.img works. At least anaconda recognizes my dmraid device isw_cgaadiajid_Volume0 (ICH9R mirror), so I can select the F9 installation on that device and run an F10 update. Unfortunately, after the update the dmraid device is not started. lvscan gives me errors like:

# lvscan -v
Finding all logical volumes
Found duplicate PV PsbFBIQPS4cr4VIGKDWkp1WMuWdDc8cj: using /dev/sdb7 not /dev/sda7
ACTIVE '/dev/VolGroup00/LogVol00' [97.66 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [120.66 GB] inherit

Calling "/sbin/dmsetup table" during rc.sysinit reports "No devices found". The rescue part of the F10 install CD is of little use too: since that system doesn't recognize any dmraid volumes, it fails to mount sysimage, complaining about duplicate UUIDs. I had to use the F9 CD to fix the boot/grub installation.

(In reply to comment #27)
> The rescue part of the F10 install CD is of little use too. Since that system
> doesn't recognize any dmraid volumes, it fails to mount sysimage, complaining
> about double UUIDs.

You can use an updates.img in rescue mode too.

> Had to use the F9 CD to fix the boot/grub installation.

What did you do to fix it?

Actually, I failed miserably. Right after the update, booting was OK. The system was using /dev/sd* devices instead of /dev/mapper/*, but it worked. At some point I changed /boot/grub/menu.lst, deleting the 'rhgb' kernel option. Afterwards my grub menu went missing.
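The "Found duplicate PV" warning above appears because LVM scans the raw member disks (/dev/sda7, /dev/sdb7) as well as the activated dm device. One common way to quiet this, as a general LVM technique rather than something proposed in this thread, is an lvm.conf filter that hides the raw isw members from LVM's device scan; the device names below are the ones from this report and would differ on other systems:

```
# /etc/lvm/lvm.conf (sketch): prefer dm devices, skip the raw isw members
filter = [ "a|^/dev/mapper/|", "r|^/dev/sd[ab]|", "a|.*|" ]
```

Patterns are matched in order, so /dev/mapper/* devices are accepted before the member-disk reject rule can apply.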
I got a grub shell only. I was able to 'cat /grub/menu.lst', but I got no boot menu. So I tried to change /boot/grub/device.map (using /dev/sda instead of /dev/mapper/isw_cgaadiajid_Volume0) and did a grub-install from my F10 system. That made things only worse, because now grub didn't boot at all. So finally I used the F9 rescue CD, changed device.map back, and ran grub-install. Now I'm back on the grub shell, but at least I'm able to boot.

Hello, just to throw a little more into the fray... :( Using the updates= cmdline to install from F10 gold DVD media and apply the x86_64 update .img *DOES* allow anaconda to recognize existing isw fakeraid volumes. I have a large-ish auxiliary ext3 filesystem (/storage) on a RAID0 partition. When starting up, anaconda uses the network connection to retrieve the update .img, and if I Ctrl-Alt-F2 to the command-prompt virtual console once the GUI starts, I can see and use all of the partitions on the isw RAID0 device (I have a fat32, the /storage partition, and a couple of NTFS partitions used in Windows XP on it). I'm not actually installing to the RAID0; it's just used for auxiliary storage. The / and /boot filesystems and swap partition are on a 3rd hard drive (/dev/sdb). So I used the "create custom layout" option and "edit"ed the fat32 and ext3 /storage filesystems to assign a mount point for them but not format them. The install completes successfully, but afterwards the partitions on the RAID0 are inaccessible. The /dev/mapper and /dev/dm-* entries get created, but if I try to mount any of them it reports "you must specify filesystem type", and if I add -t ext3 (when trying to mount /storage on the isw) it complains about not finding a valid superblock. I found the BZ where it was mentioned that an updated nash, mkinitrd, and (forgotten package name) python device package were built; I tried installing these and making a new initrd.img, but afterwards the partitions were still not usable.
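As background on the device.map episode above: when / and /boot live on a dmraid set, grub's device.map is generally expected to point the BIOS boot disk at the /dev/mapper node rather than at a member disk, which matches the fix of changing it back before grub-install. A sketch using the volume name from this report (general grub guidance, not a change confirmed in this thread):

```
# /boot/grub/device.map (sketch; volume name taken from this report)
(hd0) /dev/mapper/isw_cgaadiajid_Volume0
```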
I tried installing again this morning, adding the Fedora 10 updates repo to the installation so it would install the latest mkinitrd/nash/etc. before making initrd.img, in case it would help. Same problem: /dev/mapper/isw* entries are created but you can't mount the partitions. It occurred to me afterwards that these rawhide packages probably wouldn't be in the updates repo, but I digress. I can boot from an F9 live CD (i386, even), run 'dmraid -ay' to activate the RAID0 fakeraid isw, and mount the partitions and get to the contents without problem. I haven't unpacked the initrd.img file yet on F10 to see what's there or try tinkering with it.

If it helps any, here is the output of 'dmsetup table' from this rig under F10:

[root@pharaoh ~]# dmsetup table
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear 253:1 1320976818
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear 253:1 860184423
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 8:0 0 8:32 0
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear 253:1 40981878
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear 253:1 16128
isw_fibdbbjhh_RAID0_1p1: 0 1953504000 linear 253:0 16065

and the output of 'dmraid -ay -t':

[root@pharaoh ~]# dmraid -ay -t
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 /dev/sda 0 /dev/sdc 0
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 16128
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 40981878
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 860184423
isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear /dev/mapper/isw_fibdbbjhh_RAID0_1 1320976818

The output of 'ls -l /dev/mapper' is a little interesting, perhaps:

[root@pharaoh ~]# ls -l /dev/mapper
crw-rw---- 1 root root 10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253, 0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253, 1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1
brw-rw---- 1 root root 253, 2 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p5
brw-rw---- 1 root root 253, 3 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p6
brw-rw---- 1 root root 253, 4 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p7
brw-rw---- 1 root root 253, 5 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p8

Interestingly, when I tried to deactivate them with 'dmraid -an', I still ended up with two entries under /dev/mapper:

crw-rw---- 1 root root 10, 63 2009-02-04 08:31 control
brw-rw---- 1 root root 253, 0 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1
brw-rw---- 1 root root 253, 1 2009-02-04 08:31 isw_fibdbbjhh_RAID0_1p1

and a dmesg line indicating one is currently open:

device-mapper: ioctl: unable to remove open device isw_fibdbbjhh_RAID0_1

Comparing the output of 'dmsetup table' to the output of 'ls /dev/mapper', it almost looks like p5-p8 are "linked" to the wrong device major/minor: 'dmsetup table' shows them linked to major/minor 253:1 (isw_fibdbbjhh_RAID0_1p1), not 253:0 (isw_fibdbbjhh_RAID0_1). This may explain what I saw when trying to deactivate with 'dmraid -an'. Under the F9 live CD, all of them are "linked" to the block major/minor representing the whole device, not partition 1, and I am able to use the devices. Thought this might be of interest, even if I'm not completely sure how to go about fixing it. Would this be more appropriate as a separate bug? Thanks, Vince

(In reply to comment #30)
> Hello,
>
> Would this be more appropriate as a separate bug?

Yes, please file a separate bug for this, with dmraid as the component; this is not mkinitrd-related, as you want to use the raidset only after boot.

Lars, I've been working some more on this, and there are 2 separate issues:
1) anaconda does not recognize the dmraid set; this is fixed by the updates.img provided.
2) mkinitrd creates a non-booting system on some raid setups. I believe what you are seeing now is this bug; it is tracked in bug 476818.

As the anaconda part is fixed, I'm closing this bug.
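To make the stacking mismatch Vince describes easier to see, here is a small sketch (not from the thread) that pulls the backing major:minor out of each linear target in the 'dmsetup table' output quoted above. On this rig, p5-p8 sit on 253:1 (partition 1) while only p1 sits on 253:0 (the whole striped set):

```shell
# Extract "partition -> backing dm device" pairs from dmsetup table output
# (sample data is the output quoted above in this bug).
dmsetup_table='isw_fibdbbjhh_RAID0_1p8: 0 61448562 linear 253:1 1320976818
isw_fibdbbjhh_RAID0_1p7: 0 460792332 linear 253:1 860184423
isw_fibdbbjhh_RAID0_1: 0 1953535488 striped 2 256 8:0 0 8:32 0
isw_fibdbbjhh_RAID0_1p6: 0 819202482 linear 253:1 40981878
isw_fibdbbjhh_RAID0_1p5: 0 40965687 linear 253:1 16128
isw_fibdbbjhh_RAID0_1p1: 0 1953504000 linear 253:0 16065'

deps=$(printf '%s\n' "$dmsetup_table" | awk '$4 == "linear" {
    name = $1; sub(/:$/, "", name)    # drop the trailing colon on the dm name
    print name, "->", $5              # $5 is the major:minor it maps onto
}' | sort)

printf '%s\n' "$deps"
```

Against live output, any partition target whose right-hand side is not the whole-set device would be the anomaly Vince spotted.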
If you have time, please see if the updated mkinitrd in bug 476818 fixes the not-booting issue. To test this:
1) Install with the updates.img from this bug (or use an existing install)
2) Boot into rescue mode using the updates.img
3) Follow the instructions for testing in bug 476818.
Thanks!

Thanks a lot. The whole fix (bug 468649 + bug 476818) works for me. Anyway, since the update without the fixed packages from bug 476818 messes with the filesystems, the complete procedure should look like:
1) Do the update as described in comment 25
2) Right after the update, boot into rescue mode from CD
3) Install the updated packages from bug 476818
4) Rebuild the initrd
5) Reboot

Hi - I tried the steps listed above for Fedora 10 x86_64 and it did not work. I have the ICH8R configuration on an Intel DG965WH set to RAID5. I used the following parameters:

ext4 updates=http://people.atrpms.net/~hdegoede/updates474399-x86_64.img

The updates downloaded, but I got errors reading the raid configuration. I am going to try the Fedora 11 alpha, because it seems that the drop that is out there currently is the same one that I have, which does not work. Hopefully the code has been fixed in that drop.

(In reply to comment #34)
> Hi - I tried the steps listed above for Fedora 10 x86_64 and it did not work. I
> have the ICH8R configuration on an Intel DG965WH set to RAID5. I used the
> following parameters -

Fakeraid (motherboard raid) RAID 5 is not supported, as dmraid RAID 5 support is not yet in the upstream kernel.

Hello, I would like to install F10 using the instructions outlined here, but it seems the updates474399* files are no longer present on the atrpms.net website. Could they be put back? Thanks in advance, Tom

(In reply to comment #36)
> Hello,
>
> I would like to install F10 using the instructions outlined here, but it seems
> the updates474399* files are no longer present at the atrpms.net website.
>
> Could they be put back?

I'm afraid I no longer have them; why not just install Fedora 11?
The computer I'm installing would become part of an F10 cluster. I would like to avoid having to install F11 if possible, to avoid problems since the home folder would be mounted from an F10 NFS server.

(In reply to comment #38)
> The computer I'm installing would become part of a F10 cluster.
> I would like to avoid having to install F11 if possible to avoid problems since
> the home folder would be mounted from an F10 NFS server

In that case you are probably best off just disabling the BIOS RAID. Be sure to reset the disks to NON-RAID in the OROM before turning off the OROM altogether.

The computer would be dual-boot WinXP64/F10. I already managed to get WinXP64 installed on the fakeraid, so it'd be a shame not to get F10 on it as well. If you're saying there's no other way, then I'll disable the RAID and reinstall WinXP64.

(In reply to comment #40)
> The computer would be dual boot WinXP64/F10. I already managed to get WinXP64
> installed on the fakeraid so it'd be a shame not to get F10 on it as well.
>
> If you're saying there's no other way, then I'll disable the RAID and reinstall
> WinXP64.

If you want F-10, that is the best way forward, yes.

OK, thanks for the advice. Tom

If you still need updates474399-x86_64.img you can get a copy from http://bistromatic.de/updates474399-x86_64.img

Thanks, I'll give it a try.
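For anyone retracing the five-step recovery procedure from earlier in the thread, here is a shell outline of it. This is only a sketch: the kernel version is a placeholder, the package file names are illustrative, and the commands that would modify a system are left commented out so the sequence can be read without being run:

```shell
# Outline of the recovery steps (steps 2-4 run inside rescue mode's chroot).
# KVER is a placeholder, not taken from the thread; use the kernel version
# that your F10 upgrade actually installed.
KVER="2.6.27.5-117.fc10.x86_64"
INITRD="/boot/initrd-${KVER}.img"

# chroot /mnt/sysimage                   # step 2: enter the installed system
# rpm -Uvh mkinitrd-*.rpm nash-*.rpm     # step 3: packages from bug 476818 (names illustrative)
echo "mkinitrd -f $INITRD $KVER"         # step 4: the initrd rebuild command
# reboot                                 # step 5
```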