User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4

The Nvidia RAID volume was properly recognized by Anaconda during install (/dev/mapper/nvidia_xxxxxxxx1) and I installed to it. After finishing the install and rebooting, the drives are mounted as /dev/sda1, /dev/sda3, etc. instead of the /dev/mapper/nvidia_xxxxx paths.

For more info from several others experiencing the issue, please see these forum threads:
http://forums.fedoraforum.org/showthread.php?t=206206
http://forums.fedoraforum.org/showthread.php?t=206284

Reproducible: Always

Actual Results:
Volumes are mounted on /dev/sdax paths instead of /dev/mapper/nvidia_xxxxxx paths.

Expected Results:
Volumes are mounted on /dev/mapper/nvidia_xxxxxx paths.

# dmraid -v -d -r
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
INFO: RAID devices discovered:
/dev/sdb: nvidia, "nvidia_dceibdfa", mirror, ok, 625142446 sectors, data@ 0
/dev/sda: nvidia, "nvidia_dceibdfa", mirror, ok, 625142446 sectors, data@ 0

# mount
/dev/sda3 on / type ext3 (rw)
/proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Note that this issue is new to Fedora 10. I used Fedora 9 on this same computer with no issue. This is Fedora 10 x86_64 installed on an Athlon64 machine with an ASUS M2N-SLI motherboard. I'm happy to provide any logs or troubleshooting data, or run tests...just let me know what you need.
Same problem. My configuration is as follows: x86_64 AMD Phenom X4, ASUS M3A motherboard. Additional information: volumes mounted correctly as /dev/mapper/nvidia_xxxxxx during installation and when booting in rescue mode. In normal mode, volumes are mounted as /dev/sdax.
Also having the same problem. Fedora 10 (from DVD, no updates) on a Jetway JNC62K with an nvidia softRAID controller and 2 mirrored disks.

[root@localhost ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb3            236109048   1219308 222896016   1% /
/dev/sdb1               124427     13184    104819  12% /boot
tmpfs                   964108         0    964108   0% /dev/shm

These should be /dev/mapper/... entries and not /dev/sdb* or /dev/sda*.

[root@localhost ~]# dmraid -r
/dev/sdb: nvidia, "nvidia_ifaddeda", mirror, ok, 488397166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_ifaddeda", mirror, ok, 488397166 sectors, data@ 0

[root@localhost ~]# modprobe dm-mirror
FATAL: Module dm_mirror not found.

[root@localhost ~]# /sbin/dmraid -ay -i -p -t
nvidia_ifaddeda: 0 488397166 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0 1 handle_errors

resolve_dm_name nvidia_ifaddeda in /etc/rc.sysinit resolves to an empty string.

[root@localhost ~]# /sbin/dmraid -v -v -ay -i -p nvidia_ifaddeda
NOTICE: /dev/sdb: asr discovering
...
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
...
NOTICE: /dev/sda: via discovering
NOTICE: added /dev/sdb to RAID set "nvidia_ifaddeda"
NOTICE: added /dev/sda to RAID set "nvidia_ifaddeda"
RAID set "nvidia_ifaddeda" was not activated

While running the above command, the following is logged in /var/log/messages:

Dec 5 00:39:14 localhost kernel: device-mapper: table: 253:0: mirror: Device lookup failure
Dec 5 00:39:14 localhost kernel: device-mapper: ioctl: error adding target to table

Here I am lost. Anyone?
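The empty result from resolve_dm_name is the crux here: rc.sysinit has to map the dmraid set name to a /dev/mapper node, and when that lookup returns nothing the set never gets activated. A minimal sketch of the name-extraction step, assuming only the `dmraid -r` output format shown above (the parsing below is my illustration, not the actual rc.sysinit code):

```shell
#!/bin/sh
# Sketch: pull the RAID set name out of `dmraid -r`-style output, which is
# roughly what the boot scripts need before looking for /dev/mapper/<name>.
# The sample input is copied from this report.

sample_output='/dev/sdb: nvidia, "nvidia_ifaddeda", mirror, ok, 488397166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_ifaddeda", mirror, ok, 488397166 sectors, data@ 0'

# Extract the quoted set name from each line and de-duplicate.
set_name=$(printf '%s\n' "$sample_output" | sed -n 's/.*"\([^"]*\)".*/\1/p' | sort -u)

echo "RAID set: $set_name"
# On a working system the activated node would then be /dev/mapper/$set_name.
```

On the affected systems this name resolves fine, but the subsequent activation fails with the "Device lookup failure" seen in the log.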
dm_mirror is linked into the Fedora kernel, hence the module load failure. See the output of "dmsetup targets"; it should show up there. Please attach the bzip2'ed + tar'ed output files of "dmraid -rD" (the *.{dat,offset,size} files) for analysis. This may be related to bz#471689 too, where a nash bug prevents the named RAID set from being activated...
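Since dm_mirror is built in rather than modular, the check is whether the "mirror" target is registered with device-mapper at all. A sketch, demonstrated on a captured sample rather than a live system (`dmsetup targets` needs root; the sample lines and version numbers below are hypothetical but typical):

```shell
#!/bin/sh
# Sketch: check whether the device-mapper "mirror" target is available,
# whether built into the kernel or loaded as a module. On a real system
# you would feed in `dmsetup targets` output; here we parse a sample.

sample_targets='striped          v1.0.5
linear           v1.0.3
error            v1.0.1
mirror           v1.0.20'

if printf '%s\n' "$sample_targets" | grep -q '^mirror'; then
    echo "mirror target present"
else
    echo "mirror target missing"
fi
```

If "mirror" is absent from the real `dmsetup targets` output, the "Device lookup failure" would have a different cause than the one discussed here.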
Created attachment 325875 [details] output of dmraid -rD As requested, the tar+bzip2ed output of dmraid -rD.
Created attachment 325887 [details] Output of dmraid -rD Output of dmraid -rD, as requested.
Created attachment 325895 [details] output of dmraid -rD as requested
Created attachment 326000 [details] Output of dmraid -rD Output of dmraid -rD, as requested.
Created attachment 326007 [details] Output of dmraid -rD Same thing happens to me with a striped diskset (RAID0) on M57SLI-S4 mainboard with Fedora 10 (2.6.27.5-117.fc10.x86_64).
I'm affected by this too. Watch out when resolving this bug. When the RAID set is not initialized, the second disk of the set is used. If the RAID set _is_ initialized, metadata is read from the first disk, destroying the changes on the second. I found this out the hard way after booting from the rescue disk (which initializes the set fine) and making changes on the disk.
Roel- I noticed this, too, and I'm worried about the implications when this bug is fixed. Hopefully there will either be a simple way to rebuild the array from the second drive to fix the problem, or Fedora will provide some method/process to get back to a properly working array without serious data loss. Fingers crossed. (Hopefully they don't just publish a new kernel or package that fixes this, gets downloaded through yum, and auto-hoses us.)
One other warning to all experiencing this bug, though I'm not certain whether this is caused by this bug or by another one. I noticed that if I modify my grub.conf file (I tried to remove the "hiddenmenu" line) while the array is not properly initialized (e.g., / mounted from /dev/sda3), my bootloader config is totally hosed. When I reboot, it drops into the GRUB shell and I have to know my exact GRUB config to boot manually again. Has anybody else had this problem (or successfully changed their grub.conf while having this issue)?
Andrew- That is exactly what happened to me. I commented out "hiddenmenu", and then fixing that with the rescue CD hosed my disk. As a precaution you could disconnect the first disk of the array; when you reconnect it, the array should be rebuilt, but I never tried this.
I assume this is another duplicate of bz#471689.
Could be, though the error messages are not the same. Also, because Fedora did work during installation and because the new spiffy bootscreen is hiding all the errors, there might actually be a lot of users out there who do not know they are affected. They will get a nasty surprise when they install the patch. I'm not in a position to try out any more; I needed my PC, so I switched to software RAID instead.
Created attachment 326342 [details] output of dmraid -rD

I have exactly the same problem with RHEL 5.2 (2.6.18-92.1.18.el5). Attached is the output of dmraid -rD. Is it possible that grub does something funny and confuses the fakeraid? It looks like this problem happens only when you boot from the hard drive; if you boot from a rescue CD or install from CD, everything looks fine. I am not an expert :) so I am most likely wrong.
Heinz- This may be a dup of 471689, but as Roel says, it presents differently. Is there any ETA when the fix for 471689 will be released in Fedora 10 packages? I am unable to update my kernel package, modify my grub.conf, or otherwise change my /boot filesystem without rendering my system completely unbootable. Makes me a bit twitchy.
Andrew, Anaconda colleagues are looking into it. No ETA yet but ASAP. Keeping component for the time being until we've got tests with that future fix on Fedora 10.
(In reply to comment #18) Thanks, Heinz. Again, I'm happy to collect any further info or even test patches, workarounds, etc. Just let me know.
I wonder if bug #474074 is also related to these. I'm running Sil and Promise cards using software RAID (2-disk striped RAID sets). During a normal boot lvscan sees the volume groups and activates one VG, but the rest fail. Device-mapper spits out something about striped not being able to parse the destination (probably to do with the volume not being activated properly). The rescue disk works fine (mounts show up as /dev/dm-x instead of /dev/device-mapper/vg-xxx, but at least I can access the data). I've attached tonnes of information to the above bug # if it helps anyone out.
I encountered a similar problem with intel onboard raid, bug #474399. May be related.
I'm gathering that our definitions of "ASAP" may be a little different...
I have an Nvidia based motherboard, and I have found that the solutions from bug #473305 did the trick. I used the scsi_mod.scan=sync on the DVD install image and on the installed kernel. Then updated only nash and mkinitrd, then all the rest of the updates. This way, the new kernel was built with the mkinitrd fix.
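The grub.conf half of that workaround can be sketched as follows. This is only an illustration against a temporary sample file (the sample entry below is hypothetical); on a real system the file is /boot/grub/grub.conf and you would back it up first, and the package updates (nash and mkinitrd first, then the rest) are done with yum and are not shown:

```shell
#!/bin/sh
# Sketch: append scsi_mod.scan=sync to each kernel line of a grub.conf so
# SCSI scanning is synchronous before dmraid runs (per bug #473305).

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
title Fedora (2.6.27.5-117.fc10.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.27.5-117.fc10.x86_64 ro root=/dev/sda3 rhgb quiet
        initrd /initrd-2.6.27.5-117.fc10.x86_64.img
EOF

# Append the parameter to every kernel line that does not already have it.
sed -i '/^[[:space:]]*kernel /{/scsi_mod\.scan=sync/!s/$/ scsi_mod.scan=sync/}' "$tmp"

grep 'kernel ' "$tmp"
rm -f "$tmp"
```

The same parameter has to be passed on the installer's boot line by hand; it cannot be added with sed there, obviously.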
If this is fixed in mkinitrd would a fresh network install or a newly built jigdo ISO solve the problem?
I was able to install Rawhide (F-11) on such an nvidia RAID 0 configuration, but I haven't re-tested on F-10... Is this bug supposed to be closed now? Or are we waiting for confirmation on F-10?
I don't feel empowered to close this bug, but unless something is silently missing, it should be closed, at least to avoid false positive search results.
Hi Nicolas, yes - someone with appropriate hardware needs to test on the release the bug is filed against (F-10). If there was ever an F-11 version of this BZ, that could be closed based on your testing, but this one requires testing on F-10.
I retested this after upgrading to kernel 2.6.27.25-170.2.72.fc10.x86_64 on F-10 and the problem remains. dmraid shows the same output as before and there is no /dev/mapper/nvidia_xxxxxxxx entry. Has anyone heard that this should be fixed in F-10? Or are we assuming this because it was fixed in F-11 (which also works for me)?
This message is a reminder that Fedora 10 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 10. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '10'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 10's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 10 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
This is still fixed in the latest Fedora release; please move.
Fedora 10 changed to end-of-life (EOL) status on 2009-12-17. Fedora 10 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. Thank you for reporting this bug and we are sorry it could not be fixed.