Bug 499733

Summary: Anaconda ignores disks with incomplete BIOS RAID metadata -- linux nodmraid is ineffective

Product: [Fedora] Fedora
Component: anaconda
Version: 11
Hardware: x86_64
OS: Linux
Status: CLOSED RAWHIDE
Severity: urgent
Priority: low
Reporter: Daniel Duggan <seve141>
Assignee: Hans de Goede <hdegoede>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: awilliam, benl, Bert.Deknuydt, bryan.christ, crash70, daviddoraine, fil_hor, fred.wimberly, fretless_kb, james, jlaska, maurizio.antillon, mrsam, noloader, offlimitgod, pasik, rjgleits, rmaximo, rtyler, tore, u60149431, vanmeeuwen+fedora, yasin.gedik
Keywords: CommonBugs
Whiteboard: https://fedoraproject.org/wiki/Common_F11_bugs#dmraid-nodmraid
Doc Type: Bug Fix
Last Closed: 2009-08-16 18:47:52 UTC
Bug Blocks: 513462
Description
Daniel Duggan
2009-05-07 19:49:40 UTC
Can we adjust the priority? I have two systems on which the Beta and Preview ISOs won't recognize my SATA drives (actually, anaconda thinks they are part of a dmraid configuration... sounds like we broke one thing to fix another!). It really would have been nice to have participated in Beta & Preview testing!

Can you attach /tmp/storage.log and /tmp/anaconda.log from when you attempt an install with nodmraid given?

Not sure how to attach them. Ctrl+Alt+F2 shows anaconda 11.5.0.47 complaining about /dev/sdb: pdc & isw formats found (using isw), followed by an error on the isw device for RAID set isw_eifaajeie_RAID00. This is followed by the same message for /dev/sda. I just deleted all partitions on these drives using gparted. You need to teach anaconda to stop assuming the devices are %^*&#@ raid!! There is a dump in bug 499321 from a preupgrade attempt on the same system.

(In reply to comment #2)
> Can you attach /tmp/storage.log and /tmp/anaconda.log from when you attempt an
> install with nodmraid given?

If you can provide me with instructions on how to get those two logs, I will attach them as soon as possible. Thanks.

One final attempt proved fruitful, thanks to Luckyy at FedoraForums. When anaconda posted the message about no valid devices found, I did Ctrl+Alt+F1 and found the same messages about pdc/isw formats on sda & sdb. I think Ctrl+Alt+F2 got me to a prompt, where I ran "dmraid -r -E" several times to clean all the errant formats from sda and sdb. A reboot then produced a successful install. I have no idea what the pdc and isw formats are, but it sure would have been nice if gparted had cleaned them up. I would like some additional info if available.

This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'.
More information and the reason for this action are here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

I've had the same problem when attempting to install F11 x86 on my single-drive system: when getting to the storage configuration step, the drive is nowhere to be seen. The drive (and its partitions) is detected by the kernel, but the log messages seem to indicate that Anaconda erroneously detects it as part of a dmraid array. It is not, and never has been. I'm not used to the "dmraid" command, but as far as I can tell it does not detect the drive as being part of any array:

[root@wrath ~]# dmraid -r -D /dev/sda
ERROR: pdc: identifying /dev/sda, magic_0: 0x4d4705cb/0xa012623, magic_1: 0x42e3d4e/0x97741f00, total_disks: 255
no raid disks and with names: "/dev/sda"

I had no such problems when installing F10 on the same hardware. I'll attach /tmp/anaconda.log, /tmp/storage.log, /proc/partitions, and the output from "dmesg" after having clicked my way (just next-next-next, not changing any defaults) to the storage configuration step. I will try booting with "nodmraid" and, if that doesn't help, "dmraid -r -E /dev/sda", and report whether it did the trick.

Created attachment 347734 [details]
/tmp/anaconda.log from the failed installation attempt
Created attachment 347735 [details]
/tmp/storage.log from the failed installation attempt
Created attachment 347736 [details]
dmesg output from the failed installation attempt
Created attachment 347737 [details]
/proc/partitions from the failed installation attempt
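The misdetections in this thread come from BIOS RAID formats (pdc, isw, asr) that store their metadata in the last sectors of a disk, which is why a drive can look clean in gparted yet still be claimed as a RAID member. The following is a hedged sketch of inspecting that region; it is not from the bug report, and it runs against a scratch image file with a planted fake signature (disk.img and FAKE-ISW-SIGNATURE are illustrative stand-ins). On real hardware you would point the read-only inspection at e.g. /dev/sda instead.

```shell
# Hedged sketch: BIOS RAID metadata typically lives at the end of the disk.
# Simulate that with a 4 MiB scratch image and a fake signature in its tail.
IMG=disk.img
dd if=/dev/zero of="$IMG" bs=1M count=4 status=none          # 4 MiB scratch "disk"
printf 'FAKE-ISW-SIGNATURE' | \
  dd of="$IMG" bs=1 seek=$((4*1024*1024 - 1024)) conv=notrunc status=none

# Inspect the last 1024 bytes, where such signatures typically sit.
# On a real device, get the size with: blockdev --getsize64 /dev/sda
tail -c 1024 "$IMG" | od -c | head -n 5
tail -c 1024 "$IMG" | grep -aq 'FAKE-ISW-SIGNATURE' && echo "signature found"
```

Because the check only reads the device, it is safe to run before deciding whether any of the destructive cleanups discussed later in the thread are needed.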
Booting with "nodmraid" didn't help, and "dmraid -r -E /dev/sda" doesn't seem to help either:

[root@wrath ~]# dmraid -r -E /dev/sda
ERROR: pdc: identifying /dev/sda, magic_0: 0x4d4705cb/0xa012623, magic_1: 0x42e3d4e/0x97741f00, total_disks: 255
no raid disks and with names: "/dev/sda"

It doesn't appear to be possible to install F11 on this machine, but I'll keep trying to figure something out.

Tore

Hi all, I am having the same issue. I have two identical SATA disks and they are not configured as RAID. I checked the BIOS and RAID is disabled for sure. When I choose a custom layout, the partitions on the first disk are correctly displayed but the second disk is read as raw. The first disk contains a Windows partition. My motherboard is an Asus P4P800. Has anyone found a workaround? Thanks, Yasin

I made it work by writing zeroes over the last few MB of the problematic hard drive (using dd). After that Anaconda/dmraid no longer believed it to be part of a dmraid array, and I could install normally.

Tore

*** Bug 506654 has been marked as a duplicate of this bug. ***

Wouldn't it be possible for the installer to actually ASK whether the disks found are part of a RAID system or not? This would have saved a lot of pain.

Perhaps anaconda is not getting the nodmraid option from the kernel. I have disassembled the initrd.img file (it's gzipped cpio) but can't locate the source code for the init program used in it. That currently appears to be the key, as I think it starts anaconda. I located some source RPMs that may have value, but I can't install them on my F9 system: rpmlib version too low. But since I can't install F11...

I don't understand how this bug was assigned low priority. No workaround has emerged and the problem prevents installation. Bravo to Tore Anderson for writing zeroes to the end of one of his file systems, but I am not about to do that. Maybe somewhere it says that command line options for anaconda being passed through the kernel have to have a particular format... some prefix to make sure they're routed to init and then to anaconda. Anyway, peace to everyone.

*** Bug 501814 has been marked as a duplicate of this bug. ***

Hi, sorry for the long silence. Yes, F-11 anaconda does not honor the nodmraid option; I can confirm this is a bug. I'll be working on a fix.

I'll be using this bug to track all issues stemming from the problems caused by the following behavioural change between F-10 and F-11: in F-10, drives which contain invalid/stale dmraid (BIOS RAID) metadata, or which were part of an incomplete BIOS RAID set, would just be seen as raw disks, whereas in F-11 these drives are ignored. In F-10, in cases where dmraid was detected unwantedly (a complete set, but disabled in the BIOS, for example), BIOS RAID detection could be avoided with nodmraid. In F-11 this option currently does not work; this is bug 499733. Once 499733 is fixed you can work around your issue using the nodmraid installer cmdline option.

Note that a better solution would be to remove the unwanted BIOS RAID metadata from the disks; this can be done using "dmraid -x". Be sure to make backups before doing this! "dmraid -x" should leave your data intact, but better safe than sorry. Also, only do this if you really want your disks to not be part of a BIOS RAID set; if, for example, Windows is currently using the disks as a BIOS RAID set, you do not want to do this!

*** Bug 508554 has been marked as a duplicate of this bug. ***

OK, to be clear: the summary of this bug that I'm setting now, "Anaconda ignores disks with incomplete BIOS RAID metadata", is not accurate; this bug is for tracking the issue of anaconda failing to honor the "nodmraid" cmdline option. The fact that Anaconda ignores disks with incomplete BIOS RAID metadata in Fedora 11 and newer is not a bug; as explained in comment #20, the real fix for this is to remove the (often stale) BIOS RAID metadata from your disks.
However, for people who do not want to do that, or cannot for some reason, the nodmraid cmdline option should make anaconda ignore the metadata and just use the raw disks (like older versions did). That is what this bug is about, and once it is resolved this bug will be closed. I'm using the inaccurate (more like blatantly wrong) summary to make this bug easier to find for people with the same issue.

*** Bug 515129 has been marked as a duplicate of this bug. ***

Re comment #20: dmraid -x is not a workaround for this bug. I have two drives that were, ages ago, originally formatted by an Adaptec HBA in RAID mode, but when I discovered that, way back when, Fx did not support Adaptec hardware RAID, I turned it off and switched to softraid. Now I can't install F11 because Anaconda does not see the hard drives, and dmraid -x does not work:

[root@commodore ~]# dmraid -r
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sdb
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sda
no raid disks
[root@commodore ~]# dmraid -x /dev/sdb
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sdb
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sda
no raid disks and with names: "/dev/sdb"
[root@commodore ~]# dmraid -r -E /dev/sdb
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sdb
no raid disks and with names: "/dev/sdb"

(In reply to comment #24)
> Re comment #20: dmraid -x is not a workaround for this bug.
> [...]

This is a dmraid bug; please file a bug against dmraid for this. Thanks!

The nodmraid option has returned / has been fixed in anaconda-12.12-1, which will be in Fedora 12 Alpha.

With reference to bug 515129: in my case my disks did show up as a dmraid set in the BIOS. Adding nodmraid did not work. After clearing the RAID config with dmraid -rE I was then able to install F11. Thanks very much for your comments.

*** Bug 505488 has been marked as a duplicate of this bug. ***

*** Bug 526453 has been marked as a duplicate of this bug. ***

*** Bug 528865 has been marked as a duplicate of this bug. ***

Adding keyword 'CommonBugs'. This will allow someone from the QA team (usually myself or awilliam) to help draft some documentation for this issue on https://fedoraproject.org/wiki/Common_F12_bugs (even though this is reported against F11, it seems to still occur on F12 too).

According to comment #26, this should be fixed in F12. Indeed, all the recently-filed dupes seem to be from F11. Please only document on the F11 common bugs page.
-- Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers

Update from Hans: superseding his above suggestion, 'dmraid -rE' is a better method than 'dmraid -x' for removing stale metadata.

-- Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers

dmraid -rE also fails to remove my stale metadata:

[root@commodore ~]# dmraid -rE
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sdb
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sda
no raid disks

(In reply to comment #35)
> dmraid -rE also fails to remove my stale metadata.
> [...]

Hmm, that is not good. Note that this specific issue is being tracked in bug 517761.

I use CentOS 5.2. I disabled the S.M.A.R.T. entry in the BIOS, and at the boot prompt I wrote "linux nodmraid". And that was all!

Found this information on derkeiler.com (http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.pc.hardware.storage/2006-02/msg01143.html). The commenter Arno Wagner suggests the following, which worked for me: "I just had a look at the mdadm man-page. It seems deleting the metadata is as easy as a "mdadm --zero-superblock <disk/partition>". Of course you have to stop the RAID array the disk/partition is part of first. "mdadm --stop /dev/md<device>" should do that." NOTE: I believe this will destroy all data.

FWIW, I just had a disk where dmraid -rE doesn't work; it claims to be working, but a following dmraid -r still detects the 'array', and anaconda refuses to install to it. It's definitely a dmraid, not an mdraid. Now zeroing out the whole disk...
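Before any of the destructive cleanups discussed in this thread ("dmraid -x", "dmraid -rE", mdadm --zero-superblock, or zeroing with dd), Hans's advice in comment #20 to make backups first can be made concrete by saving the end-of-disk region where the metadata lives. The following is a hedged sketch under illustrative assumptions: raiddisk.img stands in for a real device, and tail-backup.bin is a hypothetical backup file name.

```shell
# Hedged sketch: back up the last 1 MiB of a disk before erasing BIOS RAID
# metadata, so the region can be restored if a wipe removes too much.
# DEV is a scratch image here; on real hardware it would be e.g. /dev/sdb,
# and the size would come from: blockdev --getsize64 "$DEV"
DEV=raiddisk.img
dd if=/dev/urandom of="$DEV" bs=1M count=8 status=none       # 8 MiB scratch "disk"

SIZE=$(stat -c %s "$DEV")                                    # device size in bytes
dd if="$DEV" of=tail-backup.bin bs=512 \
   skip=$(( SIZE/512 - 2048 )) count=2048 status=none        # last 2048 sectors = 1 MiB

# Restoring later (only if needed):
#   dd if=tail-backup.bin of="$DEV" bs=512 seek=$(( SIZE/512 - 2048 )) conv=notrunc

tail -c 1048576 "$DEV" | cmp -s tail-backup.bin - && echo "backup matches tail"
```

The final cmp just confirms the backup really captured the tail region; on a real disk you would keep tail-backup.bin somewhere off the device being modified.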
(In reply to comment #39)
> fwiw, I just had a disk where dmraid -rE doesn't work; it claims to be
> working, but a following dmraid -r still detects the 'array', and anaconda
> refuses to install to it. it's definitely a dmraid not an mdraid. now
> zeroing out the whole disk...

I just noticed the same. This was a disk from an LSI MegaRAID array (actually a Dell PERC, but that's rebranded LSI), and "dmraid -r -E" didn't work. I had to do this to clear the last 512 kB of the disk:

dd if=/dev/zero of=$YOUR_DEV bs=512 seek=$(( $(blockdev --getsz $YOUR_DEV) - 1024 )) count=1024

After that, Fedora 17 happily re-initialized the disk. Btw, anaconda gives a Python stack trace and crashes when it refuses to use the disk and there are thus no usable/installable disks in the system.

(In reply to Adam Williamson from comment #39)
> fwiw, I just had a disk where dmraid -rE doesn't work; [...]

I had success with the following on a used hard disk (an Intel SSD at /dev/sda):

dd if=/dev/zero of=/dev/sda bs=512 count=2048

And then, in fdisk /dev/sda:

n (new partition, select any type)
w (write it)

That seems to stop the braindead behavior when trying to install Fedora 26. It's kind of sad that it's 2017 and we are still doing work for the programs. Programs are supposed to do the work for us.
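The "zero the last 512 kB" one-liner above can be spelled out step by step. This is a hedged sketch, demonstrated on a scratch image file rather than a real device (scratch.img is an illustrative name); stat stands in for blockdev --getsz, which only works on block devices. The arithmetic is the same as in the comment: 1024 sectors of 512 bytes = 512 KiB wiped from the end.

```shell
# Hedged sketch: zero the last 512 KiB of a "disk", where BIOS RAID
# metadata usually sits. On real hardware get the sector count with:
#   blockdev --getsz "$DEV"    (reports size in 512-byte sectors)
# This destroys whatever metadata is stored at the end of the device.
DEV=scratch.img
dd if=/dev/urandom of="$DEV" bs=1M count=4 status=none       # 4 MiB fake disk

SECTORS=$(( $(stat -c %s "$DEV") / 512 ))                    # size in 512-byte sectors
dd if=/dev/zero of="$DEV" bs=512 conv=notrunc \
   seek=$(( SECTORS - 1024 )) count=1024 status=none         # wipe final 1024 sectors

# The final 512 KiB should now be all zero bytes:
[ "$(tail -c 524288 "$DEV" | tr -d '\0' | wc -c)" -eq 0 ] && echo "tail zeroed"
```

Note conv=notrunc, which matters when practicing on an image file (without it dd truncates the file first); on a block device it is harmless.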