Bug 833314

Summary: Anaconda ignores disks with incomplete BIOS RAID metadata
Product: Fedora
Component: anaconda
Version: 17
Hardware: i686
OS: Linux
Status: CLOSED WONTFIX
Severity: urgent
Priority: unspecified
Reporter: mgoodman.d
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: anaconda-maint-list, g.kaviyarasu, jonathan, stev, vanmeeuwen+fedora
Doc Type: Bug Fix
Type: Bug
Last Closed: 2012-06-21 22:57:20 UTC
Attachments:
  Anaconda log of Fedora 17 live
  Program log of Fedora 17 live
  Storage log of Fedora 17 live
  /var/log/message file from Fedora 17 live

Description mgoodman.d 2012-06-19 08:33:38 UTC
Description of problem: I am unable to install from the Fedora 17 i686 Live CD.  Anaconda will not recognize the hard drive that I want to install Fedora on.  When I try to install to the disk, it says /dev/sdd has BIOS metadata on the drive and is going to ignore it.  I have Fedora 15 on the hard drive now.


Version-Release number of selected component (if applicable):
Fedora 17 Live CD

How reproducible:
Every time I try to install Fedora 17

Steps to Reproduce:
1. Boot from the Fedora 17 Live CD
2. Click on Install to Hard Drive
  
Actual results:
Ignoring two drives that have BIOS metadata: /dev/sdb and /dev/sdd.

Expected results:
Expected to see all of my hard drives available to install Fedora 17 to.

Additional info:  Some other people have had a similar issue and recommended running dmraid -rE /dev/sdd.  When I run the command I get this:
ERROR: pdc: identifying /dev/sdd, magic_0: 0xe1e2e3e4/0xf9650624, magic_1: 0xd9dadbdc/0x0, total_disks: 0
no raid disks and with names: "/dev/sdd"

Other users suggested running mdadm --zero-superblock /dev/sdd.  Here is the output of that command:
mdadm: Couldn't open /dev/sdd for write - not zeroing

I tried putting nomdraid on the Linux boot line, but it says it can't locate the Fedora 17 iso and drops me to a dracut command line.
Mobo: MSI K9A2 Platinum
CPU: AMD Athlon
SATA Controllers:
00:12.0 SATA controller: ATI Technologies Inc SB600 Non-Raid-5 SATA
03:00.0 RAID bus controller: Promise Technology, Inc. PDC42819 [FastTrak TX2650/TX4650]

Comment 1 David Lehman 2012-06-19 14:18:23 UTC
(In reply to comment #0)
> Description of problem: I am unable to install Fedora 17 i686 Live CD.

Please collect the following log files after hitting the error and attach them to this bug one at a time as attachments of type text/plain:

 /tmp/anaconda.log
 /tmp/storage.log
 /tmp/program.log
 /var/log/messages

Thanks.

Comment 2 mgoodman.d 2012-06-20 20:11:09 UTC
Created attachment 593303 [details]
Anaconda log of Fedora 17 live

Comment 3 mgoodman.d 2012-06-20 20:11:54 UTC
Created attachment 593304 [details]
Program log of Fedora 17 live

Comment 4 mgoodman.d 2012-06-20 20:12:44 UTC
Created attachment 593305 [details]
Storage log of Fedora 17 live

Comment 5 mgoodman.d 2012-06-20 20:13:26 UTC
Created attachment 593306 [details]
/var/log/message file from Fedora 17 live

Comment 6 David Lehman 2012-06-20 23:18:54 UTC
Try running 'wipefs /dev/sdb' and 'wipefs /dev/sdd' and posting the full output here.

Comment 7 David Lehman 2012-06-20 23:25:18 UTC
To be clear, those wipefs commands will not do anything to your disks -- all they do is print out what the utility sees. If it sees what we expect it to see I will give you some commands to remove the stale metadata. Removing the stale metadata is what you're going to want to do in the end. If you leave it there, this will come up over and over again. Better to clean up the mess and move along. For now, though, we're just seeing what we're dealing with.
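
For reference, here is a rough summary of which wipefs invocations are read-only and which actually erase something (using /dev/sdd from this bug as the example device):

  # Read-only: lists any filesystem/raid signatures wipefs can see, changes nothing
  wipefs /dev/sdd

  # Destructive: erases only the signature found at the given offset
  wipefs -o <offset> /dev/sdd

  # Destructive: erases every signature wipefs can find on the disk
  wipefs -a /dev/sdd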

Comment 8 mgoodman.d 2012-06-21 04:22:45 UTC
#wipefs /dev/sdd 
wipefs: WARNING: /dev/sdd: appears to contain 'dos' partition table
offset               type
----------------------------------------------------------------
0xaea8cd6200         promise_fasttrack_raid_member   [raid]

#wipefs /dev/sdb 
wipefs: WARNING: /dev/sdb: appears to contain 'dos' partition table
offset               type
----------------------------------------------------------------
0x1d1c110e200        promise_fasttrack_raid_member   [raid]

Comment 9 David Lehman 2012-06-21 14:01:53 UTC
If you want to remove the obsolete raid metadata, you can do so by running the following commands:

  WARNING: This will remove the raid signatures from your disks, so these disks
           will no longer be recognizable as members of any raid set. There is
           no "undo" button.

  wipefs -o 0xaea8cd6200 /dev/sdd

  wipefs -o 0x1d1c110e200 /dev/sdb
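
Optional, but a quick way to double-check afterwards (all read-only):

  # Should no longer show a promise_fasttrack_raid_member signature on either disk
  wipefs /dev/sdd
  wipefs /dev/sdb

  # Should report that no raid disks were found
  dmraid -r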

Comment 10 mgoodman.d 2012-06-21 19:33:43 UTC
That did it.  When I do dmraid -r it says no raid disks, and wipefs doesn't report any information.

I do have a question though.  I tried going into the BIOS and running the BIOS RAID setup on the hard drives to see if I could clear the metadata from them.  The other two disks, /dev/sda and /dev/sdc, were in their own JBOD.  The BIOS RAID said that /dev/sdb and /dev/sdd were free to make a RAID configuration.  I thought that was weird, since it was inverted from how dmraid was seeing things.

Thanks for the help.

Comment 11 David Lehman 2012-06-21 22:57:20 UTC
(In reply to comment #10)
> I do have a question though.  I tried going into the BIOS and running the
> BIOS RAID setup on the hard drives to see if I could clear the metadata from
> them.  The other two disks, /dev/sda and /dev/sdc, were in their own JBOD.
> The BIOS RAID said that /dev/sdb and /dev/sdd were free to make a RAID
> configuration.  I thought that was weird, since it was inverted from how
> dmraid was seeing things.

I'm assuming you did this before you wiped the metadata from sdb and sdd. The firmware probably doesn't care if the disks have raid metadata or not -- only whether they are part of a valid/known raid set, which they were not (as I understand it).

> 
> Thanks for the help.

No problem. I'm closing this as WONTFIX because we intentionally ignore such disks in anaconda to force users to properly address whatever the issue is, be that obsolete metadata or a broken firmware raid configuration.

Comment 12 David Shea 2015-04-16 12:53:22 UTC
*** Bug 1210671 has been marked as a duplicate of this bug. ***