Bug 506861 - Anaconda must warn (print (kickstart) or dialog) when ignoring BIOS RAID disks
Product: Fedora
Classification: Fedora
Component: anaconda
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Hans de Goede
QA Contact: Fedora Extras Quality Assurance
Duplicates: 547560
Reported: 2009-06-18 19:37 EDT by erikj
Modified: 2010-02-18 09:49 EST
CC: 5 users
Fixed In Version: anaconda-13.27-1
Doc Type: Bug Fix
Blocks: 560932
Last Closed: 2010-02-18 09:49:26 EST

Attachments:
anaconda.log (5.13 KB, text/plain), 2009-06-18 19:37 EDT, erikj
program.log (358 bytes, text/plain), 2009-06-18 19:38 EDT, erikj
storage.log (7.43 KB, text/plain), 2009-06-18 19:38 EDT, erikj
Description erikj 2009-06-18 19:37:25 EDT
In Fedora 11, I tried to perform an installation.  I was very surprised 
when anaconda didn't provide me with any disks.  It just seemed like 
the installer didn't think there were any.

So I tried various options to the kernel command line to try to work 
around this.

I later found, when I shelled out while anaconda was running (hit
ENTER on the terminal window when using the vnc install mode), that
the disk was indeed present in the install environment.  I could use 
parted on it, I could re-partition it, etc.  However, anaconda just 
seemed blind to it.

At that point, I started sniffing around the logs.  I saw this:

[2009-06-16 20:31:17,816]    DEBUG: type detected on 'sdb' is 'ddf_raid_member'
[2009-06-16 20:31:17,817]    DEBUG: getFormat('ddf_raid_member') returning DMRaidMember instance

I could further confirm it by shelling out from anaconda and typing this:

sh-4.0# dmraid -r
/dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 285155328 sectors, data@ 0

However, simply using dmraid with -r and -E didn't work; it produced errors,
so I couldn't even use dmraid to clear the problem:

sh-4.0# dmraid -E -r
Do you really want to erase "ddf1" ondisk metadata on /dev/sda ? [y/n] :y
ERROR: ddf1: seeking device "/dev/sda" to 75169657520128
ERROR: writing metadata to /dev/sda, offset 146815737344 sectors, size 0 bytes returned 0
ERROR: erasing ondisk metadata on /dev/sda

So now I was perplexed.  Here I had discovered there was indeed dmraid
metadata on it...  But I couldn't even erase it using dmraid.

What I finally did was start up parted and change the partition label on
the problem disk from msdos to gpt.  I then switched it back to msdos.  
I did this knowing that gpt stores label info at the front and back so I 
suspected it might wipe out the errant raid label.  This indeed worked.  
After doing that, anaconda properly allowed me to choose the storage.
(That was a quick hack; I was going to try dd'ing the front and back
if the parted msdos->gpt->msdos trick didn't wipe it, but I was too lazy
to figure out the syntax.)
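For reference, the dd-based front-and-back wipe alluded to above could look
something like the following sketch. It is demonstrated here against a
scratch image file rather than a real disk; the device name /dev/sdX, the
16 MiB image size, and the one-MiB wipe size are illustrative assumptions,
not values from this bug.

```shell
DISK=scratch.img   # stand-in for the real disk, e.g. /dev/sdX (hypothetical)

# Build a scratch image filled with non-zero data to play the disk's role
dd if=/dev/urandom of="$DISK" bs=1M count=16 2>/dev/null

# Wipe the first MiB (partition table and any front-of-disk metadata)
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc 2>/dev/null

# Wipe the last MiB, where ddf1 BIOS RAID metadata is stored; on a real
# disk, get the size in 512-byte sectors with: blockdev --getsz /dev/sdX
SIZE=$(stat -c %s "$DISK")
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc \
   seek=$(( SIZE / 1048576 - 1 )) 2>/dev/null
```

Needless to say, on a real disk this destroys the partition table along
with the RAID metadata, so it only makes sense on a disk you intend to
repartition anyway.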

My guess is this disk came out of a system that had BIOS-assisted software
RAID.  They just tossed that drive into the system for me to use, not 
knowing that it had metadata poison.

I'd like to be careful here, however.  It's important that we don't
BLINDLY consider disks with dmraid metadata for re-installation, at least
in my opinion.  We don't want to accidentally destroy any dmraid's that 
are legit.

My suggestion would be -- especially when anaconda is presented with a 
system where no disks are found -- to tell the customer that one or more
disks with dmraid metadata were found, and that if those disks are to be
used by the install program, they should be cleaned first.  It might be
nice to mention how -- perhaps a couple of dd commands that erase the front
and back of the disk to truly clean that stuff out.

Finally, we do want to be careful to not cause trouble in situations where
there could be hundreds of disks attached.

Thanks for your consideration.
Comment 1 erikj 2009-06-18 19:37:56 EDT
Created attachment 348581 (anaconda.log)
Comment 2 erikj 2009-06-18 19:38:29 EDT
Created attachment 348582 (program.log)
Comment 3 erikj 2009-06-18 19:38:54 EDT
Created attachment 348583 (storage.log)
Comment 4 David Lehman 2009-08-28 15:50:27 EDT
I am currently planning to add a prompt when anaconda encounters incomplete BIOS RAID arrays to offer the opportunity to reinitialize the disk(s). I expect to have a patch sometime next week.
Comment 5 Hans de Goede 2009-09-16 14:28:35 EDT

We (Dave Lehman and I) have just discussed this, and for now we will
stick to ignoring the disks, for a number of reasons:
1) All these metadata formats are manufacturer specific and tend to get
   extended over time, so sometimes we fail to properly bring up a
   valid BIOS RAID set because we don't completely understand the format.

   In the past this has happened quite often.  In such cases we don't want
   users to start clicking the "sure, go ahead" button and then later
   find out all their data is "gone".

2) As you've discovered yourself, dmraid's metadata removal is not all that
   reliable.

We do, however, plan to add a warning dialog whenever one or more disks are
ignored because of this.  Users can work around the problem by specifying
nodmraid on the installer command line when starting it; this causes all
BIOS RAID data to be ignored during installation, and during subsequent
boots of that installation.
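For example, at the installer boot prompt the option is appended to the
kernel command line; the exact kernel and initrd arguments below are
illustrative and vary by boot method:

```
vmlinuz initrd=initrd.img nodmraid
```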

Keeping this bug open to track the addition of the warning dialog.
Comment 7 Hans de Goede 2009-12-16 05:30:17 EST
*** Bug 547560 has been marked as a duplicate of this bug. ***
Comment 8 Hans de Goede 2010-02-18 09:49:26 EST
This is fixed in anaconda-13.27-1 by this commit:

