Bug 136051 - mdadm.conf generated makes mdadm complain
Product: Fedora
Classification: Fedora
Component: anaconda
Hardware/OS: All Linux
Priority: medium  Severity: medium
Assigned To: Chris Lumens
Mike McLean
Duplicates: 132334 155864
Depends On:
Blocks: FC4Blocker
Reported: 2004-10-17 03:13 EDT by Bill Nottingham
Modified: 2014-03-16 22:49 EDT (History)
CC: 11 users

Doc Type: Bug Fix
Last Closed: 2005-08-15 11:33:30 EDT

Attachments
mkinitrd patch (1.39 KB, patch)
2005-05-13 13:52 EDT, Doug Ledford

Description Bill Nottingham 2004-10-17 03:13:26 EDT
# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
ARRAY /dev/md0 super-minor=0
ARRAY /dev/md1 super-minor=1
# mdadm -A -s
mdadm: only specify super-minor once, super-minor=0 ignored.
mdadm: only specify super-minor once, super-minor=1 ignored
Comment 1 Jeremy Katz 2004-10-17 18:13:13 EDT
Ermm, that doesn't seem right.  According to the man page, you should
have super-minor once per ARRAY line, which is all there is based on
your config.
Comment 2 Aleksandar Milivojevic 2004-11-18 14:03:29 EST
I'm getting exactly the same warning when booting my system.  It has
two disks, and I configured them as mirrors during installation (in
ks.cfg file).  Configuration file looks similar to the one above (it
was generated by Anaconda):

# mdadm.conf written out by anaconda
DEVICE partitions
ARRAY /dev/md0 super-minor=0
ARRAY /dev/md2 super-minor=2
ARRAY /dev/md1 super-minor=1
ARRAY /dev/md3 super-minor=3
Comment 3 David Jansen 2004-11-19 06:45:50 EST
yes, it happened to me too. same setup, raid 1 arrays created in
anaconda; exactly same mdadm.conf as in the previous comments.

everything seems to be working fine otherwise, /proc/mdstat reports
that the arrays are running and other tools cannot find problems either.
Comment 4 Michael Best 2004-11-25 14:01:06 EST
I think the syntax is wrong.  On a Redhat9 machine this is how that
config line was written:

$ grep superminor /etc/mdadm.conf
ARRAY /dev/md1 superminor=1

The man page for mdadm.conf has two conflicting things in it:
        The value is an integer which indicates the minor number
        that was stored in the superblock when the array was created.
        When an array is created as /dev/mdX, then the minor number X
        is stored.

And in the example later on it reads:
       ARRAY /dev/md1 superminor=1
Comment 5 Michael Best 2004-11-25 14:08:47 EST
From rc.sysinit:
           if [ $RESULT -gt 0 -a -x /sbin/mdadm ]; then
               /sbin/mdadm -Ac partitions $i -m dev

The -m dev (--super-minor=dev) also sets the super-minor mode, so if
it's in /etc/mdadm.conf as well, I can see how it reports the error of
it being set twice.
Comment 6 Doug Ledford 2005-05-13 10:34:20 EDT
The super-minor identification of md devices is actually a very weak,
error-prone method of identification.  I would suggest dropping it from
Anaconda altogether and replacing it with UUID-based identification,
writing out the anaconda-generated mdadm.conf file so it reads something like:

ARRAY /dev/md0 uuid=b23f3c6d:aec43a9f:fd65db85:369432df

This is a *much* more robust way to handle things.
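A complete minimal config in that style (reusing the illustrative UUID above, which is not from a real array) would then be:

```
# mdadm.conf written out by anaconda
DEVICE partitions
ARRAY /dev/md0 uuid=b23f3c6d:aec43a9f:fd65db85:369432df
```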
Comment 7 Doug Ledford 2005-05-13 13:20:41 EDT
*** Bug 132334 has been marked as a duplicate of this bug. ***
Comment 8 Doug Ledford 2005-05-13 13:48:58 EDT
I'm proposing the following format for anaconda-generated mdadm.conf ARRAY lines.

Each line should be as follows:

ARRAY $(md_device) level=$(level) num-devices=$(number) uuid=$(uuid) auto=(md|mdp)


md_device=/dev/md$(number) for regular, non-partitionable md devices or
md_device=/dev/md_d$(number) for partitionable md devices

level=raid type (aka multipath, raid1, etc.)

number=number of devices in the array

uuid=actual uuid that is generated on array creation

auto=md for regular md devices, mdp for partitionable devices

As an example of a working mdadm.conf file for stacked md devices:

[dledford@pe-fc4 ~]$ cat /etc/mdadm.conf

# mdadm.conf written out by anaconda
DEVICE partitions /dev/md[0-3]
ARRAY /dev/md0 level=multipath num-devices=2 UUID=34f4efec:bafe48ef:f1bb5b94:f5aace52 auto=md
ARRAY /dev/md1 level=multipath num-devices=2 UUID=bbaaf9fd:a1f118a9:bcaa287b:e7ac8c0f auto=md
ARRAY /dev/md2 level=multipath num-devices=2 UUID=a719f449:1c63e488:b9344127:98a9bcad auto=md
ARRAY /dev/md3 level=multipath num-devices=2 UUID=37b23a92:f25ffdc2:153713f7:8e5d5e3b auto=md
ARRAY /dev/md_d0 level=raid5 num-devices=4 UUID=910b1fc9:d545bfd6:e4227893:75d72fd8 auto=part
[dledford@pe-fc4 ~]$

There are associated changes to mkinitrd that go with this; I'll post
the changes here and start a new bugzilla under mkinitrd for them.  The
mkinitrd changes are reasonable and should be included irrespective of any
possible changes to anaconda, since users can create stacked or partitionable
devices on their own and the current mkinitrd fails to handle them.
Comment 9 Doug Ledford 2005-05-13 13:52:33 EDT
Created attachment 114347 [details]
mkinitrd patch

This makes the initrd work with stacked and partitioned md devices properly.
The raid autorun facility is deprecated and doesn't handle all possible
situations properly, whereas mdadm does a much better job.
Comment 10 Alexandre Oliva 2005-05-13 14:24:44 EDT
I suppose mdadm -A --scan will still start raid devices not listed in
mdadm.conf, as well as degraded raid devices, right?  If not, this would be a
major regression (for degraded raid devices) and inconvenience (having to update
mdadm.conf and rerun mkinitrd for every raid change).
Comment 11 Doug Ledford 2005-05-13 14:31:30 EDT
I should also note that there is no reason to make device files on the initrd
image when using auto= in the mdadm.conf file, as mdadm will make the files as
needed.

Also, for partitionable devices (aka auto=mdp) you can append a number to the
mdp to cause mdadm to create a different number of possible allowed partitions
(the default is to only allow 4 partitions).  So, for example, auto=mdp15 will
create a device with 15 possible partitions.  (The extra devices that get
created will end up on the /initrd filesystem and not the real /dev
filesystem, but udev automatically creates any defined partitions that exist
on the device on the real /dev filesystem, and running fdisk on the array will
trigger recreation of any new devices assuming the device is available to be
revalidated.)

The main point to consider here is that when creating the initial /dev/md_d
device, the mdp{number} determines the maximum number of partitions that the
device will support, and the default is only 4, so if you want support for
more partitions than that you need to specify a larger number.  The md raid
code allocates the needed minor numbers for the array at run time, so if you
create the array with the default 4 partitions and then create a partition
table with 5 different partitions, there simply won't be enough minor numbers
to represent all the partitions.
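For instance, to allow 15 partitions on the stacked raid5 device from the example config above, the ARRAY line would read (this exact line is illustrative, not from a tested config):

```
ARRAY /dev/md_d0 level=raid5 num-devices=4 UUID=910b1fc9:d545bfd6:e4227893:75d72fd8 auto=mdp15
```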

As to your comments Alexandre, I'll check to make sure.
Comment 12 Doug Ledford 2005-05-13 15:28:59 EDT
OK, the current mdadm -A --scan does not assemble devices that are not in the
mdadm.conf file.  I'll fix that.  As for degraded arrays, adding the --run
option causes mdadm to attempt to run degraded arrays, so I'll add that to the
mkinitrd command line.  However, any array that is autodetected but not in the
config file is going to get started with default options, so in order to get
non-default behavior you will have to update the mdadm.conf file and remake
the initrd image.  The mdadm -E --scan option is nice for generating a default
mdadm.conf file, but it has a bug related to overlapping super-minor numbers:
you can have two of any super-minor number, one for partitionable devices and
one for non-partitionable devices, and it doesn't deal with that properly.
I've submitted this problem to the upstream maintainer and will work on a fix
through him.  So, the short answer is yeah, there is some regression; I'll get
on addressing it right now.
Comment 13 Doug Ledford 2005-05-16 13:40:26 EDT
Poking around shows that a person can do something silly (as I told the
upstream maintainer) like

    mdadm --examine --scan --brief --device=partitions | mdadm --assemble --scan -c - --device=partitions

and get it to assemble all devices found at reboot whether they are in a
config file or not.  I find it silly that the examine mode finds all devices
while the assemble mode only assembles devices in the mdadm.conf file
(especially since the mdadm man page touts the fact that mdadm is designed to
be run without a config file).  Anyway, in order for this to be reliable, I've
got to fix a bug in the --examine mode that causes some entries to have
doubled device= entries.  Working on that now.
Comment 14 Jeremy Katz 2005-05-16 16:25:16 EDT
The mkinitrd changes seem a bit on the large side for this stage of the game
for FC4.  Switching to writing out the UUID is a little bit easier, but still
not trivial to do :/

Note that as it stands right now, we don't really support partitioned md
devices at all.
Comment 15 Peter Jones 2005-05-19 15:50:30 EDT
*** Bug 155864 has been marked as a duplicate of this bug. ***
Comment 16 Chris Lumens 2005-05-19 16:49:57 EDT
Committed a fix to anaconda CVS.  See bug 157680 for the mkinitrd portion of
this bug.
Comment 17 Alexandre Oliva 2005-05-22 01:05:28 EDT
The code that generates the uuids doesn't always generate them in a format that
mdadm recognizes.  If any of the 8-hex-digit chunks would start with a zero,
anaconda omits the zero, and then mdadm complains.  I had such
missing-leading-zeros in 3 out of 8 arrays on my main workhorse.

This is what anaconda generates:

ARRAY /dev/md9 level=raid1 num-devices=2 uuid=c01a19d6:78fd9bdf:b0b7a5d:104b4f9
ARRAY /dev/md27 level=raid1 num-devices=2 uuid=cae9b0de:3ec23d99:9f98f76:95fa9f44
ARRAY /dev/md28 level=raid1 num-devices=2 uuid=d0ad3bbc:fc6e59f9:35f2eaf:bee50450

causing boot-time errors such as:

mdadm: bad uuid: uuid=c01a19d6:78fd9bdf:b0b7a5d:104b4f9
mdadm: ARRAY line /dev/md9 has no identity information.
mdadm: bad uuid: uuid=cae9b0de:3ec23d99:9f98f76:95fa9f44
mdadm: ARRAY line /dev/md27 has no identity information.
mdadm: bad uuid: uuid=d0ad3bbc:fc6e59f9:35f2eaf:bee50450
mdadm: ARRAY line /dev/md28 has no identity information.

Fixing mdadm.conf so as to say:

ARRAY /dev/md9 level=raid1 num-devices=2 uuid=c01a19d6:78fd9bdf:0b0b7a5d:0104b4f9
ARRAY /dev/md27 level=raid1 num-devices=2 uuid=cae9b0de:3ec23d99:09f98f76:95fa9f44
ARRAY /dev/md28 level=raid1 num-devices=2 uuid=d0ad3bbc:fc6e59f9:035f2eaf:bee50450

removes the warnings printed when e.g. starting mdmonitor.
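The fix on the anaconda side amounts to zero-padding each 32-bit chunk of the uuid to eight hex digits.  A minimal sketch of the difference, using the uuid values from above (the function names are illustrative, not anaconda's actual code):

```python
def format_uuid_broken(words):
    # Buggy: %x drops leading zeros, producing chunks shorter than
    # eight hex digits that mdadm rejects with "bad uuid".
    return ":".join("%x" % w for w in words)

def format_uuid_fixed(words):
    # Correct: %08x pads every 32-bit word to eight hex digits.
    return ":".join("%08x" % w for w in words)

words = (0xc01a19d6, 0x78fd9bdf, 0x0b0b7a5d, 0x0104b4f9)
print(format_uuid_broken(words))  # c01a19d6:78fd9bdf:b0b7a5d:104b4f9
print(format_uuid_fixed(words))   # c01a19d6:78fd9bdf:0b0b7a5d:0104b4f9
```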
Comment 18 Jeremy Katz 2005-05-23 13:16:23 EDT
Chris fixed this earlier today.
