Bug 136051
Summary: mdadm.conf generated makes mdadm complain

| Field | Value |
|---|---|
| Product | [Fedora] Fedora |
| Component | anaconda |
| Status | CLOSED NEXTRELEASE |
| Severity | medium |
| Priority | medium |
| Version | rawhide |
| Hardware | All |
| OS | Linux |
| Reporter | Bill Nottingham <notting> |
| Assignee | Chris Lumens <clumens> |
| QA Contact | Mike McLean <mikem> |
| CC | crash70, dblistsub-redzilla, gabriello.ramirez, jansen, jheinonen, katzj, k.georgiou, moneta.mace, nobody+pnasrat, oliva, rvokal |
| Doc Type | Bug Fix |
| Last Closed | 2005-08-15 15:33:30 UTC |
| Bug Blocks | 136450 |
Description

Bill Nottingham, 2004-10-17 07:13:26 UTC:

Ermm, that doesn't seem right. According to the man page, you should have super-minor once per ARRAY line, which is all there is based on your config.

---

I'm getting exactly the same warning when booting my system. It has two disks, and I configured them as mirrors during installation (in the ks.cfg file). The configuration file looks similar to the one above (it was generated by Anaconda):

    # mdadm.conf written out by anaconda
    DEVICE partitions
    MAILADDR root
    ARRAY /dev/md0 super-minor=0
    ARRAY /dev/md2 super-minor=2
    ARRAY /dev/md1 super-minor=1
    ARRAY /dev/md3 super-minor=3

---

Yes, it happened to me too: same setup, RAID 1 arrays created in anaconda, and exactly the same mdadm.conf as in the previous comments. Everything seems to be working fine otherwise; /proc/mdstat reports that the arrays are running, and other tools cannot find problems either.

---

I think the syntax is wrong. On a Red Hat 9 machine this is how that config line was written:

    $ grep superminor /etc/mdadm.conf
    ARRAY /dev/md1 superminor=1

The man page for mdadm.conf has two conflicting things in it:

    super-minor=
        The value is an integer which indicates the minor number that
        was stored in the superblock when the array was created. When
        an array is created as /dev/mdX, then the minor number X is
        stored.

And in the example later on it reads:

    ARRAY /dev/md1 superminor=1

---

From rc.sysinit:

    if [ $RESULT -gt 0 -a -x /sbin/mdadm ]; then
        /sbin/mdadm -Ac partitions $i -m dev
        RESULT=$?
    fi

The -m dev (--super-minor=dev) option also sets the super-minor mode, so if super-minor is also set in /etc/mdadm.conf I can see how mdadm ends up reporting the error about it being set twice. The super-minor identification of md devices is actually a very weak method of identification that is very error-prone.
I would suggest dumping that out of Anaconda altogether and replacing it with UUID-based identification, writing out the anaconda-generated mdadm.conf file so it reads something like:

    ARRAY /dev/md0 uuid=b23f3c6d:aec43a9f:fd65db85:369432df

This is a *much* more robust way to handle things.

---

*** Bug 132334 has been marked as a duplicate of this bug. ***

---

I'm proposing the following format for anaconda-generated mdadm.conf ARRAY lines. Each line should be as follows:

    ARRAY $(md_device) level=$(level) num-devices=$(number) uuid=$(uuid) auto=(md|mdp)

where:

    md_device = /dev/md$(number) for a regular, non-partitionable md device,
                or /dev/md_d$(number) for a partitionable md device
    level     = raid type (aka multipath, raid1, etc.)
    number    = number of devices in the array
    uuid      = the actual uuid generated on array creation
    auto      = md for a regular md device, mdp for partitionable devices

As an example of a working mdadm.conf file for stacked md devices:

    [dledford@pe-fc4 ~]$ cat /etc/mdadm.conf
    # mdadm.conf written out by anaconda
    DEVICE partitions /dev/md[0-3]
    MAILADDR root
    ARRAY /dev/md0 level=multipath num-devices=2 UUID=34f4efec:bafe48ef:f1bb5b94:f5aace52 auto=md
    ARRAY /dev/md1 level=multipath num-devices=2 UUID=bbaaf9fd:a1f118a9:bcaa287b:e7ac8c0f auto=md
    ARRAY /dev/md2 level=multipath num-devices=2 UUID=a719f449:1c63e488:b9344127:98a9bcad auto=md
    ARRAY /dev/md3 level=multipath num-devices=2 UUID=37b23a92:f25ffdc2:153713f7:8e5d5e3b auto=md
    ARRAY /dev/md_d0 level=raid5 num-devices=4 UUID=910b1fc9:d545bfd6:e4227893:75d72fd8 auto=part
    [dledford@pe-fc4 ~]$

There are associated changes to mkinitrd that go with this; I'll post the changes here and start a new bugzilla under mkinitrd for them. The mkinitrd changes are reasonable and should be included irrespective of any possible changes to anaconda, since users can create stacked or partitionable devices on their own and the current mkinitrd fails to handle them properly.

---

Created attachment 114347 [details]
mkinitrd patch
This makes the initrd work with stacked and partitioned md devices properly.
The raid autorun facility is deprecated and doesn't handle all possible
situations properly, whereas mdadm does a much better job.
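The proposed ARRAY line format can be sketched as a small shell helper. This is an illustration only, not anaconda's actual code; the function name and the sample uuid are made up for the example.

```shell
#!/bin/sh
# Illustrative helper (not anaconda's code): emit an ARRAY line in the
# proposed format. Arguments: array number, raid level, number of
# devices, uuid, and "mdp" for partitionable devices or "md" otherwise.
make_array_line() {
    num="$1" level="$2" ndev="$3" uuid="$4" auto="$5"
    if [ "$auto" = "mdp" ]; then
        dev="/dev/md_d$num"     # partitionable devices use md_d names
    else
        dev="/dev/md$num"       # regular, non-partitionable devices
    fi
    printf 'ARRAY %s level=%s num-devices=%s uuid=%s auto=%s\n' \
        "$dev" "$level" "$ndev" "$uuid" "$auto"
}

# Example with a made-up uuid:
make_array_line 0 raid1 2 b23f3c6d:aec43a9f:fd65db85:369432df md
```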
I suppose mdadm -A --scan will still start raid devices not listed in mdadm.conf, as well as degraded raid devices, right? If not, this would be a major regression (for degraded raid devices) and an inconvenience (having to update mdadm.conf and rerun mkinitrd for every raid change).

---

I should also note that there is no reason to make device files on the initrd image when using auto= in the mdadm.conf file, as mdadm will make the files as needed.

Also, for partitionable devices (aka auto=mdp) you can append a number to the mdp to have mdadm create a different number of allowed partitions (the default is to allow only 4 partitions). So, for example, auto=mdp15 will create a device with 15 possible partitions (although the extra device files that get created will end up on the /initrd filesystem and not the real /dev filesystem; udev does automatically create any defined partitions that exist on the device on the real /dev filesystem, and running fdisk on the array will trigger recreation of any new devices, assuming the device is available to be revalidated).

The main point to consider here is that when creating the initial /dev/md_d device, the mdp{number} determines the maximum number of partitions that the device will support, and the default is only 4, so if you want support for more partitions than that you need to specify a larger number. The md raid code allocates the needed minor numbers for the array at run time, so if you create the array with the default 4 partitions and then create a partition table with 5 partitions, there simply won't be enough minor numbers to represent all of them.

As to your comments, Alexandre, I'll check to make sure.

---

OK, the current mdadm -A --scan does not assemble devices that are not in the mdadm.conf file. I'll fix that. As for degraded arrays, adding the --run option causes mdadm to attempt to run degraded arrays, so I'll add that to the mkinitrd command line.
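As a concrete illustration of the auto=mdp{number} syntax described above, an ARRAY line requesting room for 15 partitions might look like the following (this fragment reuses the /dev/md_d0 device and uuid from the earlier example; it is a sketch, not a line taken from a real system):

```
ARRAY /dev/md_d0 level=raid5 num-devices=4 uuid=910b1fc9:d545bfd6:e4227893:75d72fd8 auto=mdp15
```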
However, any array that is autodetected but not in the config file is going to get started with default options, so in order to get non-default behavior you will have to update the mdadm.conf file and remake the initrd image. The mdadm -E --scan option is nice for generating a default mdadm.conf file, but it has a bug related to overlapping super-minor numbers (basically, you can have two of any super-minor number, one for partitionable devices and one for non-partitionable devices, and it doesn't deal with that properly; I've submitted this problem to the upstream maintainer and will work on a fix through him). So, the short answer is yes, it has some regressions; I'll get on addressing those right now.

---

Poking around shows that a person can do something silly (as I told the upstream maintainer) like

    mdadm --examine --scan --brief --device=partitions | \
        mdadm --assemble --scan -c - --device=partitions

and get it to assemble all devices found at reboot, whether they are in a config file or not. I find it silly that the examine mode finds all devices while the assemble mode only assembles devices in the mdadm.conf file (especially since the mdadm man page touts the fact that mdadm is designed to be run without a config file). Anyway, in order for this to be reliable, I've got to fix a bug in the --examine mode that causes some entries to have doubled device= entries. Working on that now.

---

The mkinitrd changes seem a bit on the large side for this stage of the game for FC4. Switching to writing out the UUID is a little bit easier, but still not trivial to do :/ Note that, as it stands right now, we don't really support partitioned md devices at all.

---

*** Bug 155864 has been marked as a duplicate of this bug. ***

---

Committed a fix to anaconda CVS. See bug 157680 for the mkinitrd portion of this bug.

---

The code that generates the uuids doesn't always generate them in a format that mdadm recognizes.
If any of the 8-hex-digit chunks would start with a zero, anaconda omits the zero, and then mdadm complains. I had such missing leading zeros in 3 out of 8 arrays on my main workhorse. This is what anaconda generates:

    ARRAY /dev/md9 level=raid1 num-devices=2 uuid=c01a19d6:78fd9bdf:b0b7a5d:104b4f9
    ARRAY /dev/md27 level=raid1 num-devices=2 uuid=cae9b0de:3ec23d99:9f98f76:95fa9f44
    ARRAY /dev/md28 level=raid1 num-devices=2 uuid=d0ad3bbc:fc6e59f9:35f2eaf:bee50450

causing boot-time errors such as:

    mdadm: bad uuid: uuid=c01a19d6:78fd9bdf:b0b7a5d:104b4f9
    mdadm: ARRAY line /dev/md9 has no identity information.
    mdadm: bad uuid: uuid=cae9b0de:3ec23d99:9f98f76:95fa9f44
    mdadm: ARRAY line /dev/md27 has no identity information.
    mdadm: bad uuid: uuid=d0ad3bbc:fc6e59f9:35f2eaf:bee50450
    mdadm: ARRAY line /dev/md28 has no identity information.

Fixing mdadm.conf so as to say:

    ARRAY /dev/md9 level=raid1 num-devices=2 uuid=c01a19d6:78fd9bdf:0b0b7a5d:0104b4f9
    ARRAY /dev/md27 level=raid1 num-devices=2 uuid=cae9b0de:3ec23d99:09f98f76:95fa9f44
    ARRAY /dev/md28 level=raid1 num-devices=2 uuid=d0ad3bbc:fc6e59f9:035f2eaf:bee50450

removes the warnings printed when, e.g., starting mdmonitor.

---

Chris fixed this earlier today.
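The missing-leading-zeros repair described above can be sketched in shell. This is an illustration of the fix, not the actual anaconda or mdadm code; the function name is made up for the example.

```shell
#!/bin/sh
# Illustrative sketch (not anaconda's code): left-pad each colon-separated
# chunk of an md uuid back to 8 hex digits so mdadm will accept it.
fix_uuid() {
    out=""
    old_ifs="$IFS"; IFS=:
    # Unquoted $1 is split on ':' because IFS is set to ':'.
    for chunk in $1; do
        # Pad short chunks with leading zeros up to width 8.
        while [ "${#chunk}" -lt 8 ]; do
            chunk="0$chunk"
        done
        if [ -z "$out" ]; then out="$chunk"; else out="$out:$chunk"; fi
    done
    IFS="$old_ifs"
    printf '%s\n' "$out"
}

# One of the broken uuids from the report:
fix_uuid c01a19d6:78fd9bdf:b0b7a5d:104b4f9
```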