Bug 606481 - raid device created as /dev/md0 is renamed to /dev/md127 after reboot
Summary: raid device created as /dev/md0 is renamed to /dev/md127 after reboot
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 13
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-06-21 18:32 UTC by markm
Modified: 2014-11-17 02:33 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-07-20 19:29:37 UTC
Type: ---
Embargoed:



Description markm 2010-06-21 18:32:47 UTC
Description of problem:

I created a raid5 device, /dev/md0. It works fine, except that every time I start my computer it comes back as /dev/md127 instead of /dev/md0. When I stop and restart the raid device using gnome-disk-utils, it is recreated as /dev/md0 (as expected).

Version-Release number of selected component (if applicable):

mdadm-3.1.2-10.fc13.i686

How reproducible:

always

Steps to Reproduce:
1. create a raid device as /dev/md0
2. restart computer
  
Actual results:

device is listed as /dev/md127

Expected results:

device is listed as /dev/md0

Additional info:

Although the user is an owner of the device, I still need to enter the root password to mount it, which is very annoying!

Comment 1 Doug Ledford 2010-07-20 19:29:37 UTC
mdadm-3.1.2 creates raid arrays using version 1.2 superblocks by default.  Version 1.2 superblocks will not automatically assemble with a numbered device name unless specifically told to do so.  There are two ways to do this:

Pass the --name=<number> option to the mdadm --create command.  This will cause the name to be a simple number, and mdadm will assume that you want the array created with that number as the device name.  That is, 0 will get assembled as /dev/md0, 1 as /dev/md1, etc.
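
For example (a sketch; the member devices are placeholders for whatever disks make up your array):

mdadm --create /dev/md0 --name=0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1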

Create an ARRAY line in the mdadm.conf file that specifies the uuid of the array and the name you wish the array to have, e.g.:

ARRAY /dev/md0 uuid=<uuid>

For version 1.2 superblocks, the preferred way to create arrays is by using a name instead of a number.  For example, if the array is your home partition, then creating the array with the option --name=home will cause the array to be assembled with a random device number (which is what you are seeing now; when an array doesn't have an assigned number, we start at 127 and count backwards), but there will be a symlink in /dev/md/ that points to whatever number was used to assemble the array.  The symlink in /dev/md/ will be whatever is in the name field of the superblock.  So in this example, you would have /dev/md/home pointing to /dev/md127, and the preferred method of use would be to access the device via the /dev/md/home entry.
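
After assembly, the link might look something like this (a sketch; the target number will vary from boot to boot):

ls -l /dev/md/home
lrwxrwxrwx ... /dev/md/home -> ../md127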

A final note: mdadm will also check the homehost entry in the superblock, and if it doesn't match either the system hostname or the HOMEHOST entry in mdadm.conf, the array will get assembled with a numeric suffix, so /dev/md/home might get assembled as /dev/md/home_0.  To turn this behavior off, you either need a HOMEHOST entry in mdadm.conf that matches the homehost portion of the raid superblock, or you need HOMEHOST <any> in the mdadm.conf file.
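
Putting those pieces together, a minimal /etc/mdadm.conf that avoids both the random device number and the _0 suffix might look like this (the uuid is a placeholder):

HOMEHOST <any>
ARRAY /dev/md/home uuid=<uuid>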

Comment 2 naren 2011-10-19 08:07:19 UTC
Thanks for the detailed explanation.

I still can't get this to work with mdadm 3.1.4. As you suggested, I created a new array using this command:

mdadm --create /dev/md0 --name=0 --chunk=256 --level=10 --raid-devices=4 /dev/xvdk /dev/xvdl /dev/xvdm /dev/xvdn

Now when I reboot, it still creates a degraded, read-only array under /dev/md127.

Looking at the superblock of one of the component drives, the name is not just 0, but is prefixed with the hostname and a colon, so the name becomes ip-xx-xx-xx-xx:0.

I tried the second option as well, but since the name was prepended by the host name, the results were again the same.

Any information here will be useful.

Comment 3 Doug Ledford 2011-10-19 14:15:18 UTC
The mdadm command automatically joins the homehost and name into a single string.  What you are seeing is homehost:name, so the name=0 part is working.  That it is getting assembled with a random number means that you probably have a /dev/md/0_0 symlink to /dev/md127, and that means that mdadm is not detecting that your array is "local" to your machine, which is what happens when the homehost portion of the array doesn't match what mdadm expects.  You can either specify the homehost on the original create command:

mdadm -C /dev/md0 --name=0 --homehost=my_machine ...

or you can go into the /etc/mdadm.conf file and set the HOMEHOST option to match what is in the array.  Then it should assemble as you want.

But the best way to ensure it gets assembled as you want is to simply add the array to the mdadm.conf file.  You can do so by creating the array and then doing this:

mdadm -Db /dev/md0 >> /etc/mdadm.conf

Comment 4 naren 2011-10-19 17:29:30 UTC
Unfortunately that doesn't work. I tried that before, and I tried it again.

Here are the detailed steps

create a raid10 array

1) mdadm --create /dev/md0 --name=0 --chunk=256 --level=10 --raid-devices=4 /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1

As suggested earlier, the name is ip-xx-xx-xx-xx:0

2) Next, add an entry to the /etc/mdadm/mdadm.conf file, like this:
mdadm -Db /dev/md0 >> /etc/mdadm/mdadm.conf

Now, my mdadm.conf has this entry
ARRAY /dev/md0 metadata=1.2 name=ip-10-xx-xx-xxx:0 UUID=571e2c88:ce2d6e40:c999bf78:ffb1f725

reboot

Look at /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md127 : active (auto-read-only) raid10 xvdm1[2] xvdn1[3] xvdk1[0] xvdl1[1]
      20962304 blocks super 1.2 256K chunks 2 near-copies [4/4] [UUUU]
      	resync=PENDING


Not sure what I am missing.

Comment 5 naren 2011-10-19 17:43:44 UTC
Also, just to further add, if I execute the following two commands after reboot, things are back to normal. However, this is not the preferred approach.

1) mdadm --stop /dev/md127
2) mdadm --assemble --scan

I am on Ubuntu 11.04 natty, and mdadm 3.1.4

Comment 6 Doug Ledford 2011-10-19 18:07:08 UTC
I'm not sure I can help you with Ubuntu.  The early stages of raid bring-up differ between Ubuntu and Fedora.  I would guess that Ubuntu is starting your array using udev, but doing so from the initramfs, and that you don't have a copy of your mdadm.conf file in the initramfs for mdadm to use to determine your array name.  But the better question is: why not just create the array using a name?  Something like --name=home will cause mdadm to create the array as /dev/md/home and then you never have to worry about the number.

Comment 7 naren 2011-10-19 18:38:21 UTC
>> But, the better question is why not just create the array
using a name?  Something like --name=home will cause mdadm to create the array
as /dev/md/home and then you never have to worry about the number.

It is exactly the same irrespective of whether the name is a string or a number. Now I changed the create statement to 

mdadm --create /dev/md/home --name=home --chunk=256 --level=10 --raid-devices=4 /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1

This worked fine.

After reboot, there is a degraded read-only array under md127. There is a link /dev/md/ip-10-171-19-226\:home that points to /dev/md127, but neither the link name nor the state of the array is correct.

The name is always prepended with the homehost string.

Comment 8 naren 2011-10-19 21:03:51 UTC
Ah, I finally got it working.

Once the new mdadm.conf was ready, I had to execute "sudo update-initramfs -u".  This updated the copy of the conf file used by the initramfs.
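
For completeness, the full sequence that fixed it for me was roughly this (Ubuntu paths, array from comment 7):

sudo sh -c 'mdadm -Db /dev/md/home >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u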

Thanks for all the help.

Comment 9 Grant McWilliams 2012-01-30 02:30:39 UTC
So you're saying that the need to recreate your ramdisk every time you create a new RAID is NOT a bug?

If you do 

mdadm --create /dev/md0 --name=0 -l 0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sde1

You end up with 

/dev/md0
Nothing in /dev/md/*
No output from mdadm -As

if you then do mdadm --stop /dev/md0 ; mdadm -As you get

/dev/md0 
/dev/md/0

if you then reboot you get 
 
/dev/md127
/dev/md/192.168.0.10:0

if you then do mdadm --stop /dev/md0 ; mdadm -As you get

/dev/md0
/dev/md/0

If you create an /etc/mdadm.conf from the output of mdadm -Ds /dev/md0 it makes no difference in the outcome after reboot.

I have not tried recreating the ramdisk using dracut and specifying mdadmconf="{yes|no}" in dracut.conf, because I find it preposterous that we should have to create a new ramdisk and reboot our servers each time we create a RAID.

Are you really sure this shouldn't be considered a bug?

Comment 11 Doug Ledford 2012-01-30 14:42:18 UTC
Yes.  This is working as designed and intended.  (That is not to say I necessarily agree 100% with the design and intent, but that's upstream's call; this is working the way upstream wants it to.  In particular, upstream *wants* you to have to think about the homehost and your mdadm.conf file in order to make an array appear without the homehost prepended in the /dev/md/ link, specifically to avoid the case where a raid array is moved from one host to another, accidentally comes up as a local array, and you end up with some sort of data corruption.)

Comment 12 Grant McWilliams 2012-01-30 15:18:54 UTC
If it's the way upstream wants it, then upstream needs to make it consistent. I wouldn't even have a problem with the way they're naming the arrays, but it has to be the same every single time. It's a gross violation of core Linux principles to require a reboot on a server to configure a hard drive!

As far as it being a bug or not, it seems that dracut can be configured to ignore the mdadm.conf in the ramdisk. I'd guess this would force it to go look at the one at /etc/mdadm.conf? If so, then that's the bug in RHEL. If the mdadm tool created the proper links in /dev/md (it doesn't) and the ramdisk referenced /etc/mdadm.conf (it doesn't), then we'd have a workable solution, because then we'd create our arrays and the link in /dev/md would be consistent from the time we created the array to the time we reboot.

It seems the only time you'd want the mdadm.conf in your ramdisk is when you want to boot from the array, which is probably not the majority of array configurations. My other question (which I'll find out soon enough) is: if you update your kernel, will the ramdisk be created with your updated mdadm.conf already, or will you have to manually create the ramdisk yourself during every kernel update?

Comment 13 Grant McWilliams 2012-01-30 15:33:06 UTC
Doug, over breakfast I pondered your answer. 

Let's say I take an array from one host and plug it into the other. An mdadm -As will NOT give me /dev/md/hostname:arrayname, thus I could have corruption, so it doesn't accomplish anything. The only way, from my testing, to get /dev/md/hostname:arrayname is to reboot the server, which is not acceptable.

Maybe it's not a bug - it's a design flaw. Again, if upstream wants to make mdadm -As provide the exact same thing as rebooting, then we're on to something, but requiring a reboot to AVOID data corruption is bad and counterintuitive.

So in summary: 
1. mdadm -As should give the same result as rebooting
2. If an admin wants to boot from the array there needs to be an easy way to include the mdadm.conf in the ramdisk. For all other arrays it should not be required or necessary so an mdadm.conf in the ramdisk shouldn't be the default.

Comment 14 Doug Ledford 2012-01-30 17:09:11 UTC
Grant, I'm sorry things aren't making sense to you.  You have a number of mistaken assumptions about how things work that are contributing towards a good deal of confusion on the various issues.  So, allow me to fill in some of the blanks and correct a few things that are mistaken.

First, automatic assembly is done via mdadm -I, not mdadm -A.  Those two modes are different and are not necessarily expected to work exactly the same.  This is because one mode we know is being driven by a human who presumably is doing what they should be doing, while the other is being run by a script, so a certain amount of care must be taken to make sure that an automated action does not do the wrong thing.

Regardless of that distinction though, that isn't why this happens:

"if you then do mdadm --stop /dev/md0 ; mdadm -As you get

/dev/md0 
/dev/md/0

if you then reboot you get 

/dev/md127
/dev/md/192.168.0.10:0

if you then do mdadm --stop /dev/md0 ; mdadm -As you get

/dev/md0
/dev/md/0"

The above happens because when you are running mdadm from the command line, the machine is already up and running and the hostname of the machine has been set.  That hostname was used during the initial creation of the md raid array (no homehost option was passed to the create command, so the default homehost of `hostname -s` was used by mdadm, but the user has the option to set homehost to whatever they want).  However, during early boot, when the initramfs is running, the hostname has not been set yet.  As a result, the homehost in the superblock and the hostname of the machine don't match, mdadm thinks the array is from another machine, so it is assembled with the hostname prepended to the array name and using a high md device number.  The proper fix for this is to do one of two things:

1) If your array is needed during early boot (like it is a root array), then you make sure that the homehost you used to create the array is listed in the mdadm.conf file and you make sure that the mdadm.conf file is included in the initramfs by dracut.
2) If your array is not needed during early boot, then the easiest thing to do is to simply tell dracut not to assemble it (the exact option varies depending on the dracut version, but rd_NO_MD on older versions or rd.md=0 on newer ones on the kernel boot command line should do the trick; a sketch of both options follows).  This will delay raid device initialization until the machine is up and running, at which point /etc/mdadm.conf will be readable (so having the right homehost entry in there will work) and the machine's hostname will be set, in case you didn't specify the homehost option at all and are just letting it be whatever it turns out to be.
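
A rough sketch of both options (exact option names and paths vary by dracut version, so treat this as illustrative):

dracut -f    (option 1: rebuild the initramfs so it includes the current /etc/mdadm.conf)
rd_NO_MD or rd.md=0 on the kernel command line    (option 2: keep the initramfs from assembling md arrays)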

So, you see, the problem you outlined above isn't mdadm being inconsistent; it's that your raid subsystem was only partially configured.  Part of fully configuring the raid subsystem is making sure that the homehost used on the arrays on a given host is listed in the mdadm.conf file, and that the mdadm.conf file is included in the initramfs if the initramfs is going to assemble the arrays.  (Actually, you can list the homehost, or you can list the arrays themselves on ARRAY lines: if an array is listed on an ARRAY line in the mdadm.conf file, then it is considered a local array whether the homehost matches or not.)  Either way, it still involves making sure the initramfs is properly configured if it is going to start arrays for you.

"It's a gross violation of the core Linux principals to require a reboot on a server to configure a hard drive!"

No reboot is required.  If a person fully configures their raid device/subsystem then it will behave the same either from the command line or from a reboot.

"As far as it being a bug or not it seems that dracut can be configured to
ignore the mdadm.conf in the ramdisk. I'd guess this would force it to go look
at the one at /etc/mdadm.conf?"

No, when dracut is running, the /etc/mdadm.conf file does not exist.  Dracut is used to make / mountable, and by the time it's mounted and /etc/mdadm.conf could be found, dracut is already done and has exited.

"If so then that's the bug in RHEL."

It's not so, and it's not a bug.

"If the mdadm tool created the proper links in /dev/md (it doesn't)"

It does.

"and the ramdisk referenced /etc/mdadm.conf (it doesn't) then we'd have a workable solution because then we'd create our Arrays and the link in /dev/md would be consistent from the time we created the array to the time we reboot."

You have to *put* the /etc/mdadm.conf on the initramfs if you want it available during early boot; it *can't* reference the one in /etc/.  Fortunately, this is done automatically by dracut whenever it builds the initramfs for a given kernel, as long as dracut detects that you need to start md raid arrays.

"It seems the only time you'd want the mdadm.conf in your ramdisk is when you
want to boot from the array which is probably not the majority of array
configurations."

Correct, although this is actually the majority of raid usage.  Most people that want raid arrays, want some sort of raid array for their root filesystem so that their machine can survive a hard drive failure.  But in that case, you install the system onto the raid array and anaconda takes care of creating the array, installing on it, then setting up the mdadm.conf file and making sure it ends up on the initramfs that dracut builds.

"My question would be too (which I'll find out soon enough) is
that if you update your kernel will the ramdisk be created with your updated
mdadm.conf already or will you have to manually create the ramdisk yourself
during every kernel update?"

Dracut will automatically pick up the update on any future initramfs builds.

"Doug, over breakfast I pondered your answer. 

Let's say I take an array from one host and plug it into the other. An mdadm
-As will NOT give me /dev/md/hostname:arrayname thus I could have corruption so
it doesn't accomplish anything."

This is incorrect.  If you move an array from one host to another, then generally speaking it won't have an ARRAY line in mdadm.conf, and without an ARRAY line in mdadm.conf, mdadm -As won't assemble it at all.  With an ARRAY line in mdadm.conf, mdadm -As will assemble it, and it won't use the hostname, but that's because the array is listed in mdadm.conf and can be assumed to be local.  So you won't get accidental corruption; you would have to first add the array to mdadm.conf, and presumably you wouldn't add it with the same name as another array, so it will get assembled with the proper name you gave it in mdadm.conf.  That's using mdadm -As anyway; if the array gets autoassembled because you hot plugged it into a running server, then mdadm -I is used on the devices, not mdadm -As, and it will indeed assemble it, with the hostname prepended, so it will do the right thing.

"The only way from my testing to get /dev/md/hostname:arrayname is to reboot the server which is not acceptable."

This is merely because you aren't accounting for the difference between the hostname being set while running and not set during the initramfs, nor for the difference between mdadm -As and mdadm -I.  In order to simulate the automatic assembly process from the command line, you simply use incremental assembly instead of -As.  So, for example, let's say you want to run a test (which I just did to verify all of this before I wrote it out).  You can do this:

cd /tmp
dd if=/dev/zero bs=1024k count=100 of=block0
dd if=/dev/zero bs=1024k count=100 of=block1
losetup /dev/loop0 block0
losetup /dev/loop1 block1
mdadm -C /dev/md/test --name=test --homehost=foreign -l1 -n2 /dev/loop[01]
mdadm -S /dev/md/test
mdadm -As (nothing will happen here as test isn't listed in mdadm.conf)
mdadm -I /dev/loop0 (this will start assembling test as /dev/md/foreign:test)
mdadm -I /dev/loop1 (this will finish assembling and start /dev/md/foreign:test)
mdadm -Db /dev/md/foreign:test >> /etc/mdadm.conf
vi /etc/mdadm.conf (change the ARRAY line we just added to list /dev/md/test as the array name)
mdadm -S /dev/md/foreign:test
mdadm -As (now it will assemble the array without the homehost entry because it's listed in mdadm.conf without it)
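
And to tear the test back down afterwards, something along these lines (remember to also remove the test ARRAY line from /etc/mdadm.conf):

mdadm -S /dev/md/test
losetup -d /dev/loop0
losetup -d /dev/loop1
rm /tmp/block0 /tmp/block1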

"Maybe it's not a bug - it's a design flaw. Again if the upstream wants to make
mdadm -As provide the exact same thing as rebooting then we're on to something
but requiring a reboot to AVOID data corruption is a bad and counter intuitive."

As I've already pointed out, much of what you are upset about here is confusion over how a few things work, not bugs/flaws in mdadm.

"So in summary: 
1. mdadm -As should give the same result as rebooting"

It does on a properly configured system.

"2. If an admin wants to boot from the array there needs to be an easy way to
include the mdadm.conf in the ramdisk. For all other arrays it should not be
required or necessary so an mdadm.conf in the ramdisk shouldn't be the default."

If you install the system onto an array, this is all done for you.  If you want to upgrade from a non-raid install to a raid install, that is a tedious manual procedure that isn't even documented because it isn't recommended, nor supported, but it can be done if you really know what you are doing.

Comment 15 Grant McWilliams 2012-01-30 18:15:01 UTC
Thanks for your really long answer. I didn't mean for you to spend this much time on this. 

"cd /tmp
dd if=/dev/zero bs=1024k count=100 of=block0
dd if=/dev/zero bs=1024k count=100 of=block1
losetup /dev/loop0 block0
losetup /dev/loop1 block1
mdadm -C /dev/md/test --name=test --homehost=foreign -l1 -n2 /dev/loop[01]
mdadm -S /dev/md/test
mdadm -As (nothing will happen here as test isn't listed in mdadm.conf)"

I used a couple of XCP disks and duplicated this same thing but got different results. An mdadm -As indeed does start the array even though I don't have an mdadm.conf. A node appears as /dev/md127 as expected, along with a link at /dev/md/foreign:test. Are you saying this isn't supposed to be happening?


[root@987654321 md]# mdadm -C /dev/md/test --name=test --homehost=foreign -l1 -n2 /dev/xvdb1 /dev/xvdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/test started.
[root@987654321 md]# mdadm -S /dev/md/test
mdadm: stopped /dev/md/test
[root@987654321 md]# mdadm -As
mdadm: /dev/md/foreign:test has been started with 2 drives.

"vi /etc/mdadm.conf (change the ARRAY line we just added to list /dev/md/test as
the array name)
mdadm -S /dev/md/foreign:test
mdadm -As"

This works, and mdadm uses the mdadm.conf to assemble the raid as /dev/md/test.

A reboot has the RAID coming up as /dev/md/test as well. 

Deleting the /etc/mdadm.conf and rebooting has the raid coming back up as /dev/md/foreign:test again.


So, in summary, it seems (again, I may be wrong):

1. mdadm -As will assemble a raid as /dev/md/name if --name was specified when the array was created

2. mdadm -As will assemble it as /dev/md/homehost:name if --homehost and --name were specified when the array was created

3. If you reboot without an mdadm.conf, the raid will get assembled with data from the RAID and will be named /dev/md/hostname:name if --name was specified when the array was created

4. If you reboot without an mdadm.conf, the raid will get assembled with data from the RAID and will be named /dev/md/homehost:name if --name and --homehost were specified when the array was created

5. mdadm -As will assemble the raid using whatever homehost:name is in the mdadm.conf, if present

6. If you reboot with an mdadm.conf, the raid will get assembled according to the homehost:name in mdadm.conf

7. If you want to boot your OS from the RAID, you need to have the mdadm.conf in the ramdisk.


I've tried each one of these to verify them except for number 7. Do you agree with these conditions? If so, my final conclusions would be

1. You can run without an mdadm.conf like we used to, but you have to specify --name and --homehost when creating the array; otherwise the system will insert the hostname into the raid device path.
2. Always refer to the raid device as /dev/md/homehost:name or /dev/md/name depending on how you created it.
3. There's no way to reference /dev/md0 so we should probably just get over it.
4. You can override the data built into the RAID in the mdadm.conf.

Comment 16 Doug Ledford 2012-01-30 19:32:41 UTC
(In reply to comment #15)
> Thanks for your really long answer. I didn't mean for you to spend this much
> time on this. 
> 
> "cd /tmp
> dd if=/dev/zero bs=1024k count=100 of=block0
> dd if=/dev/zero bs=1024k count=100 of=block1
> losetup /dev/loop0 block0
> losetup /dev/loop1 block1
> mdadm -C /dev/md/test --name=test --homehost=foreign -l1 -n2 /dev/loop[01]
> mdadm -S /dev/md/test
> mdadm -As (nothing will happen here as test isn't listed in mdadm.conf)"
> 
> I used a couple of XCP disks and duplicated this same thing but got different
> results. An mdadm -As indeed does start the array even though I don't have an
> mdadm.conf. A node appears as /dev/md127 as expected and also a link in
> /dev/md/foreign:test. Are you saying this isn't supposed to be happening?

In the absence of an mdadm.conf file, mdadm -As will assemble any array it finds; however, when mdadm.conf is present, it will only assemble arrays listed on ARRAY lines in the mdadm.conf file.  My test was done with an mdadm.conf file in place, and I neglected to mention the difference in behavior when there is no file.

> 
> [root@987654321 md]# mdadm -C /dev/md/test --name=test --homehost=foreign -l1
> -n2 /dev/xvdb1 /dev/xvdc1
> mdadm: Note: this array has metadata at the start and
>     may not be suitable as a boot device.  If you plan to
>     store '/boot' on this device please ensure that
>     your boot-loader understands md/v1.x metadata, or use
>     --metadata=0.90
> Continue creating array? yes
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md/test started.
> [root@987654321 md]# mdadm -S /dev/md/test
> mdadm: stopped /dev/md/test
> [root@987654321 md]# mdadm -As
> mdadm: /dev/md/foreign:test has been started with 2 drives.
> 
> "vi /etc/mdadm.conf (change the ARRAY line we just added to list /dev/md/test
> as
> the array name)
> mdadm -S /dev/md/foreign:test
> mdadm -As"
> 
> This works, and mdadm uses the mdadm.conf to assemble the raid as
> /dev/md/test. 
> 
> A reboot has the RAID coming up as /dev/md/test as well. 
> 
> Deleting the /etc/mdadm.conf and rebooting has the raid coming back up as
> /dev/md/foreign:test again.
> 
> 
> So it seems in summary (again I may be wrong) 
> 
> 1. mdadm -As will assemble a raid as /dev/md/name if --name was specified when
> the array was created 

Whether or not you pass --name to mdadm, mdadm will try to detect what name is appropriate based upon the device you create.  So you need to forget about trying to categorize behavior based upon the presence or absence of --name; it's irrelevant.  For example, if you did this:

mdadm -C /dev/md5 -l1 -n2 /dev/loop[01]
mdadm -Db /dev/md5

you would see in the output that mdadm filled in both the homehost and the name for you.  In my case, it was firewall:5.  The name of the machine I ran the test on is firewall, and because I did a numbered array it used the special number form (which would result in the array coming back as /dev/md5 on reassembly).
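
The -Db output in that case is a ready-made ARRAY line along these lines (uuid elided, format as in comment 4):

ARRAY /dev/md5 metadata=1.2 name=firewall:5 UUID=<uuid>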

It's not really important whether or not you specify --name or --homehost at creation time; these are the things that affect behavior on assembly:

1) Is there an mdadm.conf file?
2) Is the assembly being done via -As or via -I?
3) What is your hostname *AT THE TIME OF ASSEMBLY*?  (This is why something that works from a command line may not behave as you expect when done during bootup: we don't know our hostname during the early stages of bootup, so depending on whether dracut or some other initramfs loader is starting your arrays, this *must* be taken into account.)  And does it match the homehost field, or does the mdadm.conf file override either the hostname or the matching of homehost?

These are the real issues you have to deal with in order to control how your arrays are assembled.  If there is no mdadm.conf file, then mdadm has to make guesses about things, and it has to use what it considers reasonable default values for things like the homehost field.  This is why it's always preferable to use an mdadm.conf file.

> I've tried each one of these to verify them except for number 6. Do you agree
> with these conditions? If so my final conclusions would be 

Most of the conditionals you listed are irrelevant.  Like I mention above, the set of items you need to pay attention to is not what you are actually paying attention to.

> 1. You can run without an mdadm.conf like we used to but you have to specify
> --name and --homehost when creating the array otherwise the system will insert
> the hostname into the raid device path.

You can run without an mdadm.conf, but if you want the device to be started without the hostname being included in the /dev/md/ device name, then you need to know when the device will be created (by dracut or later during system startup) and you need to make the homehost field in the array match whatever it will be at the time the array is started.

> 2. Always refer to the raid device as /dev/md/homehost:name or /dev/md/name
> depending on how you created it.

If you have set things up properly, you should always be referring to the raid device as /dev/md/name when it is being accessed from the machine it belongs to, and /dev/md/homehost:name if you bring it up on another machine.

> 3. There's no way to reference /dev/md0 so we should probably just get over it.

No, this works just fine, but it is not recommended.  You simply have to control the homehost settings first.  For instance, in my example above I created /dev/md5.  If the array looks like it does not belong to this host, then mdadm will assemble it as /dev/md/homehost:5 but it will not allocate /dev/md5 to the array, it will get some high number.  Mdadm will only honor the /dev/md5 number request that the name 5 implies if the array appears to belong to this machine.  If you want this to work without an mdadm.conf file, then you have to know *when* the array will be assembled, make sure your homehost entry matches whatever mdadm is going to see at that time, and then it will create /dev/md5 for you.

> 4. You can override the data built into the RAID in the mdadm.conf.

If you put an array line in the mdadm.conf file, then whatever device you specify for that array will be created.  So, if you have:

ARRAY /dev/md5 uuid=<blah>

then regardless of the homehost:name fields of the superblock, the array that matches the uuid given will get the /dev/md5 device.
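
If you need the uuid for such a line, mdadm will print it for a running array, e.g. (a sketch; point it at whatever device your array is currently running as):

mdadm -D /dev/md5 | grep UUID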

Comment 17 Grant McWilliams 2012-01-30 23:59:39 UTC
>So, you need to forget about
>trying to categorize behavior based upon the presence or absence of --name,
>it's irrelevant.  For example, if you did this:

>mdadm -C /dev/md5 -l1 -n2 /dev/loop[01]
>mdadm -Db /dev/md5

>you would see in the output that mdadm filled in both the homehost and the name
>for you.  In my case, it was firewall:5.  The name of the machine I ran the
>test on is firewall, and because I did a numbered array it used the special
>number form (which would result in the array coming back as /dev/md5 on
>reassembly).

>It's not really important whether or not you specify --name or --homehost at
>creation time, these are the things that effect behavior on assembly:

You are correct if you're only creating raids from the cli. As soon as you reboot, everything you said above is wrong. See condition 1.

>1) Is there an mdadm.conf file?

Again, see conditions 5 and 6.

> 2) Is the assembly being done via -As or via -I?

This does not seem relevant. The behavior and the result are exactly the same either way. The ONLY difference is that -I will assemble one raid and -As will assemble all that it finds.

>3) What is your hostname *AT THE TIME OF ASSEMBLY* (this is why something that
>works from a command line may not behave as you expect when done during bootup
>as we don't know our hostname during the early stages of bootup, so depending
>on whether or not dracut or some other initramfs loader is starting your
>arrays, this *must* be taking into account) and does it match the homehost
>field, or does the mdadm.conf file override either the hostname or the matching
>of homehost?

It will assemble it exactly the same every time if you set --homehost and --name on the cli. See conditions 2 and 4. Only if you do not specify --homehost does the homehost get replaced by the hostname on assemble when you boot. See condition 3.

>These are the real issues you have to deal with in order to control how your
>arrays are assembled.  If there is no mdadm.conf file, then mdadm has to make
>guesses about things, and it has to use what it considers reasonable default
>values for things like the homehost field.  This is why it's always preferable
>to use an mdadm.conf file.

If you specify --homehost and --name, mdadm doesn't have to guess anything. See conditions 2 and 4. Leaving either empty leaves room for the system to move the path on you. I agree that using an mdadm.conf has its benefits - see conditions 5 and 6. 

The rest of your responses I won't address as they're covered by the conditionals. However, this is interesting.


>> 3. There's no way to reference /dev/md0 so we should probably just get over it.

>No, this works just fine, but it is not recommended. You simply have to
>control the homehost settings first.  For instance, in my example above I
>created /dev/md5.  If the array looks like it does not belong to this host,
>then mdadm will assemble it as /dev/md/homehost:5 but it will not allocate
>/dev/md5 to the array, it will get some high number.  

>Mdadm will only honor the
>/dev/md5 number request that the name 5 implies if the array appears to belong
>to this machine. 

True, but again only on the cli. Reboot and watch the /dev/md5 move to md127. Set --homehost and --name, then reference it by /dev/md/... and it works the same every time.

>If you want this to work without an mdadm.conf file, then you
>have to know *when* the array will be assembled, make sure your homehost entry
>matches whatever mdadm is going to see at that time, and then it will create
>/dev/md5 for you.

This doesn't work during a reboot. I'm not sure what the system would think my homehost is before it boots, since it doesn't have an ip address or hostname yet. Maybe there's some magical name to put in to get it to do this, but in all cases I've tried, it still comes up as /dev/md127. 

An assemble on the command line using mdadm -As or -I WILL give me a /dev/md5, but this is irrelevant, since the last thing I want is a RAID that moves when I reboot. Setting both --homehost and --name OR using an mdadm.conf solves this, as long as you refer to it as /dev/md/...


Thanks Doug for your time. I've created about 100 RAIDs in the last 24 hrs and rebooted a server 50 times, and even though you think I'm focusing on the wrong stuff in the conditions, they actually DO work. So far your advice has been mostly correct on the command line and mostly incorrect in reference to a system during bootup, which is understandable since it doesn't look like any of your verification included behavior at boot time. I hope other people who are having the same issues can dig through all of this and find a solution. 

Thanks, that's all I needed. I can get it to do what I want now. I've changed my opinion on this being a bug: it's not, it's just that the behavior has changed from how it used to work. Take note of the conditions I wrote, and if in doubt, set --homehost and --name when creating the RAID, or set them in mdadm.conf.

Comment 18 Doug Ledford 2012-01-31 02:13:21 UTC
(In reply to comment #17)
> >So, you need to forget about
> >trying to categorize behavior based upon the presence or absence of --name,
> >it's irrelevant.  For example, if you did this:
> 
> >mdadm -C /dev/md5 -l1 -n2 /dev/loop[01]
> >mdadm -Db /dev/md5
> 
> >you would see in the output that mdadm filled in both the homehost and the name
> >for you.  In my case, it was firewall:5.  The name of the machine I ran the
> >test on is firewall, and because I did a numbered array it used the special
> >number form (which would result in the array coming back as /dev/md5 on
> >reassembly).
> 
> >It's not really important whether or not you specify --name or --homehost at
> >creation time, these are the things that effect behavior on assembly:
> 
> You are correct if you're only creating raids from the cli. As soon as you
> reboot everything you said above is wrong. See condition 1. 

No.  Everything I said is wrong once you reboot *in your experiments*.  But that's not because what I'm saying is wrong, it's because in your experiments, you are not controlling the homehost properly.

Condition #1 and Condition #2 are both wrong, and they are wrong for the very same reason.  You are drawing the wrong conclusion from your experimental data.  You are focusing on a symptom and not the root cause of the problem.

In #1 you assumed that passing a --name and not --homehost caused future assembly to reliably create /dev/md/<name>.  This is not true.  As an experiment, create an array, stop it, change the machine's hostname, then mdadm -As and see what happens.  The fact that your experiments showed what they did is because when you don't specify a homehost, mdadm automatically picks one based on the hostname.  If you then assemble that array on the same machine, the hostname is still the same, so the name mdadm picks is still the same, so mdadm's internal "this is what our homehost arrays should be named" idea and the homehost in the superblock match, causing mdadm to create /dev/md/name.  Once you change the hostname though, that's no longer true, and that array will get assembled as /dev/md/old_hostname:name.  The real reason #1 worked the way you thought it did is *because* you weren't setting the homehost, so you didn't have a chance to set it to something other than what mdadm expects it to be.

Which leads to why condition #2 was working the way it was for you.  You weren't setting the homehost to the correct value.  It's not sufficient to simply put any old homehost in the superblock; it has to be the *right* homehost if you want the array assembled as /dev/md/<name>.  During condition #1, mdadm was picking the homehost out for you, and it never screwed up.  During this round, you're picking it, and you aren't picking it right, at least not as far as mdadm is concerned.  If you want to know what homehost you need in order for mdadm to do the right thing, go back to an array you created under condition #1 and do an mdadm -D of that array.  The homehost:name entry in that array will tell you what homehost mdadm is picking out for your machine.  Now if you create an array using that homehost value, then #2 will in fact act like #1.
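
For example (the device path and names shown are hypothetical):

mdadm -D /dev/md/<name> | grep Name
           Name : myhost:<name>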

However, this is all flaky stuff, brought on by the fact that you are trying to run without any mdadm.conf file whatsoever.  This is bad.  If nothing else, I would strongly suggest you create an mdadm.conf file with at least these three lines:

DEVICE partitions
HOMEHOST <whatever name you want>
AUTO +1.x +imsm -all

This will cause mdadm to search all the entries in /proc/partitions as valid raid devices, set the homehost to a specific value, and tell mdadm -I assembly to automatically process all version 1.x md raid arrays and all Intel IMSM raid arrays, but no other raid array types.  The HOMEHOST value need not be related to the machine's actual hostname; you could put scoobydoo in here and that's just fine.  This setting is used by mdadm when assembling arrays (any array with a homehost field that matches it will be considered local), and it is also used when creating arrays, since it overrides the automatic hostname lookup that mdadm does: whenever you don't specify homehost as an option to the create command, this value will be used.

Now, on to the reboot issue.  All this stuff works on the command line, and the failure on reboot goes back to what I said before: failure to control the homehost environment will force your arrays to not be assembled properly.  During the very early stages of bootup, the system does not yet have its final hostname or IP address, so the automatic lookup of homehost does not match what it was when you created the array.  Had you used a homehost entry in your mdadm.conf file and put that mdadm.conf file on your initramfs, this issue would not exist.  So your conditions are all just symptoms of this, the *real* problem.  You must control the homehost properly, that means either A) you put an mdadm.conf file on the initramfs that specifies the homehost or B) you tell the initramfs not to start any raid arrays and instead have them started after the machine's hostname has been set.  Either option will solve the problems you've been attributing to reboots.

And conditions #3 and #4 are actually the same thing and can be condensed down into one.  The fact that in #3 you get the hostname is because that's what mdadm automatically grabbed for the homehost when you didn't specify it, so the distinction here is irrelevant; in both cases the assembly proceeds with homehost:name, and the only difference is who picked out the particular homehost, you or mdadm.  And if you had an mdadm.conf file with the line HOMEHOST scoobydoo in it, and you created an array without specifying anything like in #3, and then rebooted without an mdadm.conf, you would get scoobydoo:name as the array.

Like I've been harping on, it's all about controlling the homehost entry, not just at creation, but at assembly time.  You can use the command line to set the homehost at creation (and you can use --homehost='' to completely remove it), or you can use an entry in mdadm.conf to control it at creation time, or you can let mdadm pick out its own default based upon your hostname at creation time.  However, that's only half the story.  In order for assembly to happen properly, the homehost that got stored in the superblock must match the one that mdadm expects to find during reassembly.  From the command line, the homehost will be either A) supplied via --homehost (and yes, this means that if you want to see for yourself that your condition #2 is wrong, create an array using --homehost=scoobydoo, then when you attempt to assemble the array from the command line use mdadm -As --homehost=scoobydoo, and all of a sudden your #2 will behave like #1), or B) supplied as a HOMEHOST entry in mdadm.conf, or C) automatically looked up using hostname.  Normally, this is supplied by putting HOMEHOST in the mdadm.conf file and then putting the mdadm.conf file on the initramfs.  We do this because during early bootup, the automated fallback of hostname lookup doesn't work.  I'm not sure what we end up with then; it could be (null), '', localhost.localdomain, or several other options.  A person would need to boot dracut into a debug shell, use mdadm to create an array without specifying a homehost, then examine the new array and see what mdadm stuck in the homehost field.  Once you get that value, you could intentionally create your arrays using that homehost value, and they would then assemble properly from the initramfs even without an mdadm.conf file.  But this is precisely why you should have an mdadm.conf file on your initramfs, or else tell the initramfs not to assemble any raid arrays.
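
Condensed into commands, that scoobydoo experiment looks roughly like this (assuming no mdadm.conf, as in your tests, and the loop devices from my earlier example):

mdadm -C /dev/md/test --name=test --homehost=scoobydoo -l1 -n2 /dev/loop[01]
mdadm -S /dev/md/test
mdadm -As (assembles as /dev/md/scoobydoo:test; the homehost doesn't match)
mdadm -S /dev/md/scoobydoo:test
mdadm -As --homehost=scoobydoo (now the homehost matches, so it assembles as /dev/md/test)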

> >1) Is there an mdadm.conf file?
> 
> Again see conditions 4 and 5

You're missing my point.  If there is no mdadm.conf file, then your homehost entries must match whatever the hostname of the machine is, and your arrays cannot be assembled by the initramfs.  If there is an mdadm.conf file, then the homehost setting should be present in the file, the file should be on the initramfs, and the homehost on any arrays you create should match what's in the file.

> > 2) Is the assembly being done via -As or via -I?
> 
> This does not seem relevant. The behavior and the result is exactly the same
> either way. The ONLY difference is -I will assemble one raid and -As will
> assemble all that it finds.

The behavior of these changes based upon the answer to #1 above.  If there is an mdadm.conf file, it affects how these two proceed (for example, mdadm -As will start any array it finds when there is no mdadm.conf, but will only start arrays listed in mdadm.conf if it is present...and the presence of mdadm.conf means that there might be an AUTO line in the mdadm.conf file and that line controls the behavior of mdadm -I, causing it to ignore certain types of arrays for example).  So, it is relevant which method you are talking about as the presence of an mdadm.conf file makes them behave differently.

> >3) What is your hostname *AT THE TIME OF ASSEMBLY* (this is why something that
> >works from a command line may not behave as you expect when done during bootup
> >as we don't know our hostname during the early stages of bootup, so depending
> >on whether or not dracut or some other initramfs loader is starting your
> >arrays, this *must* be taking into account) and does it match the homehost
> >field, or does the mdadm.conf file override either the hostname or the matching
> >of homehost?
> 
> It will assemble it exactly the same every time if you set --homehost and
> --name on the cli. See conditions 2 and 4. Only if you do not specify
> --homehost does the homehost get replaced by the hostname on assemble when you
> boot. See condition 3.

No, it doesn't.  See my previous responses to why conclusions 1 through 4 are mistaken.  Regardless though, my point here is that if you don't have an mdadm.conf file, then the hostname of the machine at the time of assembly is going to be what gets used by mdadm to determine if an array is local.

Remember that this is basically a matching game.  Whatever we set in the homehost field on array creation gets stored in the array's superblock.  When we then assemble that array, mdadm tries to determine what the homehost for this machine is (not for this array, for this machine) and it then checks to see if that matches whatever it finds in the array superblock.  If they don't, it creates the array as homehost:name instead of just name.  Your mistaken conclusions in #1 through #4 were because you were controlling only the homehost used during creation, not the homehost that mdadm uses during assembly to match against.

> >These are the real issues you have to deal with in order to control how your
> >arrays are assembled.  If there is no mdadm.conf file, then mdadm has to make
> >guesses about things, and it has to use what it considers reasonable default
> >values for things like the homehost field.  This is why it's always preferable
> >to use an mdadm.conf file.
> 
> If you specify --homehost and --name mdadm doesn't have to guess anything.

Specifying homehost does not mean mdadm is no longer guessing.  It means you were giving a bad homehost that *never* matched mdadm's guess about what the system's homehost should be.  And it also means that not a single array you created this way was ever considered local, they were always considered foreign, and that's why you couldn't create an array with the name 5 and expect it to actually use /dev/md5 for that array.

> Conditions 2 and 4. Leaving either empty leaves room for the system to move the
> path on you.

This path change happens only because you are doing something you aren't allowed to do: creating an array on a running system with a valid hostname, without using --homehost and without a homehost setting in the mdadm.conf file, and then expecting assembly by the initramfs without an mdadm.conf homehost entry to tell it what the machine's local homehost name should be, so that it can tell that the array is local and should be assembled using just name, as opposed to foreign and assembled with homehost:name.  In short, this is user error.  You have three options here:

1) Don't assemble these arrays in the initramfs
2) Put an mdadm.conf on the initramfs that includes the homehost (or skip the homehost, but list the arrays themselves on array lines)
3) Figure out what homehost the mdadm on the initramfs comes up with on its own when there is no mdadm.conf file and then put that into the homehost field of the arrays you create instead of allowing mdadm to put your hostname in there

> I agree that using an mdadm.conf has it's benefits - see condition
> 4 and 5. 
> 
> The rest of your responses I won't address as they're covered by the
> conditionals. However, this is interesting.
> 
> 
> >> 3. There's no way to reference /dev/md0 so we should probably just get over it.
> 
> >No, this works just fine, but it is not recommended. You simply have to
> >control the homehost settings first.  For instance, in my example above I
> >created /dev/md5.  If the array looks like it does not belong to this host,
> >then mdadm will assemble it as /dev/md/homehost:5 but it will not allocate
> >/dev/md5 to the array, it will get some high number.  
> 
> >Mdadm will only honor the
> >/dev/md5 number request that the name 5 implies if the array appears to belong
> >to this machine. 
> 
> True but again only on the cli. Reboot and watch the /dev/md5 move to md127.

Again, it's all about controlling the homehost.  It works perfectly, but you have to set the homehost to the *right* value for your initramfs.  If you don't use an mdadm.conf to set the homehost, then you have to either tell the initramfs not to start any arrays or you have to figure out what value it picks for homehost when it can't use the hostname.  Then this will work perfectly and /dev/md5 will always be what you expect it to be.

> Set --homehost and --name then reference it by /dev/md/.. and it works the same
> every time.

Not if you sometime later put that homehost value in mdadm.conf and then put that on the initramfs.  Then suddenly your /dev/md/homehost:name would switch back to being /dev/md/name, and assuming it was the 5 we were talking about above, it would suddenly start getting /dev/md5 as well.

> >If you want this to work without an mdadm.conf file, then you
> >have to know *when* the array will be assembled, make sure your homehost entry
> >matches whatever mdadm is going to see at that time, and then it will create
> >/dev/md5 for you.
> 
> This doesn't work during a reboot. I'm not sure what the system would think my
> homehost is before it boots since it doesn't have an ip address or hostname
> yet. Maybe there's some magical name to put in to get it to do this but in all
> cases I've tried it still does an /dev/md127. 
> 
> However an assemble on the commandline using mdadm -As or -I WILL give me a
> /dev/md5. This is irrelevant however since the last thing I want is a RAID that
> moves when I reboot. Setting both --homehost and --name OR using an mdadm.conf
> solves this as long as you refer to it as /dev/md/...
> 
> 
> Thanks Doug for your time. I've created about 100 RAIDS in the last 24 hrs and
> rebooted a server 50 times and even though you think I'm focusing on the wrong
> stuff in the conditions they actually DO work

No, they don't.  They seem to because you haven't been playing with changing the homehost setting during assembly, only during creation.  Likewise the reboots haven't behaved like you wanted because you haven't controlled the homehost setting during the bootup process (which is best done with an mdadm.conf file, but if you wanted to hack the dracut scripts you could also do it with the --homehost option on the assemble commands).

> and so far your advice has been
> mostly correct on the commandline and mostly incorrect in reference to a system
> during bootup

No, trust me, my advice is spot on for *properly* configured systems.  But setting the homehost on creation is only half the battle, the other half is making sure you match it properly during assembly.  Saying that things don't work or are unreliable when the system is only half configured is inaccurate.

> which is understandable since it doesn't look like any of your
> verification included behavior at boot time. I hope other people who are having
> the same issues can dig through all of this and  find a solution. 
> 
> Thanks, that's all I needed. I can get it to do what I want now. I change my
> opinion of this being a bug, it's not, it's just that the behavior has changed
> from how it used to work. Take note of the conditions I wrote and if in doubt
> set --homehost and --name when creating the RAID or set them in mdadm.conf.

Well, I pray that if other people end up reading this bug, they at least understand what I've been trying to convey here.  Your conditionals are not correct; controlling the homehost is what you need to do in order to make things work properly, and controlling the homehost necessarily involves making sure things are controlled not only at array creation time, but also at array assembly time.

Comment 19 Grant McWilliams 2012-01-31 03:06:23 UTC
Thanks Doug.

It seems that the entire point is that you have to have the homehost in the array superblock match the homehost that mdadm sees when it assembles the array. 

If this assembly is being done at bootup, the system doesn't have its final hostname yet, so it will never match unless you put HOMEHOST in your mdadm.conf and install that in the ramdisk

OR 

you create your arrays with whatever the system thinks its hostname is at boot, i.e. localhost.localdomain.

Are these statements at least true?

Oh, and it's using the FQDN and not hostname -s. This is why it mismatched no matter what I set it to.

Comment 20 Doug Ledford 2012-01-31 04:29:26 UTC
(In reply to comment #19)
> Thanks Doug.
> 
> It seems that the entire point is that you have to have the homehost in the
> array superblock match the homehost that mdadm sees when it assembles the
> array. 

Yes, that is exactly right ;-)

> If this assembly is being done at bootup, the system doesn't have its final
> hostname yet, so it will never match unless you put HOMEHOST in your mdadm.conf
> and install that in the ramdisk 
> 
> OR 
> 
> you create your arrays with whatever the system thinks its hostname is at boot,
> i.e. localhost.localdomain. 

Correct.

> Are these statements at least true?

Yes, these are 100% accurate.

> Oh, and it's using the FQDN and not hostname -s. This is why it mismatched no
> matter what I set it to.

Yeah, I noticed that in my testing, but chose not to muddy the waters further with that data point.  In particular, I was seeing hostname -s as the result, but only because I had an mdadm.conf file with hostname -s on the HOMEHOST line.  When I moved the mdadm.conf to mdadm.conf.bak to test some things with a missing mdadm.conf, I noticed that it got the full host name, not hostname -s.  That was contrary to my expectations: I know the whole homehost:name field is limited to 32 characters total, so using a full host name is needlessly wasteful of that space, and I expected hostname -s, but that wasn't the case as I learned.  Sorry for that mistake.

Comment 21 infinality 2012-03-07 04:29:07 UTC
I didn't read all the comments in this thread; I just skimmed them.  This may be "correct behavior" of mdadm according to upstream, but damn, this is such a cluster.  I upgraded my F16 /boot /dev/md1 device from raid1 to raid5 and "all kinds of fun" ensued with grub2, /dev/md127, fstab, mdadm.conf, grub2.cfg, and various boot disks.  All I wanted was for my /boot partition to be /dev/md1.  That's it.  I've settled on whacking it in favor of simply booting from the /boot directory on my / partition.  InSaNiTy!  Long live btrfs.

Comment 22 Drew from Zhrodague 2012-07-18 18:51:36 UTC
I can imagine that we could give a 'friendly' name to our arrays, but wtf do we need a hostname for the raid? This does not make any sense. This is also a big cluster for us; our scripts depend on the old behavior. We will not be upgrading to the new mdadm.

Comment 23 istr 2013-03-28 16:29:09 UTC
I would like to '+1' comments #21 and #22.
Upstream fails to see that the "correct behaviour" is a major pain / showstopper for the simple task of getting a system up and running without having to script dozens of configuration files and fiddle about with lots of flags during creation / assembly of arrays.

> This
> may be "correct behavior" of mdadm according to upstream, but damn, this is
> such a cluster.  I upgraded my F16 /boot /dev/md1 device from raid1 to raid5
> and "all kinds of fun" ensued with grub2, /dev/md127, fstab, mdadm.conf,
> grub2.cfg, and various boot disks.  All I wanted was for my /boot partition
> to be /dev/md1.  That's it.

Exactly. I don't care about having any hostname at all. Upstream tries to address a single problem (aka "use case" these days) that rarely exists (sticking some raid array disk into the wrong machine w/o having it cleaned BEFOREHAND) and creates dozens of problems for the utmost simple and common requirement (having a system with /root or /boot on /dev/md1, maybe even on a monolithic kernel w/o initramfs, and definitely without some obscure mdadm.conf somewhere).
And yes, I read https://raid.wiki.kernel.org/index.php/RAID_Boot but I disagree:
Kernel autoassembly was a good thing, the need for initramfs is a bad thing.

Doug wrote: "brought on by the fact that you are trying to run without any mdadm.conf file whatsoever. This is bad." This puts upstream's misconception in a nutshell. Imnsvho, the sensible thing would be not to require any configuration at all, and just to allow people to stick with the simple behaviour that worked for ages. The "make your homehost(s) match on build and assemble" requirement is a misconception and a nuisance. It is counter-intuitive, error-prone and superfluous.

Please reopen.
Breaking sane (be it "deprecated") behaviour is a regression, thus a bug.

Comment 24 Doug Ledford 2013-03-28 16:56:02 UTC
(In reply to comment #23)
> Exactly. I don't care about having any hostname at all. Upstream tries to
> address a single problem (aka "use case" these days) that rarely exists
> (sticking some raid array disk into the wrong machine w/o having it cleaned
> BEFOREHAND)

Actually, that's not the use case that homehost addressed.  And the issue isn't that upstream tries to address one use case, it's that they do their best to address *all* of the use cases.

> and creates dozens of problems for the utterly simple and common requirement
> (having a system with /root or /boot on /dev/md1, maybe even on a monolithic
> kernel w/o initramfs, and definitely without some obscure mdadm.conf
> somewhere).
> And yes, I read https://raid.wiki.kernel.org/index.php/RAID_Boot but I
> disagree:
> Kernel autoassembly was a good thing; the need for initramfs is a bad thing.

You are free to disagree.  You are also free to roll your own boot loader and initramfs (or lack thereof).  We, on the other hand, are constrained by needing to work across a variety of situations.

> Doug wrote: "brought on by the fact that you are trying to run without any
> mdadm.conf file whatsoever. This is bad." This puts upstream's misconception
> in a nutshell. Imnsvho, the sensible thing would be not to require any
> configuration at all, and just to allow people to stick with the simple
> behaviour that worked for ages. The "make your homehost(s) match on build
> and assemble" requirement is a misconception and a nuisance. It is
> counter-intuitive, error-prone and superfluous.

Computing paradigms evolve as people find new ways to use their computers.  We must keep up with those shifts (and in some cases we drive those shifts).  You can choose not to keep up if you like, I hear Windows 95 was a great OS.

> Please reopen.
> Breaking sane (be it "deprecated") behaviour is a regression, thus a bug.

I'm sorry, but no.  The code is working as designed.

Comment 25 istr 2013-03-28 17:56:19 UTC
Doug, thanks for your ultra-fast reply. :-)

(In reply to comment #24)
> Actually, that's not the use case that homehost addressed.
Indeed, that's exactly the use case it was intended to address; see Neil Brown's original announcement for mdadm 2.5 of 2006-05-26 (cf.
http://marc.info/?l=linux-raid&m=114862526231187&w=2). And to be honest, I am unable to come up with any other use case for that "feature".

> You are free to disagree.  You are also free to roll your own boot loader
> and initramfs (or lack thereof).
I am happy with grub eventually supporting "native" raid1 boot. Speaking of use cases, I think that this one is the most common. It took years to finally get a boot loader that supports nearly hassle-free raid1 boots (sometimes I am still haunted by lilo nightmares). And it is a pity that mdadm breaks this simplicity just AFTER the first boot stage (when we already have a running kernel!). More often than not you end up with systems that hang WITHIN the initramfs, just because mdadm / the kernel fails or refuses to automatically assemble the RAIDs, which is the primary thing they are supposed to do (having them NOT assemble unwanted stuff is by far a minor issue).

> You can choose not to keep up if you like, I hear Windows 95 was a great OS.
Yeah, and gcc 2.96.3 was the last great compiler... ;-)

Still, it would be great if upstream could decide to make HOMEHOST <ignore> / unconditional auto-assembly the builtin default for mdadm again.
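
For anyone who wants that behaviour per-system today, the opt-out can at least be expressed in the config file. A minimal sketch, assuming /etc/mdadm.conf as the location and a version of mdadm whose mdadm.conf(5) documents the <ignore> value and the AUTO keyword:

HOMEHOST <ignore>
AUTO +all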

Name it a "feature request" if you feel more comfortable with that; I know that people tend to hate calling regressions what they are: bugs.

Comment 26 Doug Ledford 2013-03-28 21:11:42 UTC
(In reply to comment #25)
> Doug, thanks for your ultra-fast reply. :-)
> 
> (In reply to comment #24)
> > Actually, that's not the use case that homehost addressed.
> Indeed that's exactly the use case it was intended to address, see Neil
> Brown's original announcement for mdadm 2.5 at 2006-05-26 (cf. 
> http://marc.info/?l=linux-raid&m=114862526231187&w=2). And to be honest I am
> unable to make up any other use case for that "feature".

That's just his announcement. There were discussions that happened back in the day, before this code was written, and in those discussions the scenarios were more varied. They included things like SAN-attached storage visible to multiple hosts and transplanting hardware. In more recent days we've seen it used to let a person carry an external USB/eSATA drive tower from machine to machine, plug it in, and access the data on it without it ever conflicting with any native md raid arrays on the machine the tower is plugged into (I see that in places that work with lots and lots of video files, post-production facilities, that sort of thing).

> > You are free to disagree.  You are also free to roll your own boot loader
> > and initramfs (or lack thereof).
> I am happy with grub eventually supporting "native" raid1 boot.

The original grub still doesn't support native raid1 boot, so I'm not sure what you are speaking of here.  Now, grub2 *finally* supports native raid1 boots, but that's still a relatively new thing.

> Speaking of use cases, I think that this one is the most common. It took
> years to finally get a boot loader that supports nearly hassle-free raid1
> boots (sometimes I am still haunted by lilo nightmares).

Lilo was fine as long as you were OK with it being in the master boot record.  Or at least the one we shipped was.  Just point it at your raid array as the boot device and it would automatically go to all of the drives in the raid1 array and install itself on the master boot record pointing at the boot partition on that particular drive.

> And it is a pity that mdadm breaks this simplicity just AFTER the first boot
> stage (when we already have a running kernel!). More often than not you end
> up with systems that hang WITHIN the initramfs, just because mdadm / the
> kernel fails or refuses to automatically assemble the RAIDs, which is the
> primary thing they are supposed to do (having them NOT assemble unwanted
> stuff is by far a minor issue).
> 
> > You can choose not to keep up if you like, I hear Windows 95 was a great OS.
> Yeah, and gcc 2.96.3 was the last great compiler... ;-)
> 
> Still, it would be great if upstream could decide to make HOMEHOST <ignore>
> / unconditional auto-assembly the builtin default for mdadm again.

You're more than welcome to lobby for this upstream; I don't even necessarily disagree with you. But I don't want to carry a large difference in behavior from upstream, as that would mean people using our product and other products see two totally differently behaving systems. It's needlessly separatist and fragmenting.

> Name it a "feature request" if you feel more comfortable with that; I know
> that people tend to hate calling regressions what they are: bugs.

Comment 27 istr 2013-03-29 12:37:00 UTC
(In reply to comment #26)

> The original grub still doesn't support native raid1 boot, so I'm not sure
> what you are speaking of here.  Now, grub2 *finally* supports native raid1
> boots, but that's still a relatively new thing.
Yes, my inaccuracy: grub2 of course.


> You're more than welcome to lobby for this upstream; I don't even
> necessarily disagree with you. But I don't want to carry a large difference
> in behavior from upstream, as that would mean people using our product and
> other products see two totally differently behaving systems. It's
> needlessly separatist and fragmenting.
Ok, point taken -- I totally agree with that; diverging from upstream would make things worse, definitely. So I will try to lobby there... :-)

After all, the best solution would be to be able to mark the RAID "auto-assemble" or "don't-auto-assemble" during creation -- in fact it is kind of possible as of now, simply by using 0.9 vs. 1.* superblocks; see the sketch below.
But I agree, it is best to lobby at kernel.org and mdadm for that change.
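
A sketch of that workaround, with illustrative device names and a two-disk raid1: 0.90 superblocks are the ones the in-kernel autodetect understands (it additionally expects the members to be partitions of type 0xfd), at the cost of that format's size and feature limits.

mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1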

Comment 28 Vincent Gerris 2013-08-05 14:39:48 UTC
I am not sure if this comment will make sense, but anyway I would like to thank naren for post #8, because it fixed an issue we had.
Since we are running proxmox with RAID, I am not sure if this is relevant for RedHat and the above issues.
What I do consider a bug is that a machine with an mdadm RAID config would not boot after a kernel update, because the initrd image is not updated.
So +1 for post number 9 :).
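
(For the record, the usual manual fix for that failure mode is to regenerate the initrd whenever mdadm.conf changes or a kernel is installed; on Debian-derived systems such as proxmox that is typically the command below, while Fedora would use dracut -f.)

update-initramfs -u -k all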

