Red Hat Bugzilla – Bug 1056643
Can mdadm report when a RAID mirror is deliberately broken?
Last modified: 2014-12-09 09:27:49 EST
There is a lot of history to this request, so I will try to summarize:
People run RAID mirrors for /. When they go to do a fresh install or upgrade, they want to break the mirror and perform the install or upgrade on only one half of the mirror. The idea being that the other half serves as a backup they can revert to. This makes sense.
However, in anaconda we prohibit use of RAID devices that we detect as degraded. The position has always been that the device(s) you are installing to need to be healthy and a degraded RAID needs attention.
But, if there was a way to determine if a user had deliberately broken a RAID mirror for the purposes of installation or upgrade, we could support this use case.
Maybe this is already supported. We check the /sys/.../md/degraded file for the RAID device. If it is '1', we treat the device as degraded and prevent the user from using it. If there was a way for deliberately broken RAID mirrors to report a different value here, it would make supporting this use case very easy.
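The check described above can be sketched as follows. This is a hypothetical helper, not anaconda code; the function name and the fake sysfs tree are illustrative, and the sysfs root is parameterized only so the logic can be exercised without a real array:

```shell
# Sketch of the degraded check: anaconda treats an md device as degraded
# when /sys/block/<dev>/md/degraded reads '1'.
md_is_degraded() {
    # $1: md device name (e.g. md0); $2: sysfs root (default /sys)
    flag="${2:-/sys}/block/$1/md/degraded"
    [ -r "$flag" ] && [ "$(cat "$flag")" = "1" ]
}

# Demonstrate against a fake sysfs tree so no real array is needed.
fake=$(mktemp -d)
mkdir -p "$fake/block/md0/md" "$fake/block/md1/md"
echo 1 > "$fake/block/md0/md/degraded"   # array missing a member
echo 0 > "$fake/block/md1/md/degraded"   # healthy (or one-device) array

md_is_degraded md0 "$fake" && echo "md0: degraded"
md_is_degraded md1 "$fake" || echo "md1: healthy"
```

As the later comments note, an array deliberately created with `--raid-devices=1` reports '0' here, which is what makes that route installer-friendly.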
+++ This bug was initially created as a clone of Bug #188314 +++
Description of problem:
When installing Fedora and doing custom partitioning, you are not allowed to set
up a raid 1 partition with only one drive, despite this being supported after
the OS is installed.
It can be convenient in a home user situation to break a mirror and do an
install on one half of the broken mirror while using the other half as a way to
do a quick fall back or as a convenient source for user data.
Once the system is installed and any data recovered from the other half of the
mirror, it can easily be put back in the mirror using mdadm.
While I believe it is possible to turn an ext3 partition into a software raid
partition with an ext3 partition on top of it, this doesn't seem to be well
documented and if you make a mistake, problems may not show up until much later
when recovery could be more expensive than during an initial install.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Do a custom partition during a fedora core install.
2. Try to set up a software raid partition with only one underlying partition.
You cannot even try to create a raid partition if only one partition of type
software raid is defined. If you do have more than one such partition, you
still aren't allowed to only have one partition in the array even though this
will work for raid 1.
When only one partition is in an array, raid 1 should be allowed.
--- Additional comment from Jeremy Katz on 2006-04-10 14:14:35 EDT ---
People using RAID1 are doing so with an expectation of some reliability which
you don't get with a single drive. We're not going to do this.
--- Additional comment from Bruno Wolff III on 2006-04-11 09:24:54 EDT ---
I am not sure you understand the request. I wasn't suggesting that people run
raid 1 with one drive on an ongoing basis, just that they be able to use it
during an install. This provides a nice way to do a fresh install where the old
data is still conveniently available on a hard drive. Otherwise you need to do a
restore later from a backup medium or do extra copies of the drive contents.
People doing this would already be doing custom layouts, so it isn't something
people are likely to do by mistake. This mode is supported by md, so I wouldn't
expect it to be a big change.
--- Additional comment from Bob Gustafson on 2006-04-25 17:39:59 EDT ---
I agree with the submitter.
An install is a non-critical (and failure-prone, for many other reasons) operation.
Why not make it easy on folks and allow a one-disk raid install?
The second disk can be mounted, stuff can be copied off, it can be repartitioned
and assembled into the running 2 disk RAID 1 pair. Saves on the need for extra
disk storage (don't need two blank disks at one time).
--- Additional comment from Bruno Wolff III on 2006-10-28 16:45:18 EDT ---
Now that FC6 is here and I wanted to do some test installs, I played with this
some more. And I noticed that not only does anaconda not let you set up degraded
raid 1 arrays, it won't even use ones I created previously while running FC5.
The raid 1 arrays that weren't degraded were usable as is.
I still think being able to set up raid 1 arrays with only one drive is useful,
since the created file system uses up as much of the space as possible. There
isn't an easy way to do this without starting with a software raid partition.
It is one thing to keep users from accidentally setting up arrays with only 1
active drive, but another to go out of your way to keep people who want to use
temporarily degraded arrays from doing so.
--- Additional comment from Bob Gustafson on 2006-10-28 17:20:05 EDT ---
As before, I agree with Bruno.
I have two raided systems, one I have already upgraded to FC6. The other system
is currently in a degraded state. Thanks to your note, I won't struggle with
installing FC6 on this second system until either I get another drive, or FC6
becomes installable on a degraded Raid 1.
--- Additional comment from Jeremy Katz on 2006-10-30 14:05:27 EST ---
If the array is degraded, it's not complete. Trying to guess that the user is
_intending_ for that to be correct is insane. We need to know that the setup
at install time is accurate for the post-install system, so that we can make
sure things are set up correctly and working, and avoid surprises later on.
--- Additional comment from Bob Gustafson on 2006-10-30 14:44:48 EST ---
The drive in my degraded one disk Raid 1 running FC5 is only a year and a half
old. The disk that failed 4 months ago was over 5 years old (both SCSI). I
figure I have a little while before things could go bad. It would be nice to
enjoy (and help debug) FC6 during this period.
It would be nice to have a command line switch for Anaconda, so folks who "know
what they are doing" can override the fact that Anaconda insists on two disks
when installing on a Raid 1 array.
I wonder if the Oracle distribution allows installation on a degraded Raid?
--- Additional comment from Bruno Wolff III on 2006-10-30 16:26:20 EST ---
mdadm lets you build arrays that way, so the install shouldn't need to know that
to provide a working system. Later the admin can use mdadm to add in a missing
or additional drive to the mirror. It certainly seems reasonable to warn about
the unusual configuration.
In my test I used an array with a missing drive. On my next test install I'll try
using an array created with one device instead of being degraded. I suspect that
won't work either, but I'll test it anyway and report back what happened, just
for the record.
--- Additional comment from Bruno Wolff III on 2006-10-30 23:09:44 EST ---
I tested doing an install using a raid 1 array with only one drive in it.
Anaconda recognized the array (unlike the degraded array with only one
functioning drive), but refused to use it in the install complaining about raid
1 arrays having a minimum of 2 drives.
My current thinking is that customizing Anaconda to change the test for the
minimum number of drives probably isn't hard. The Unity project is doing regular
respins, so they probably have a fairly systematic way of building ISOs using
binary rpms and some metadata. Unfortunately they don't obviously publish any
of that info, only the ISOs. I sent them an email message asking if they make
this information available and where I might find it. If they do, I can probably
make my own custom respins that do what I want. Since I have other reasons I
might want my own respins, I think it is worth looking into this. If I actually
have some success, I'll report back here.
--- Additional comment from Bruno Wolff III on 2006-11-12 13:06:31 EST ---
This is a script I found using a Google search that will do respins. It has
instructions embedded in the script.
--- Additional comment from Bruno Wolff III on 2006-11-12 13:09:44 EST ---
Note this is just FYI for observers.
This spec file will use a patch file, which I will upload shortly, that can be
used to modify anaconda to use and build arrays with one drive for RAID 1.
--- Additional comment from Bruno Wolff III on 2006-11-12 13:13:24 EST ---
Note this FYI for observers, I don't expect you to apply this.
To use this, install the anaconda src rpm, replace the spec file with the
posted spec file, and add this patch file to the SOURCES directory. Then run
rpmbuild to make a new source rpm and the anaconda rpms.
--- Additional comment from Bruno Wolff III on 2006-11-12 13:32:16 EST ---
This is a wrap up for people who want to do this on their own.
You are going to need FC6 installed somewhere and have enough space to do a
respin. The patch I made doesn't allow you to use degraded arrays. You have to
use a RAID 1 array that is defined to have one active device. After you break
the mirror you want to use the mdadm Grow command to reduce the number of
active members to 1. You should also create a new raid device on the former
member(s) so that you have consistent array definitions.
If you only have one box you might change the partition you want to install
on to not be a raid partition for the first go around. Once you have the respin
DVD (or CDs) you can change the partition back to a raid partition.
To do the respin you will need some packages from Extras. They are listed in the
There is a preexisting bug (215231) with text-based installs and software raid,
so you should stick to graphical installs for now.
Once you have your first FC6 installed you can modify anaconda and install it
using the spec file and patch file attached to this bug report. Then you can use
the respin script to build new install media which will let you use raid 1 arrays
with one member (both existing and new). When doing the respin, be sure to copy
the new anaconda rpms over the ones from the original install media.
This issue turns out to be a good example of why I use free software. If I had
disagreed about a feature for proprietary software, I would be stuck. But with
Fedora I have the option to make my own custom version (even though there is
--- Additional comment from Peter Jones on 2007-07-09 11:24:27 EDT ---
*** Bug 247119 has been marked as a duplicate of this bug. ***
--- Additional comment from Gerry Reno on 2007-07-09 11:50:12 EDT ---
I do not understand the WONTFIX reasoning behind closing this bug. As you can
see in the bug that I opened 247119, the whole point is to provide greater
flexibility and convenience for the user. As far as a user having an
expectation regarding RAID-1, you present a warning message that tells them by
installing with only one drive in the array that there is no redundancy until
they add a second drive. And at least now they can *easily* add that second
drive at any time. Their system is as reliable as any other single drive
installation and a lot more flexible and convenient.
Please reconsider and REOPEN this bug.
--- Additional comment from Bob Gustafson on 2007-07-09 17:44:26 EDT ---
Yes, here is another vote for a REOPEN. See also bug 189971
Bruno Wolff has a respin recipe - haven't tried it, but should be useful. I'm
not sure anyone will see this bug with a WONTFIX label though.
Fedora7 is even more unfriendly to RAID and LVM users. Kernel crash. See bug
237415 and others linked in.
--- Additional comment from Gerry Reno on 2007-07-09 17:58:56 EDT ---
I understand that they want to prevent grandma from shooting herself in the
foot when she installs Fedora using Anaconda but this greatly constrains
experienced administrators who know what they want and what is possible.
Perhaps there just needs to be an "Expert" mode in Anaconda that lets
experienced administrators have more control.
Please reconsider and REOPEN this bug.
--- Additional comment from Bruno Wolff III on 2008-12-21 10:09:02 EST ---
FYI. I needed this feature again in order to conveniently change the layout of my disk partitions and switch to using encrypted partitions, while using a fresh install to handle the details.
There had been some discussion about allowing extra space to be left on a partition, without actually setting up raid 1 on the device, to make copying easier later. However, after reading through the software raid documentation, I see there are different possible sizes for the area used to support raid, and even different locations the metadata can be saved in. So just reserving a fixed size at the end of the partition is still going to be error prone.
--- Additional comment from Roberto Ragusa on 2009-07-11 05:53:30 EDT ---
Please REOPEN this bug.
This is imposing unacceptable limitations to expert administrators.
Even if installing/upgrading on a degraded or single-disk RAID-1
is considered dangerous (but I don't see how), there has to be
a way for an expert admin to force the desired behavior.
rm has -f, rpm has --force, other dangerous stuff (dd) never
Please, if you really are concerned about inexperienced users,
give us a "are you a grand-ma?" "yes"-"no" option in anaconda
and let us do our stuff with no babysitter "help".
--- Additional comment from Susi Lehtola on 2009-07-11 17:21:30 EDT ---
I agree that this should be a feature of anaconda, even if it could only be triggered by a magic keyword in boot and if it displayed a bunch of warnings in partitioning.
I have a system that currently has two identical hard drives, and I would like to make a RAID-1 array without first transferring my data elsewhere. The easiest way to do this would be to (re)install using just one drive, restore my data onto it, and then extend the RAID to the other drive as well.
--- Additional comment from Bjoern Gerhart on 2009-10-15 10:56:24 EDT ---
Thanks for the patch Bruno! I applied it to the anaconda source code for my CentOS 5.3 installation scenario. It works as expected ;-)
In fact I'd also be very happy if the issue were REOPENed! I need this functionality for a customer project involving a migration - the customer also wants to be able to roll back to the older OS if something goes wrong during installation.
--- Additional comment from H. Peter Anvin on 2011-10-06 21:22:50 EDT ---
For this to be forbidden is idiotic.
It forbids not just one, but two real-life use scenarios, both of which are extremely useful:
1. Break a mirror for reinstall/upgrade, if successful re-merge otherwise easy rollback is readily available;
2. "I may want to RAID this drive some day."
I used to work around Anaconda brain damage by just setting this up manually, but found today to my dismay that it is almost impossible to do correctly if you also want encrypted drives.
--- Additional comment from H. Peter Anvin on 2011-10-06 21:23:44 EDT ---
Braindamage still present in Fedora 15 at least.
--- Additional comment from Susi Lehtola on 2011-10-07 04:03:08 EDT ---
I agree: installing to a single drive should be possible, for the reasons presented in comment #22.
The installer could just issue a warning in this case if only one drive is used. (No safety benefit, slower operation.)
--- Additional comment from on 2013-12-31 15:20:31 EST ---
(In reply to H. Peter Anvin from comment #22)
> For this to be forbidden is idiotic.
> It forbids not just one, but two real-life use scenarios, both of which are
> extremely useful:
> 1. Break a mirror for reinstall/upgrade, if successful re-merge otherwise
> easy rollback is readily available;
I totally agree. I was going to install Fedora this way but now have to look for another, much more cumbersome solution. Please reopen this bug! A simple warning message is enough; there is no need to dumb down the installer.
--- Additional comment from paul on 2014-01-22 01:21:01 EST ---
I was able to create raid 1 (with LVM also) with 2 partitions on a single drive at Fedora 15, for /boot, / etc., with the custom layout option.
Now I find it is impossible to do so in Fedora 20!!!
Fedora 20 does not even allow using the partition / raid / LVM layout that I had set up on the hard disk drive before I ran the installation DVD - why?
--- Additional comment from Bruno Wolff III on 2014-01-22 10:28:32 EST ---
That is not the issue being discussed in this bug. You might want to start with asking these questions on the user list to make sure there isn't some way to do what you want, that you have missed. If the limitation(s) are confirmed, then you really should open a different bug, as this one is about a limitation that isn't affecting you.
Lately when I deliberately break mirrors I set the number of devices to 1 (using mdadm Grow) for the one I want to keep running. It is also a good idea to change the uuid of the mirror that you don't want to use. Otherwise which one you get when booting isn't always going to be the same. (Though it is usually stable I have seen things switch.)
I'm actually running some raid 1 arrays with one device right now and the degraded file for them contains '0'.
I often physically pull the drive I want preserved. That makes it a bit hard for mdadm to know what my intent is, although I guess I could go in and set the number of drives to 1; it just means another boot.
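The bookkeeping Bruno describes (shrinking the survivor to one active device, and giving the broken-off half a new identity so assembly is deterministic) could be sketched as a dry run. The device names are illustrative, and the commands are printed rather than executed so they can be reviewed before being run for real:

```shell
# Dry-run sketch: print the mdadm commands for breaking a mirror on purpose.
# /dev/md0 and /dev/sdb1 are example names, not anything from this system.
plan_break_mirror() {
    keep=$1    # array we keep running, e.g. /dev/md0
    spare=$2   # former member partition, e.g. /dev/sdb1
    # Reducing a RAID1 to a single active device requires --force.
    echo "mdadm --grow $keep --raid-devices=1 --force"
    # Re-create the broken-off half as its own one-device array so the two
    # halves no longer share a UUID and boot-time assembly stays stable.
    echo "mdadm --zero-superblock $spare"
    echo "mdadm --create /dev/md1 --level=1 --raid-devices=1 --force $spare"
}

plan_break_mirror /dev/md0 /dev/sdb1
```

Zeroing and re-creating the broken-off half is one way to get the "different uuid" Bruno recommends; run the printed commands only after checking them against your actual device names.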
There is a separate use case for a degenerate RAID-1 (with only one drive): it allows a mirror to be constructed later.
I agree supporting install onto half an existing RAID1 array for testing makes
a lot of sense. This should be supported better than it currently is.
However, mdadm cannot and should not try to determine whether a RAID1 was
deliberately broken. There are a million ways an array can be broken, and
just as many ways for such a guess to go wrong. In addition, it leaves open
the authority problem later, when trying to re-assemble the array: which
drive is authoritative and should be resynced to the other?
In addition, as Bruno points out, ending up with two arrays having the same
UUID is a bit of an 'issue'.
The correct way to do this is (using sd[ab]1 as example drives, and md98 as
the old RAID1 array):
1) Fail /dev/sda1 and remove it from the old array:
mdadm /dev/md98 --fail /dev/sda1 --remove /dev/sda1
2) zero the superblock on /dev/sda1
mdadm --zero-superblock /dev/sda1
3) Create a new RAID1 array with just one drive:
mdadm -C /dev/md99 --raid-devices 1 --force --level=1 /dev/sda1
4) Perform the install onto the new array (/dev/md99)
5) Once determined the new install is good, stop the old array and zero the
superblock of the second drive:
mdadm -S /dev/md98
mdadm --zero-superblock /dev/sdb1
6) Reshape /dev/md99 into a two drive array, and add /dev/sdb1 to the array:
mdadm -G /dev/md99 --raid-devices 2 --add /dev/sdb1
Note that if you create the array as shown in 3), /sys/block/md99/md/degraded
will read 0, and anaconda should be happy to install onto it.
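For convenience, the six steps above can be gathered into one dry-run sketch. The commands below are only printed, not executed, so the whole sequence can be reviewed first; the device names follow the example in this comment:

```shell
# Dry-run of the six-step procedure above (sda1/sdb1, md98 old array,
# md99 new array). Echo the commands instead of running them.
plan_reinstall() {
    # 1-3: break off sda1 and make it a fresh one-device array
    echo "mdadm /dev/md98 --fail /dev/sda1 --remove /dev/sda1"
    echo "mdadm --zero-superblock /dev/sda1"
    echo "mdadm --create /dev/md99 --level=1 --raid-devices=1 --force /dev/sda1"
    # 4: (install onto /dev/md99 and verify the new system boots)
    # 5: retire the old array once the new install is known good
    echo "mdadm --stop /dev/md98"
    echo "mdadm --zero-superblock /dev/sdb1"
    # 6: grow the new array back into a two-device mirror
    echo "mdadm --grow /dev/md99 --raid-devices=2 --add /dev/sdb1"
}

plan_reinstall
```

The reshape in step 6 kicks off a resync onto /dev/sdb1; the array is usable during the resync but not redundant until it completes.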
Basically Anaconda needs to allow for creating single-drive RAID1 arrays,
and be taught to remove a drive from an existing array (if it doesn't know
how to do so already).
I don't see the need for making any changes to mdadm or the drivers/md stack
over this, everything needed is in place. This needs to be handled in
Anaconda - reassigning to anaconda.
Long term it might be worth having some graphical sysadmin tool that can
manipulate drives and handle 5) + 6) above, but at least we should also
document how the admin performs these steps post install.