Red Hat Bugzilla – Bug 151652
anaconda rejects single-member raid 1
Last modified: 2007-11-30 17:11:02 EST
Not only does anaconda forbid creating a single-member raid 1 array, it also
rejects such arrays when they were created outside anaconda. This prevented some
logical volumes on my box from being recognized; they had been created that way
to enable a seamless transition to actually-replicated raid 1 arrays at some
point. The same arrangement would make just as much sense with raid 0.
It might seem that for logical volumes it doesn't matter so much, since it's
very unlikely that the small raid superblock will overlap with the much bigger
last extent, but this is more of an issue for /boot on raid1. Not letting
someone create such a /boot device forces manual intervention later: the
filesystem has to be resized so it can be re-created as a raid device, which is
a bit of a pain.
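For reference, creating a single-member raid1 for /boot by hand can be sketched roughly as below. Device names are examples only; mdadm treats a one-drive array as unusual, so it requires --force.

```shell
# Create a one-member raid1 on an existing partition (device names are
# placeholders for illustration).
mdadm --create /dev/md0 --level=1 --force --raid-devices=1 /dev/sda1

# Put the /boot filesystem on the md device rather than the raw partition,
# so a second mirror member can be added later without resizing anything.
mke2fs -j /dev/md0
```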
Since the rejection is gratuitous, might I suggest that it be removed? I can
provide a patch if you agree with the change.
We're just not going to support single-device mirrors.
Care to justify in public why you make such a distinction between the following
two scenarios?
1) box with 2 disks is assigned to be a server. System manager decides they
want to tolerate 2 disk failures, so one more disk is ordered. Meanwhile,
system manager goes ahead and installs the box with 2-member raid1 devices, and
starts configuring the server. When the new disk arrives, the future server is
brought down, the disk is added, partitioned like the 2 existing disks, mdadm
--grow is used to make the 2-member arrays 3-member arrays, and mdadm --add adds
the partitions on the new disk to the running arrays.
2) box with 1 disk is assigned to be a server. System manager decides they want
to tolerate 1 disk failure, so one more disk is ordered. Meanwhile, system
manager goes ahead and installs the box with 1-member raid1 devices, and starts
configuring the server. When the new disk arrives, the future server is brought
down, the disk is added, partitioned like the 1 existing disk, mdadm --grow is
used to make the 1-member arrays 2-member arrays, and mdadm --add adds the
partitions in the new disk to the running arrays.
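The grow step in scenario (2) can be sketched as follows, with placeholder device names, following the order described above:

```shell
# Turn the 1-member raid1 into a (temporarily degraded) 2-member array.
mdadm --grow /dev/md0 --raid-devices=2

# Add the matching partition from the newly arrived disk; the kernel then
# resyncs the new member.
mdadm --add /dev/md0 /dev/sdb1

# Resync progress is visible in /proc/mdstat.
cat /proc/mdstat
```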
Why is (1) supported, but (2) isn't?
This is not a made-up scenario, it actually is happening to me right now. I
can't install-test rawhide on one of my boxes because one of the disks died and
the replacement will take a while to arrive.
The number of people who have ever considered doing #2 is, as far as I can tell, 1.
The number of users who might try to do it accidentally, were the code to allow
it, is significantly larger, and UI telling them that it *might* be ok to do
that will result in many more users doing this by accident, likely incurring
extra load for support and development.
Also, ignoring the completely abstract "why is 2->3 ok but 1->2 bad" argument, a
one-volume RAID mirror simply does not make sense.
So, in short, the latter isn't supported because it doesn't gain us *anything*,
whereas the former has a clear gain.
Another reason to permit single-member raid devices: they can be partitioned.
If you, for whatever reason, need more than 15 partitions on a SCSI disk, you
can create multiple single-member partitionable raid devices as partitions
in the actual disk, and then create partitions in them.
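A rough sketch of that arrangement, again with example device names (mdadm's --auto=part creates a partitionable array):

```shell
# Create a partitionable single-member raid1 over one partition of the
# real disk (device names are illustrative).
mdadm --create /dev/md_d0 --auto=part --level=1 --force --raid-devices=1 /dev/sda5

# Partition the md device itself; the partitions show up as
# /dev/md_d0p1, /dev/md_d0p2, and so on.
fdisk /dev/md_d0
```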
If you need to do that, you should be using LVM instead.
LVM would go atop the smallish partitions. The reason is that having smaller
bits of disk to manage lets you move stuff around to, e.g., change raid levels
without requiring twice the disk space. That happens to be the exact reason to
want more than 15 partitions in the first place.