Bug 151652 - anaconda rejects single-member raid 1
Summary: anaconda rejects single-member raid 1
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Alexandre Oliva
QA Contact: Mike McLean
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-03-21 14:25 UTC by Alexandre Oliva
Modified: 2007-11-30 22:11 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-05-13 18:46:36 UTC
Type: ---
Embargoed:



Description Alexandre Oliva 2005-03-21 14:25:31 UTC
Creating a single-member raid 1 array is not only forbidden by anaconda, but
also rejected if created outside anaconda.  This prevented some logical volumes
on my box from being recognized; they had been created that way to enable a
seamless transition to actually-replicated raid 1 arrays at some point.  Such
an arrangement would make just as much sense with raid 0.

It might seem that for logical volumes it doesn't matter so much, since it's
very unlikely that the small raid superblock will overlap with the much-bigger
last extent, but this is more of an issue for /boot on raid1.  Not enabling
someone to create such a /boot device forces manual intervention to resize the
filesystem in order to re-create it as a raid device, which is a bit of a pain.
Since the rejection is gratuitous, might I suggest that it be removed?  I can
provide a patch if you agree with the change.

Version-Release number of selected component (if applicable):
anaconda-10.2.0.28-1
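
For reference, a rough sketch of the kind of single-member array being
described (device names are hypothetical, and --force is assumed to be what
mdadm requires before it will accept a mirror with only one member):

    # create a one-member raid1 on an existing partition; mdadm normally
    # refuses --raid-devices=1 for a mirror unless --force is given
    mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/hda3
    mkfs.ext3 /dev/md0
    # the filesystem can later be mirrored for real without resizing it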

Comment 1 Peter Jones 2005-03-28 19:33:16 UTC
We're just not going to support single-device mirrors.

Comment 2 Alexandre Oliva 2005-03-28 20:11:28 UTC
Care to justify in public why you make such a distinction between the following
two scenarios:

1) box with 2 disks is assigned to be a server.  System manager decides they
want to tolerate 2 disk failures, so one more disk is ordered.  Meanwhile,
system manager goes ahead and installs the box with 2-member raid1 devices, and
starts configuring the server.  When the new disk arrives, the future server is
brought down, the disk is added, partitioned like the 2 existing disks, mdadm
--grow is used to make the 2-member arrays 3-member arrays, and mdadm --add adds
the partitions in the new disk to the running arrays.

2) box with 1 disk is assigned to be a server.  System manager decides they want
to tolerate 1 disk failure, so one more disk is ordered.  Meanwhile, system
manager goes ahead and installs the box with 1-member raid1 devices, and starts
configuring the server.  When the new disk arrives, the future server is brought
down, the disk is added, partitioned like the 1 existing disk, mdadm --grow is
used to make the 1-member arrays 2-member arrays, and mdadm --add adds the
partitions in the new disk to the running arrays.
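
For concreteness, the sequence described in scenario (2) would look roughly
like this (device names are hypothetical; depending on the mdadm version, the
--add may need to come before the --grow so the new member is already
available as a spare when the array is reshaped):

    # declare the array to have two members; it becomes degraded
    mdadm --grow /dev/md0 --raid-devices=2
    # add the matching partition from the new disk; the mirror then resyncs
    mdadm --add /dev/md0 /dev/hdb3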

Why is (1) supported, but (2) isn't?

This is not a made-up scenario, it actually is happening to me right now.  I
can't install-test rawhide on one of my boxes because one of the disks died and
the replacement will take a while to arrive.

Comment 3 Peter Jones 2005-03-28 22:04:26 UTC
The number of people who have ever considered doing #2 is, as far as I can tell, 1.

The number of users who might try to do it accidentally, were the code to
allow it, is significantly larger, and UI telling them that it *might* be ok
to do that would result in many more users doing this by accident, likely
incurring extra load for support and development.

Also, ignoring the completely abstract "why is 2->3 ok but 1->2 bad" argument,
a one-volume RAID mirror simply does not make sense.

So, in short, the latter isn't supported because it doesn't gain us *anything*,
whereas the former has a clear gain.

Comment 4 Alexandre Oliva 2005-05-13 18:33:06 UTC
Another reason to permit single-member raid devices: they can be partitioned. 
If you, for whatever reason, need more than 16 partitions on a SCSI disk, you
can create multiple single-member partitionable raid[01] devices as partitions
in the actual disk, and then create partitions in them.
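
A sketch of what that layering might look like, assuming mdadm's --auto=part
option for creating partitionable arrays (device names are made up):

    # build a partitionable single-member array on top of one partition
    # of the real disk, then partition the md device itself
    mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=1 \
          --force /dev/sda5
    fdisk /dev/md_d0      # partitions appear as /dev/md_d0p1, p2, ...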

Comment 5 Peter Jones 2005-05-13 18:46:36 UTC
If you need to do that, you should be using LVM instead.

Comment 6 Alexandre Oliva 2005-05-13 20:08:45 UTC
LVM would go atop the smallish partitions.  The reason is that having smaller
bits of disk to manage enables you to move stuff around, e.g. to change raid
levels, without requiring twice the disk space, which happens to be the exact
reason to want more than 15 partitions in the first place.
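
A minimal sketch of that layering, with LVM physical volumes on top of the
small md partitions (all names are made up):

    # each small md partition becomes an LVM physical volume
    pvcreate /dev/md_d0p1 /dev/md_d0p2
    vgcreate vg0 /dev/md_d0p1 /dev/md_d0p2
    lvcreate -n data -l 40%VG vg0
    # extents can later be shuffled between the small pieces with pvmove,
    # e.g. while converting some of them to a real mirror
    pvmove /dev/md_d0p1 /dev/md_d0p2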

