Red Hat Bugzilla – Bug 129306
Should support install to degraded RAID-1
Last modified: 2016-04-08 16:50:11 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (compatible; Konqueror/3.2; Linux) (KHTML, like Gecko)
Description of problem:
Ideally Anaconda would allow installing to a degraded RAID-1. This means having a RAID-1 consisting of one active volume and one volume labelled failed-disk in the raidtab file. At some later time a second disk can be installed and added to the RAID set.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Create a RAID disk and observe that anaconda insists on having two or more RAID partitions before creating a RAID.
This isn't going to happen. It provides a false sense of security and
isn't all that useful while being horrendous from a UI perspective.
From what I remember, mdadm also provides a way to go from a single disk to a mirrored RAID.
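For reference, a minimal sketch of the mdadm approach mentioned above. The device names here are hypothetical examples, not taken from this bug report; verify them against your own disks before running anything.

```shell
# Create a RAID-1 array with one real member and the second slot
# deliberately left empty ("missing"), i.e. a degraded mirror.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# Later, once the second disk is available, add its partition;
# mdadm resynchronizes the mirror in the background.
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild progress.
cat /proc/mdstat
```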
sheesh - when I _want_ an OS which thinks it knows better than me what I want, I know where Redmond is
I'd really have liked to install onto a degraded RAID tonight - the
local store had just one 160G drive in stock so I couldn't buy two but
can probably pick one up mail order in the next week.
Yes, it is possible to create the RAID partition later, but it's fiddly - you have to make a new, empty MD device on the second disk, copy the data to it, reboot, and add the old device. It's even more fiddly for the root device.
Also the box I'm installing onto has pretty limited space - two hard
drives is the max. Thus I wanted, initially, to have the new (160G)
drive with degraded RAIDs and the old (80G) drive with the user data
that I want to copy over in the machine. Once the copy was done I
would have swapped over to a second 160G drive to form the mirror.
I really don't think this should have been closed as WONTFIX.
*** Bug 815985 has been marked as a duplicate of this bug. ***
*** Bug 906417 has been marked as a duplicate of this bug. ***
*** Bug 1212036 has been marked as a duplicate of this bug. ***
*** Bug 1325061 has been marked as a duplicate of this bug. ***
For all of you who are irritated that anaconda won't let you do this, if you just tried a little harder you could get what you want:
1) Do the normal install to your non-RAID system.
Then, at whatever point you get your second drive...
2) Create the degraded RAID mirror on the new drive.
3) Copy/move data from the non-RAID drive to the degraded RAID.
4) Add the old drive to the RAID.
5) Reconfigure your /etc/fstab, initramfs, and /etc/crypttab as needed.
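In practice the workaround above might look roughly like this. This is a sketch under assumptions not stated in the thread: hypothetical device names (existing install on /dev/sda1, new drive /dev/sdb), ext4 as the filesystem, and the bootloader reinstallation step omitted entirely.

```shell
# Create a degraded RAID-1 on the new drive, with one slot "missing".
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.ext4 /dev/md0

# Copy the data from the non-RAID drive onto the degraded array,
# skipping pseudo-filesystems and the mount point itself.
mount /dev/md0 /mnt/newroot
rsync -aAXH --exclude=/proc --exclude=/sys --exclude=/dev \
      --exclude=/mnt / /mnt/newroot

# After booting from the array, add the old drive's partition to it.
mdadm --manage /dev/md0 --add /dev/sda1

# Then point /etc/fstab at /dev/md0 and regenerate the initramfs
# (on Fedora, e.g.: dracut --force).
```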
The whole problem is that I want to do the install to the degraded RAID on drive 0, copy the data from drive 1 onto drive 0, and then overwrite the data on drive 1 by synchronizing the RAID.
Here's the scenario: I have machines with Windows Server in RAID-1 that I want to migrate to Linux. I want to boot Linux, configure drive 0 for degraded dmraid, and install Linux on drive 0. Once Linux is running, I mount the Windows NTFS partitions on drive 1 manually, copy over the data files from drive 1 to drive 0, and then -- and only then -- copy the partition table from drive 0 to drive 1 and then add the new partitions on drive 1 to un-degrade the RAID-1.
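A hedged sketch of the final re-mirroring step described above, assuming Linux md software RAID managed with mdadm (the comment says "dmraid", but the workflow matches md RAID). Device names and array layout are hypothetical: drive 0 is /dev/sda carrying the degraded arrays, drive 1 is /dev/sdb, the old Windows disk.

```shell
# Once the Windows data has been copied off drive 1, replicate
# drive 0's partition table onto drive 1 (MBR disks).
sfdisk -d /dev/sda | sfdisk /dev/sdb
# For GPT disks, sgdisk can do the same, then randomize the GUIDs:
#   sgdisk -R=/dev/sdb /dev/sda && sgdisk -G /dev/sdb

# Add each new partition on drive 1 to its array to un-degrade
# the RAID-1; mdadm resynchronizes in the background.
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
```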
There are only two drives. There are only two controller ports. It costs me money each time I open the machines. External USB drives would be slow and impractical.
While your proposed solution as I understand it would work if Linux is installed on a non-RAID volume (for example, Linux on /dev/sda1 and then make /dev/sda2 part of a degraded RAID volume /dev/md0), this leaves no realistic way to then get the installation volume protected by RAID. One could manually copy the contents of /dev/sda1 to /dev/sdb1, but that would not keep them in sync across operating system upgrades and so forth. The goal is not merely protection of data on /dev/md0 (that is, /dev/sda2 and /dev/sdb2), but also having the machine able to boot from either drive in case of failure of the other.
The fact is that this is a perfectly reasonable scenario with good justification to do it in a way that might seem strange, and moreover it works with the Debian installer. I realize it's unusual, and I agree there are real dangers for people who don't know what they're doing, but the whole idea of denying administrators a manual override in Linux bothers me. When someone selects manual partitioning, they are taking responsibility.
I would completely understand if the installer displayed huge red warning signs that this is not recommended, and I could understand making the user click through two or three "Do you really want to do this?" and "We're not kidding, you probably don't want to do this" warnings. But, ultimately, I think there should be a manual override.
I've personally had to install Debian instead of Fedora because of this, and although I like Debian the purpose is to run an application that is supported on Fedora. The end result is that the machine boots into Debian and starts up a Fedora guest under KVM virtualization. While that is a perfectly good solution, it's a pretty substantial workaround for a Fedora installer limitation.