Bug 488071 - anaconda raid.py has wrong minimum member count for md raid10
Status: CLOSED WONTFIX
Product: Fedora
Classification: Fedora
Component: anaconda
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
Reported: 2009-03-02 09:25 EST by Tuomo Soini
Modified: 2009-04-12 03:16 EDT

Last Closed: 2009-03-19 17:11:27 EDT
Description Tuomo Soini 2009-03-02 09:25:45 EST
Description of problem:

get_raid_min_members lists a minimum member count of 4 for md raid10. This is incorrect. The correct minimum for raid10 is 2.
Comment 1 Tuomo Soini 2009-03-05 09:20:32 EST
There is one special reason why 2 would be better: the md raid10 module can do a raid1E setup with 3 disks...
Comment 2 Chris Lumens 2009-03-09 16:16:24 EDT
What is the rationale for requiring a minimum of two members?
Comment 3 Tuomo Soini 2009-03-10 05:17:27 EDT
Because 2 is the real minimum member count for md raid10. I agree that 3 would arguably make more sense, because a raid1E configuration is more useful than plain raid1 run through the raid10 module, especially since the different md raid10 layouts are not supported by anaconda. With the far layout there is a real reason to run a 2-disk setup using raid10: better read performance.

I just don't see a reason to limit it to 3 when 2 is the real minimum number of raid members for md raid10.
Comment 4 Chris Lumens 2009-03-18 10:54:45 EDT
It is my understanding that two devices in a RAID10 results in a degraded RAID array, which anaconda explicitly does not support.  We have no plans to add support for this, either.
Comment 5 Tuomo Soini 2009-03-18 12:27:18 EDT
A 2-disk raid10 setup doesn't have any degraded disks. Try to create one with mdadm if you don't believe me. The same goes for a 3-disk raid10 setup (aka raid1E).
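For anyone who wants to try that claim out: here is one way to do it on throwaway loop devices instead of real disks (a sketch only; the device names, image sizes, and loop numbers are placeholders, and the commands need root and the md modules loaded):

```shell
# Create two small loopback "disks" as stand-ins for real drives.
truncate -s 64M disk0.img disk1.img
losetup /dev/loop0 disk0.img
losetup /dev/loop1 disk1.img

# Build a 2-device RAID10 using the far-2 layout.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/loop0 /dev/loop1

# Inspect the result: the array state should show "clean",
# not "degraded", and both devices should be active sync.
mdadm --detail /dev/md0
```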
Comment 6 Chris Lumens 2009-03-19 17:11:27 EDT
The entire point of RAID10 is to create a stripe across mirrored sets of disks.  If you're only using two disks, then you are losing half the purpose of even using RAID10 and might as well be using another RAID level.  This is not the kind of configuration that we're looking to support in anaconda.
Comment 7 Rolf Fokkens 2009-04-12 03:16:06 EDT
So far I've been using RAID1, and have been happy with it. Until recently, that is. I did some performance tests and noticed that RAID1 has the same performance as a single disk on sequential I/O. iostat shows that only one disk at a time is accessed.

I also have a three disk RAID5 array in another PC. And that PC has excellent performance on sequential I/O: 3x the speed of a single disk!

The solution appears to be to use RAID10.f2 (far layout, 2 copies of each block). If I'm well informed, RAID10.f2 exists exactly for this: a performance gain. And of course it also supports the other goal of RAID10: striping across mirrored sets of disks.

In short: I think RAID10 on 2 disks is kept out of anaconda for the wrong reasons:

* RAID10.f2 on 2 disks is NOT a degraded array
* RAID10.f2 on 2 disks delivers a better performance than RAID1
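To make the first point concrete, here is a small model I sketched of how "far 2" places chunks (illustrative only, not code from md or anaconda): the first copy of each chunk is striped normally across the front half of the disks, and the second copy is striped with a one-disk rotation across the back half, so every chunk lands on two different disks.

```python
def far2_layout(num_chunks, num_disks=2):
    """Model md RAID10 'far 2' chunk placement (illustrative sketch)."""
    rows = (num_chunks + num_disks - 1) // num_disks
    # disks[d] lists the chunk numbers stored on disk d:
    # front half holds first copies, back half holds rotated second copies.
    disks = [[None] * (2 * rows) for _ in range(num_disks)]
    for chunk in range(num_chunks):
        row, disk = divmod(chunk, num_disks)
        disks[disk][row] = chunk                           # first copy
        disks[(disk + 1) % num_disks][rows + row] = chunk  # rotated copy
    return disks

layout = far2_layout(6)
# Every chunk is held by two *different* disks, so losing either disk
# still leaves one full copy of the data: the array is not degraded.
for chunk in range(6):
    holders = {d for d, disk in enumerate(layout) if chunk in disk}
    assert len(holders) == 2
print(layout)  # [[0, 2, 4, 1, 3, 5], [1, 3, 5, 0, 2, 4]]
```

The front halves alone form a plain RAID0 stripe, which is where the sequential read gain over RAID1 comes from.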

So Chris, could you please reconsider?
