Bug 752276

Summary: Anaconda fails to create RAID1 (and 10) device
Product: Fedora
Component: anaconda
Version: 16
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Reporter: Szymon Gruszczynski <sz.gruszczynski>
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: anaconda-maint-list, jonathan, lance, maci, redhat, rjones, samuel-rhbugs, vanmeeuwen+fedora
Doc Type: Bug Fix
Last Closed: 2013-02-13 08:35:54 UTC

Description Szymon Gruszczynski 2011-11-09 01:46:18 UTC
Description of problem:
I am trying to install Fedora 16 (from the installation DVD; checksums OK) on a two-HDD setup (de facto three disks, but the third one is not for Linux). I would also like to use LVM: I created a small (200 MB) partition for /boot (RAID1 mirror) and created an ext2 filesystem on it.
Then I wanted to create rather big partitions (on both hard drives) for /dev/md1, also a mirror (RAID10, so I could add a disk in the future).
I set their size to 935000 MB and created the partitions without any problem, but when I wanted to create the RAID device (RAID1, as I wrote), I got the following error:
"Only RAID0 arrays can contain growable members". It happens even if I do not want to use LVM, and it also happens when I choose RAID1.

Version-Release number of selected component (if applicable):

Fedora 16, installation DVD, 64-bit version

How reproducible:
always

Steps to Reproduce:
1. Start the Fedora 16 installer (anaconda) and choose custom partitioning.
2. Create RAID member partitions on both disks, sized to fill the available space.
3. Attempt to create a RAID1 (or RAID10) device from them; see the kickstart sketch below.
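
A kickstart equivalent of these GUI steps might look like the following sketch. This is an illustration only: the report describes the interactive installer, the sda/sdb device names and raid.NN labels are assumptions, and per comment 1 the trigger is a "fill"/growable size option on the member partitions.

clearpart --all --initlabel

# ~200 MB /boot members (RAID1 mirror), as described in the report
part raid.01 --size=200 --asprimary --ondrive=sda
part raid.02 --size=200 --asprimary --ondrive=sdb

# Large members intended to fill both disks; marking them growable
# is what produces "Only RAID0 arrays can contain growable members"
part raid.03 --size=100 --grow --ondrive=sda
part raid.04 --size=100 --grow --ondrive=sdb

raid /boot --device=md0 --level=1 raid.01 raid.02
raid pv.01 --device=md1 --level=1 raid.03 raid.04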
  
Actual results:

An error ("Only RAID0 arrays can contain growable members") stops the installation.

Expected results:

Anaconda should create the RAID device from such partitions without a problem.

Additional info:

This bug is new to F16: the same setup worked without problems (on the same hardware) on F14/F15.
I have found some information about a similar problem here:

https://lists.fedoraproject.org/pipermail/test/2011-October/103996.html

Unfortunately, as I wrote earlier, setting a specific size does not fix it (in my case).
Hardware: 3 HDDs (Samsung HD103SJ /F3/) and a P67 motherboard (Gigabyte P67 UD4 B3).

Comment 1 David Lehman 2012-03-20 13:44:59 UTC
The error message states fairly clearly that the problem is you are activating one of the "Fill ..." options for your RAID partitions, which is only allowed if you are creating a RAID0 array. Since you are not creating a RAID0 array, you must specify the sizes of your member partitions as a FIXED SIZE. If this does not work for some reason, please provide the new error message you are presented with after making this adjustment.
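
In kickstart terms, the rule comment 1 describes would read roughly as follows (a sketch with hypothetical raid.NN labels and device names; 935000 MB is the fixed size from the original report):

# Rejected for RAID1/RAID10: growable ("fill") members
part raid.03 --size=100 --grow --ondrive=sda
part raid.04 --size=100 --grow --ondrive=sdb

# Accepted: members with a fixed size and no --grow
part raid.03 --size=935000 --ondrive=sda
part raid.04 --size=935000 --ondrive=sdb

raid pv.01 --device=md1 --level=1 raid.03 raid.04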

Comment 2 Richard W.M. Jones 2012-03-20 13:52:13 UTC
The problem is two-fold.

(1) The error message is unclear.  It requires an internet
search leading to this bug to find out what's going on.

(2) The test is done in the wrong place.  My two partitions
*are* the same size, and would remain that way unless I added
another partition later.  It should do the test after all
user selection has finished and just before committing the
changes to disk.

Comment 3 Schlomo Schapiro 2012-03-28 09:04:59 UTC
I think the problem is larger: RHEL5 and RHEL6 allow this. What about RHEL7?

We have an IMHO fairly common setup: Our servers have 2 hard disks of equal size and we use MD software raid. So far (RHEL5, RHEL6) our kickstart file looks like this:


clearpart --all --initlabel

# Boot
part raid.01 --size=250 --asprimary --ondrive=sda
part raid.02 --size=250 --asprimary --ondrive=sdb

# Rest for System
part raid.03 --size=100 --grow --ondrive=sda
part raid.04 --size=100 --grow --ondrive=sdb

# Assembling
raid /boot --device=md0 --level=1 raid.01 raid.02
raid pv.system --device=md1 --level=1 raid.03 raid.04

# LVM
volgroup vg_system --pesize=32768 pv.system
logvol swap --size=8192 --name=swap --vgname=vg_system
logvol /var --size=5120 --name=var --vgname=vg_system
logvol / --size=10240 --name=root --vgname=vg_system
logvol /data --size=100 --grow --name=data --vgname=vg_system

Now we have to add some code to *calculate* the disk and partition sizes for no apparent benefit.
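
One way to do that calculation is sketched below, under stated assumptions: it uses the documented %pre/%include kickstart mechanism, assumes two identically sized disks at sda/sdb, and the 16 MiB safety margin is arbitrary. The raid/volgroup/logvol lines stay as above.

%include /tmp/part-include

%pre
#!/bin/sh
# /sys/block/sda/size is the disk size in 512-byte sectors,
# so dividing by 2048 yields MiB. Reserve 250 MiB for /boot
# plus a small margin for metadata and rounding.
SECTORS=$(cat /sys/block/sda/size)
DISK_MB=$((SECTORS / 2048))
REST_MB=$((DISK_MB - 250 - 16))

cat > /tmp/part-include <<EOF
part raid.03 --size=$REST_MB --ondrive=sda
part raid.04 --size=$REST_MB --ondrive=sdb
EOF
%end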

Could you maybe reevaluate this issue to support this (IMHO common) scenario? Or is there a new way to tell anaconda to build this kind of layout?

Also, how would we support users migrating from RHEL6 to a future RHEL7? It seems to me that they would face the same issue as well.

Comment 4 David Lehman 2012-03-28 13:19:06 UTC
(In reply to comment #2)
> The problem is two-fold.
> 
> (1) The error message is unclear.  It requires an internet
> search leading to this bug to find out what's going on.

Feel free to propose clearer language if you believe you have a strong grasp of what's wrong with the current message.

> (2) The test is done in the wrong place.  My two partitions
> *are* the same size, and would remain that way unless I added
> another partition later.  It should do the test after all
> user selection has finished and just before committing the
> changes to disk.

That might help you with the specific configuration you tried this time. But what about the user who has created an LVM configuration on top of his/her RAID and then must go back, take the whole thing apart, and rebuild it because automatic partitioning happened to create different-sized member partitions? They will be angrily filing bugs saying we should not have waited so long to warn them. (In the current anaconda code you cannot modify partitions if they are already part of a higher-level device.)

Comment 5 Samuel Sieb 2012-05-01 00:06:51 UTC
I just ran into this installing F17.  I would suggest that it would be clearer if the message said that at least one of the selected partitions was created with one of the fill options.  Or that one of the selected partitions was not created with a fixed size.  I had no idea what a growable member was until I used Google to find this bug.

Comment 6 Marcel Wysocki 2012-11-12 10:59:04 UTC
I ran into this error message when I tried to create an LVM PV on a RAID device on top of a RAID1.
This was with the F17 anaconda.

Comment 7 Fedora End Of Life 2013-01-16 10:11:16 UTC
This message is a reminder that Fedora 16 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 16. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '16'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 16's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 16 is end of life. If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, you are encouraged to click on 
"Clone This Bug" and open it against that version of Fedora.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 8 Fedora End Of Life 2013-02-13 08:36:01 UTC
Fedora 16 changed to end-of-life (EOL) status on 2013-02-12. Fedora 16 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.