Description of problem: Trying to set up a RAID10 array in a kickstart install with:

  raid / --fstype=ext3 --level=RAID10 --device=md2 raid.2 raid.3 raid.5 raid.6

but anaconda complains "RAID device requested without raid level". I've attached an untested attempt to patch anaconda to support RAID10. Is there somewhere I could put the updated scripts in order to test it out?
Created attachment 148888 [details] patch to add raid10 support
I suppose we also need to update pykickstart with something like:

--- pykickstart-0.98/pykickstart/commands/raid.py.raid10	2007-02-19 11:54:57.000000000 -0700
+++ pykickstart-0.98/pykickstart/commands/raid.py	2007-02-27 15:52:13.000000000 -0700
@@ -255,6 +255,8 @@
             parser.values.ensure_value(option.dest, "RAID5")
         elif value == "RAID6" or value == "6":
             parser.values.ensure_value(option.dest, "RAID6")
+        elif value == "RAID10" or value == "10":
+            parser.values.ensure_value(option.dest, "RAID10")
 
     op = KSOptionParser(lineno=self.lineno)
     op.add_option("--bytes-per-inode", dest="bytesPerInode", action="store",

Maybe you need to add an FC7_Raid version instead. Still haven't figured out how to get updates.img to work...
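As background on what the hunk above does: both the "RAID10" spelling and the bare "10" are normalized to the canonical "RAID10" constant, which is what the rest of anaconda matches against. A minimal standalone sketch of that normalization (the function name normalize_raid_level is mine, not pykickstart's actual API):

```python
# Sketch of the raid-level normalization the pykickstart hunk extends:
# both "RAID10" and plain "10" must map to the canonical "RAID10"
# constant, otherwise anaconda reports "RAID device requested without
# raid level". Function name is hypothetical.
def normalize_raid_level(value):
    levels = {}
    for n in ("0", "1", "5", "6", "10"):
        canonical = "RAID" + n
        levels[canonical] = canonical  # "RAID10" -> "RAID10"
        levels[n] = canonical          # "10"     -> "RAID10"
    try:
        return levels[value]
    except KeyError:
        raise ValueError("unknown RAID level: %r" % value)

print(normalize_raid_level("10"))      # RAID10
print(normalize_raid_level("RAID10"))  # RAID10
```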
http://fedoraproject.org/wiki/Anaconda/Updates has some information on how to create/use an updates.img. But the quick and dirty way now is:

  find <list of files> | cpio -H crc -o | gzip -9 > updates.img

Patch looks good to me, though.
Thanks for the patch. I've applied this to HEAD and will be in the next build of anaconda. I took care of the pykickstart portion as well.
Sorry, typo in the patch. This should fix it:

--- ./anaconda-11.2.0.28/raid.py.raid10	2007-02-28 09:14:48.000000000 -0700
+++ ./anaconda-11.2.0.28/raid.py	2007-03-02 09:36:15.000000000 -0700
@@ -189,7 +189,7 @@
     if isRaid0(raidlevel):
         return 0
     elif (isRaid1(raidlevel) or isRaid5(raidlevel) or isRaid6(raidlevel) or
-          isRaid10(raidLevel):
+          isRaid10(raidLevel)):
         return max(0, nummembers - get_raid_min_members(raidlevel))
     else:
         raise ValueError, "invalid raidlevel in get_raid_max_spares"
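For context, the logic this function implements reduces to: RAID0 gets no spares (striping has no redundancy to rebuild onto), and every other level can use whatever members exceed its minimum as hot spares. A minimal standalone sketch, assuming the conventional minimum-member counts per level (anaconda keeps its own table in get_raid_min_members, which may differ):

```python
# Assumed minimum member counts per RAID level (conventional values,
# not necessarily anaconda's exact table).
MIN_MEMBERS = {"RAID0": 2, "RAID1": 2, "RAID5": 3, "RAID6": 4, "RAID10": 4}

def get_raid_max_spares(raidlevel, nummembers):
    if raidlevel == "RAID0":
        return 0  # striping has no redundancy, so spares are useless
    if raidlevel not in MIN_MEMBERS:
        raise ValueError("invalid raidlevel in get_raid_max_spares")
    # Any member beyond the level's minimum can serve as a hot spare.
    return max(0, nummembers - MIN_MEMBERS[raidlevel])

print(get_raid_max_spares("RAID10", 6))  # 2
print(get_raid_max_spares("RAID0", 6))   # 0
```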
Tried updating stage2.img with a fixed raid.py, but am now seeing: Error pulling second part of kickstart config: Unknown Error!
Yeah we just caught the typo. David fixed a similar one elsewhere in the file. The error you're now seeing is some %ksappend change I committed yesterday but obviously didn't test too thoroughly. I'll look into fixing that.
Looks like a new pykickstart with the RAID10 changes has not been released yet. I updated that, but I'm still seeing the error, so there may be other issues too.
Thanks for reminding me. I'll roll out a new one today.
I see the new FC7_Raid with RAID10 in pykickstart, but kickstart.py in anaconda is still using FC5_Raid.
Fixed. Thanks.
Note that it should be F7_Raid, not FC7_Raid like I wrote earlier.
Something else must still not be right. I'm trying with the following partition setup:

part raid.1 --size=128 --ondisk=sda
part raid.2 --size=6000 --grow --ondisk=sda
part raid.3 --size=6000 --grow --ondisk=sdc
part raid.4 --size=128 --ondisk=sdb
part raid.5 --size=6000 --grow --ondisk=sdb
part raid.6 --size=6000 --grow --ondisk=sdd
raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.1 raid.4
raid pv.1 --level=RAID10 --device=md1 raid.2 raid.3 raid.5 raid.6
volgroup rootvg pv.1

but no partitions are being created. It fails on:

12:25:47 INFO : moving (1) to step partitionobjinit
12:25:47 INFO : no initiator set
12:25:47 INFO : no /tmp/fcpconfig; not configuring zfcp
12:25:48 INFO : moving (1) to step autopartitionexecute
12:25:50 INFO : moving (1) to step partitiondone
12:25:50 INFO : moving (1) to step bootloadersetup
12:25:50 WARNING : MBR not suitable as boot device; installing to partition
12:25:50 INFO : moving (1) to step networkdevicecheck
12:25:50 INFO : moving (1) to step reposetup
12:25:50 INFO : added repository extras with with source URL http://www.cora.nwra.com/fedora/extras/development/i386/
12:25:50 INFO : added repository CoRA with with source URL http://www.cora.nwra.com/fedora/CoRPMS/development/i386/
12:25:55 INFO : moving (1) to step basepkgsel
12:25:58 DEBUG : no package matching gv
12:26:22 DEBUG : no package matching gv
12:26:26 DEBUG : no such package isdn4k-utils
12:26:26 INFO : moving (1) to step postselection
12:26:26 DEBUG : no kernel-smp package
12:26:26 INFO : selected kernel package for kernel
12:30:33 INFO : moving (1) to step install
12:30:33 INFO : moving (1) to step enablefilesystems
12:30:34 INFO : going to run: ['mdadm', '--create', '/dev/md1', '--run', '--chunk=256', '--level=0', '--raid-devices=4', '/dev/sda2', '/dev/sdb2', '/dev/sdc1', '/dev/sdd1']
12:30:34 CRITICAL: Traceback (most recent call first):
  File "/usr/lib/anaconda/lvm.py", line 277, in pvcreate
    raise PVCreateError(node)
  File "/usr/lib/anaconda/fsset.py", line 2263, in setupDevice
    lvm.pvcreate(node)
  File "/usr/lib/anaconda/fsset.py", line 1632, in createLogicalVolumes
    entry.device.setupDevice(chroot)
  File "/usr/lib/anaconda/packages.py", line 149, in turnOnFilesystems
    anaconda.id.fsset.createLogicalVolumes(anaconda.rootPath)
  File "/usr/lib/anaconda/dispatch.py", line 203, in moveStep
    rc = stepFunc(self.anaconda)
  File "/usr/lib/anaconda/dispatch.py", line 126, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/text.py", line 602, in run
    anaconda.dispatch.gotoNext()
  File "/usr/bin/anaconda", line 956, in <module>
    anaconda.intf.run(anaconda)
PVCreateError: pvcreate of pv "/dev/md1" failed

I see mdadm complaints on one of the VTs about the partitions (/dev/sda2, etc.) not existing. fdisk shows empty tables for all of the disks.
That's a new (and weird) traceback. I'm reassigning this to anaconda-maint-list because it's likely someone else has an idea of what's going on here.
I'm still seeing the new partition problem with anaconda-11.2.0.37-1. I've also tested with RAID1 instead of RAID10 and get the same result, so it's not a RAID10 issue. Would it be best to file a new bug?
Since we're seeing a different problem than the original bug report, yeah it might be best to close this one as RAWHIDE and open a new one with the new issue. Thanks.
Created attachment 150427 [details] Fixes for current raid10-related code

This fixes a couple of issues in the current RAID code wrt. raid10. It also adds the raid10 module to the images so the raid10 personality is actually available at install time. The patch is against current HEAD and untested, although similar changes were made to FC6 anaconda here and everything looks OK.
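One reason raid10 needs its own handling in the RAID size code: unlike RAID1 (mirror of everything) or RAID5/6 (capacity minus parity), md's default raid10 "near" layout stores a fixed number of copies of each block across the members. A hedged sketch of that capacity rule, assuming the standard near layout with 2 copies (this is illustrative arithmetic, not anaconda's actual code):

```python
def raid10_usable_size(member_sizes, copies=2):
    # md's default raid10 "near" layout keeps `copies` replicas of each
    # block, so usable capacity is (members / copies) times the smallest
    # member (the array is limited by its smallest device).
    n = len(member_sizes)
    if n < copies:
        raise ValueError("need at least as many members as copies")
    return min(member_sizes) * n // copies

# Four equal members with 2 copies yield half the raw capacity.
print(raid10_usable_size([100, 100, 100, 100]))  # 200
```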
Thanks for the patch. I've applied it to HEAD and it will be in the next build of anaconda. I'm going to put this bug in MODIFIED for now; there's no point in moving it back and forth out of CLOSED if more issues regarding basic support keep coming up. If this works for you in the next build, let me know and we'll close it out.
Was able to successfully do a RAID10 install today. I did notice on the VT5 screen, though, that it looks like it attempted to create the array again after creating the filesystems. From anaconda.log:

13:09:18 INFO : moving (1) to step install
13:09:18 INFO : moving (1) to step enablefilesystems
13:09:22 INFO : going to run: ['mdadm', '--create', '/dev/md1', '--run', '--chunk=256', '--level=10', '--raid-devices=4', '/dev/sda2', '/dev/sdb2', '/dev/sdc1', '/dev/sdd1']
13:09:26 INFO : formatting swap as swap
13:09:27 INFO : formatting / as ext3
13:09:27 INFO : Format command: ['mke2fs', '/dev/rootvg/root', '-i', '4096', '-j']
13:09:28 INFO : formatting /boot as ext3
13:09:28 INFO : going to run: ['mdadm', '--create', '/dev/md0', '--run', '--chunk=256', '--level=1', '--raid-devices=2', '/dev/sda1', '/dev/sdb1']
13:09:28 INFO : Format command: ['mke2fs', '/dev/md0', '-i', '4096', '-j']
13:09:30 INFO : formatting /tftpboot as ext3
13:09:30 INFO : Format command: ['mke2fs', '/dev/rootvg/tftpboot', '-i', '4096', '-j']
13:09:31 INFO : formatting /export/local as ext3
13:09:31 INFO : Format command: ['mke2fs', '/dev/rootvg/local', '-i', '4096', '-j']
13:10:23 INFO : formatting /usr as ext3
13:10:23 INFO : Format command: ['mke2fs', '/dev/rootvg/usr', '-i', '4096', '-j']
13:10:27 INFO : formatting /var as ext3
13:10:27 INFO : Format command: ['mke2fs', '/dev/rootvg/var', '-i', '4096', '-j']
13:10:29 INFO : formatting /var/spool/mail as ext3
13:10:29 INFO : Format command: ['mke2fs', '/dev/rootvg/mail', '-i', '4096', '-j']
13:10:33 DEBUG : error reading swap label on /dev/rootvg: [Errno 21] Is a directory
13:10:33 DEBUG : error reading xfs label on /dev/rootvg: [Errno 21] Is a directory
13:10:33 DEBUG : error reading jfs label on /dev/rootvg: [Errno 21] Is a directory
13:10:33 DEBUG : error reading reiserfs label on /dev/rootvg: [Errno 21] Is a directory
13:10:33 INFO : going to run: ['mdadm', '--create', '/dev/md1', '--run', '--chunk=256', '--level=10', '--raid-devices=4', '/dev/sda2', '/dev/sdb2', '/dev/sdc1', '/dev/sdd1']
13:10:33 INFO : trying to mount rootvg/root on /
13:10:33 INFO : set SELinux context for mountpoint / to system_u:object_r:root_t:s0
13:10:33 DEBUG : isys.py:mount()- going to mount /dev/rootvg/root on /mnt/sysimage
13:10:33 INFO : trying to mount sys on /sys