Description of Problem:
The installer bails out with "TypeError: number coercion failed" if software RAID mirroring is configured.

Version-Release number of selected component (if applicable):
Red Hat 7.2 (a similar ks.cfg file works just fine in Red Hat 7.1)

How Reproducible:
Will attach ks.cfg file and anacdump.txt.
Created attachment 35401 [details] Kickstart File
Can you capture the entire debug message when the install crashes? That would be helpful.
Created attachment 35661 [details] Working ks.cfg, Broken ks.cfg plus anacdump.txt
Added a .tar.gz file which contains the working and broken Kickstart files plus an anacdump.txt archive. It looks like the problem is specific to mirrored swap partitions.

Difference between the working and broken ks.cfg files:

magenta[7.2]$ diff -c test2.cfg test3.cfg
*** test2.cfg   Tue Oct 30 11:54:19 2001
--- test3.cfg   Tue Oct 30 11:54:19 2001
***************
*** 28,37 ****
  # Split the four disks in two parts: OS and spool space
  part raid.01 --size 2047 --ondisk sda
  part raid.02 --size 2047 --ondisk sdb
! part swap --size 2047 --ondisk sdb

  # Create two mirrored pairs from this disk space
  raid / --level 1 --device md0 raid.01 raid.02

  # Authentication mechanisms
  auth --enablemd5 --useshadow
--- 28,39 ----
  # Split the four disks in two parts: OS and spool space
  part raid.01 --size 2047 --ondisk sda
  part raid.02 --size 2047 --ondisk sdb
! part raid.03 --size 2047 --ondisk sda
! part raid.04 --size 2047 --ondisk sdb

  # Create two mirrored pairs from this disk space
  raid / --level 1 --device md0 raid.01 raid.02
+ raid swap --level 1 --device md2 raid.03 raid.04

  # Authentication mechanisms
  auth --enablemd5 --useshadow

anacdump.txt includes:

Traceback (innermost last):
  File "/usr/bin/anaconda", line 620, in ?
    intf.run(id, dispatch, configFileData)
  File "/usr/lib/anaconda/text.py", line 364, in run
    (step, args) = dispatch.currentStep()
  File "/usr/lib/anaconda/dispatch.py", line 243, in currentStep
    self.gotoNext()
  File "/usr/lib/anaconda/dispatch.py", line 143, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/dispatch.py", line 208, in moveStep
    rc = apply(func, self.bindArgs(args))
  File "/usr/lib/anaconda/autopart.py", line 899, in doAutoPartition
    (errors, warnings) = sanityCheckAllRequests(partitions, diskset, 1)
  File "/usr/lib/anaconda/partitioning.py", line 595, in sanityCheckAllRequests
    swapSize = swapSize + requestSize(request, diskset)
TypeError: number coercion failed

This looks consistent with some problem involving mirrored swap, which worked just fine in Red Hat 7.1.
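For what it's worth, the failing line sums requestSize(request, diskset) into swapSize, and that TypeError is what Python raises when one operand of + is not a number, most plausibly because requestSize() returned None for the mirrored swap request. ("number coercion failed" is the Python 1.x wording; modern Python phrases it as "unsupported operand type(s)".) A minimal sketch of the suspected pattern, with hypothetical names that are not anaconda's actual code:

```python
def request_size(request):
    # Hypothetical stand-in for anaconda's requestSize(): returns the
    # request's size in MB, or None when no size could be computed --
    # the suspected case for the mirrored (RAID-1) swap request.
    return request.get("size")

def total_swap_size(requests):
    # Mirrors the failing loop in sanityCheckAllRequests(): summing swap
    # sizes raises a TypeError as soon as request_size() returns None.
    swap_size = 0
    for req in requests:
        if req["fstype"] == "swap":
            swap_size = swap_size + request_size(req)  # int + None -> TypeError
    return swap_size

requests = [
    {"fstype": "ext2", "size": 2047},
    {"fstype": "swap", "size": None},  # RAID swap whose size was never filled in
]
try:
    total_swap_size(requests)
except TypeError as exc:
    print("TypeError:", exc)
```

If that guess is right, the fix would be to make requestSize() compute a proper size for RAID-1 swap requests (or to guard the summation against a None return).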
Hope this helps. Let me know if you need any further information.
I will try to duplicate this behavior when I get to work tomorrow. Can you tell me why you would want the RAID swap device to be md2 instead of md1? You've got:

# Create two mirrored pairs from this disk space
raid / --level 1 --device md0 raid.01 raid.02
raid swap --level 1 --device md2 raid.03 raid.04

Why not:

# Create two mirrored pairs from this disk space
raid / --level 1 --device md0 raid.01 raid.02
raid swap --level 1 --device md1 raid.03 raid.04

instead? Does that make any difference at all?
> I will try to duplicate this behavior when I get to work tomorrow.

Thanks.

> Can you tell me why you would want the RAID swap device to be md2 instead
> of md1?

I just cut the original ks.cfg that I sent on Monday (which had four separate md devices) down to a minimal test case. I can confirm that I see the same behaviour using md0 and md1, and also if I generate two mirrored pairs using four partitions on a single disk.
Ok, I've got a reproducible test case. I'm investigating further.
I've committed a fix for this in CVS.
I have made an updates disk available at ftp://people.redhat.com/bfox/7.2-raid-swap-update.img. You need to dd this file to a floppy, then boot the 7.2 install with 'linux updates ks=floppy'. When prompted, insert the updates disk that you made. This works for me in my testing...does it work for you?
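For anyone unfamiliar with the dd step, a sketch of writing the image out (the floppy device name /dev/fd0 and the 1440k block size are assumptions about a typical first floppy drive; here the target is a plain file so the sketch can run anywhere):

```shell
set -eu
# Image name taken from the FTP URL above; the downloaded file is simulated
# with a placeholder so this sketch runs without network access.
IMG=7.2-raid-swap-update.img
printf 'placeholder image contents' > "$IMG"

# On real hardware the target would be the floppy device, e.g.:
#   dd if="$IMG" of=/dev/fd0 bs=1440k
# Write to a plain file instead so the sketch is runnable anywhere:
dd if="$IMG" of=floppy.img bs=1440k
cmp "$IMG" floppy.img && echo "image written and verified"
```

After writing the real image to a floppy, boot with 'linux updates ks=floppy' as described above and insert the disk when prompted.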
*** Bug 55949 has been marked as a duplicate of this bug. ***
*** Bug 56827 has been marked as a duplicate of this bug. ***
*** Bug 58322 has been marked as a duplicate of this bug. ***
*** Bug 57964 has been marked as a duplicate of this bug. ***