Red Hat Bugzilla – Bug 232936
Partitions not being created in raid install
Last modified: 2007-11-30 17:11:59 EST
I'm trying with the following partition setup:
part raid.1 --size=128 --ondisk=sda
part raid.2 --size=6000 --grow --ondisk=sda
part raid.3 --size=6000 --grow --ondisk=sdc
part raid.4 --size=128 --ondisk=sdb
part raid.5 --size=6000 --grow --ondisk=sdb
part raid.6 --size=6000 --grow --ondisk=sdd
raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.1 raid.4
raid pv.1 --level=RAID1 --device=md1 raid.2 raid.3 raid.5 raid.6
volgroup rootvg pv.1
but no partitions are being created. It fails on:
12:25:47 INFO : moving (1) to step partitionobjinit
12:25:47 INFO : no initiator set
12:25:47 INFO : no /tmp/fcpconfig; not configuring zfcp
12:25:48 INFO : moving (1) to step autopartitionexecute
12:25:50 INFO : moving (1) to step partitiondone
12:25:50 INFO : moving (1) to step bootloadersetup
12:25:50 WARNING : MBR not suitable as boot device; installing to partition
12:25:50 INFO : moving (1) to step networkdevicecheck
12:25:50 INFO : moving (1) to step reposetup
12:25:50 INFO : added repository extras with source URL
12:25:50 INFO : added repository CoRA with source URL
12:25:55 INFO : moving (1) to step basepkgsel
12:25:58 DEBUG : no package matching gv
12:26:22 DEBUG : no package matching gv
12:26:26 DEBUG : no such package isdn4k-utils
12:26:26 INFO : moving (1) to step postselection
12:26:26 DEBUG : no kernel-smp package
12:26:26 INFO : selected kernel package for kernel
12:30:33 INFO : moving (1) to step install
12:30:33 INFO : moving (1) to step enablefilesystems
12:30:34 INFO : going to run: ['mdadm', '--create', '/dev/md1', '--run',
'--chunk=256', '--level=0', '--raid-devices=4', '/dev/sda2', '/dev/sdb2',
12:30:34 CRITICAL: Traceback (most recent call first):
File "/usr/lib/anaconda/lvm.py", line 277, in pvcreate
File "/usr/lib/anaconda/fsset.py", line 2263, in setupDevice
File "/usr/lib/anaconda/fsset.py", line 1632, in createLogicalVolumes
File "/usr/lib/anaconda/packages.py", line 149, in turnOnFilesystems
File "/usr/lib/anaconda/dispatch.py", line 203, in moveStep
rc = stepFunc(self.anaconda)
File "/usr/lib/anaconda/dispatch.py", line 126, in gotoNext
File "/usr/lib/anaconda/text.py", line 602, in run
File "/usr/bin/anaconda", line 956, in <module>
PVCreateError: pvcreate of pv "/dev/md1" failed
I see mdadm complaints on one of the VTs about the partitions (/dev/sda2, etc.)
not existing, and fdisk shows empty partition tables for all of the disks.
I've also tried with RAID1 instead of RAID10 with the same result.
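One way to confirm what the reporter describes, from a shell on another VT during the install, is to check whether the kernel knows about the member partitions at all before mdadm runs. This is a hedged diagnostic sketch; the device names sda2–sdd2 are taken from the report above:

```shell
# Check /proc/partitions for the RAID member partitions mdadm is about
# to assemble. A partition missing here means the kernel has no such
# block device, which would explain the mdadm "does not exist" errors.
has_partition() {
    # grep -qw: quiet, whole-word match against the kernel's device list
    grep -qw "$1" /proc/partitions
}

for part in sda2 sdb2 sdc2 sdd2; do
    if has_partition "$part"; then
        echo "$part: present"
    else
        echo "$part: missing (kernel has no such partition)"
    fi
done
```

If all four show up as missing while the kickstart's part lines should have created them, the failure is in the partitioning phase rather than in mdadm itself.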
Created attachment 150379
Still happens with 126.96.36.199. This strikes me as a pretty serious failure. Can
anyone else reproduce?
I can confirm this. I'm going to try a kickstart without LVM to see if it's
specifically LVM causing the problem.
I've seen errors on T2 when creating basic configs on a single IDE drive with no
LVM anywhere in the config (1 GB swap, the rest being / as ext3). I tried
dd'ing the drive first to clean it off, but it still failed. I spoke briefly
with jeremy on #et-mgmt and he seemed to imply LVM was at fault. This happens
whether or not kickstart is used.
To clarify: no LVM was involved in my config, but the LVM code still choked on
my setup while looking for partitions. I don't remember the exact details, as I
no longer have the machine, but I can help recreate the problem if needed.
*** Bug 232502 has been marked as a duplicate of this bug. ***
The problem here is that we're executing the clearpart code at least twice, and
one of those times is after we've committed the partitions to disk. If you use
existing partitions or dd in your %pre script to clear out the disk label, you
should be able to work around this problem. Of course, it's still a bug.
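The dd workaround mentioned above might look like the following in a kickstart %pre section. This is only a sketch, not a tested recipe: the disk names sda–sdd are taken from the report, and zeroing the first sector destroys the existing disk label and partition table, so it must only be run on disks you intend to wipe.

```
%pre
# Workaround sketch: zero the start of each disk so no stale disk label
# survives into anaconda's partitioning steps. DESTRUCTIVE -- wipes the
# partition table on every listed disk.
for disk in sda sdb sdc sdd; do
    dd if=/dev/zero of=/dev/$disk bs=512 count=1
done
```

With no pre-existing label on the disks, the redundant clearpart pass has nothing to re-clear, which is why this sidesteps the bug until the fix lands.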
I'm not sure why this is only showing up as a bug now, and it's a bit of a
tangle of code to work through. I have a preliminary patch worked up for
Committed a potential fix for this issue, though the risk of regressions when
editing this stuff is always pretty high. I'll put it in MODIFIED for now.
Please test the next build of anaconda and if this issue is solved, we'll close
it out. Thanks.