From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225

Description of problem:
When installing using kickstart, I've attempted the following (snipped out of my ks.cfg):
--------------
part /boot --fstype=ext3 --size=96 --asprimary
part pv.01 --size=1 --asprimary --grow
volgroup vg01 pv.01
logvol / --vgname=vg01 --size=256 --name=root --fstype=ext3
logvol swap --vgname=vg01 --size=1536 --name=swap --fstype=swap
logvol /usr --vgname=vg01 --size=2048 --name=usr --fstype=ext3
logvol /opt --vgname=vg01 --size=4096 --name=opt --fstype=ext3
logvol /var --vgname=vg01 --size=1024 --name=var --fstype=ext3
logvol /tmp --vgname=vg01 --size=512 --name=tmp --fstype=ext3
logvol /home --vgname=vg01 --size=512 --name=home --fstype=ext3
---------------

Precisely this configuration worked perfectly in Red Hat 8.0, so I know I'm not doing anything that hasn't worked in the past. So what happens in Red Hat 9? The installer fails because all the logical volumes are given the identical minor number in the device files created by anaconda to support the desired volume group. For example, the following is from the second virtual console after the installation fails:

# ls -la /dev/vg01
dr-xr-xr-x    2 root     0        1024 Apr 15 17:53 .
drwxr-xr-x    6 root     0        1024 Apr 15 17:53 ..
crw-r-----    1 root     6     109,  0 Apr 15 17:53 group
brw-rw----    1 root     0      58,  0 Apr 15 17:53 home
brw-rw----    1 root     0      58,  0 Apr 15 17:53 opt
brw-rw----    1 root     0      58,  0 Apr 15 17:53 root
brw-rw----    1 root     0      58,  0 Apr 15 17:53 swap
brw-rw----    1 root     0      58,  0 Apr 15 17:53 tmp
brw-rw----    1 root     0      58,  0 Apr 15 17:53 usr
brw-rw----    1 root     0      58,  0 Apr 15 17:53 var

Notice that each logical volume has the proper major number (58) but a duplicate minor number (0). I haven't dug into anaconda directly yet to see precisely where the error is.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a logical volume in a kickstart installation.
2. Cry. ;)

Actual Results:
Virtual console 5 shows: "/dev/vg01/root is mounted; will not make a filesystem here!". The reason for this is that swap was already activated out of this volume group, and since all the logical volumes' device nodes collide, the installer believes the device is already in use (which, as far as the kernel can tell, it is).

Expected Results:
It should have just worked. ;)

Additional info:
The description contains sufficient info.
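For anyone triaging: a quick way to confirm this broken state is to check whether the number of LV device nodes matches the number of distinct minor numbers. A small sketch, run here against an embedded sample of the listing above (the only assumption is the `ls -la` field layout; in practice you would pipe in `ls -la /dev/vg01` from VC2):

```shell
# Sketch: detect duplicate minor numbers among LV block-device nodes.
# Embedded sample taken from the ls -la /dev/vg01 output in the report.
listing='brw-rw----    1 root     0      58,  0 Apr 15 17:53 home
brw-rw----    1 root     0      58,  0 Apr 15 17:53 opt
brw-rw----    1 root     0      58,  0 Apr 15 17:53 root'

# After whitespace splitting, field 5 is the "58," major and field 6 the minor.
total=$(printf '%s\n' "$listing" | grep -c '^b')
distinct=$(printf '%s\n' "$listing" | awk '/^b/ {print $6}' | sort -u | wc -l)

echo "$total LVs, $distinct distinct minor(s)"
if [ "$distinct" -ne "$total" ]; then
    echo "duplicate minors detected"
fi
```

On a healthy volume group each logical volume gets its own minor, so the two counts should be equal.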
I did not have this problem when I copied the ks snippet above into a ks file and booted Shrike; it installed fine. Is it possible you ran out of space, or that some other error occurred? Could you put /tmp/syslog and /tmp/anaconda.log onto a floppy (from VC2) and attach them? Was there enough space for all the logical volumes in the volume group that was created? We should have caught that error, but I'm just grabbing for answers at this point since it works for me.
I'm having pretty much the same problem: the installer gives each logical volume the same minor number and everything breaks down. Here's the relevant portion of my ks.cfg:

clearpart --all
part /boot --size=512 --ondisk=sda --asprimary
part / --size=512 --ondisk=sdb --asprimary
part raid.00 --size=1000000 --ondisk=sda
part raid.01 --size=1000000 --ondisk=sdb
part raid.10 --size=1 --grow --ondisk=sda
part raid.11 --size=1 --grow --ondisk=sdb
raid pv.00 --level=0 --device=md0 raid.00 raid.01
raid pv.01 --level=0 --device=md1 raid.10 raid.11
#raid pv.00 --level=1 --device=md0 raid.00 raid.01
#raid pv.01 --level=1 --device=md1 raid.10 raid.11
volgroup nas pv.00
volgroup sys pv.01
logvol /usr --vgname=sys --name=usr --size=4096 --fstype=ext3
logvol /var --vgname=sys --name=var --size=4096 --fstype=ext3
logvol /tmp --vgname=sys --name=tmp --size=8192 --fstype=ext3
logvol /cache --vgname=sys --name=cache --size=4096 --fstype=ext3

Note that sda and sdb are each 1.1 TB 3ware RAID arrays that I'm trying to stripe across. I have to make two software RAID volumes because otherwise things fail as described in bug 90871. Now, if I swap in the commented lines so that the md devices are created as RAID1 (and thus half the size), everything works fine. If I create the system partitions directly instead of within a volume group, everything installs OK (though I've yet to test whether I can create other logical volumes normally after a reboot). I'm happy to try any suggestions.
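A side note on scale (my arithmetic, not from the report): striping sums the members while mirroring keeps only one member's capacity, so the RAID0 layout yields md devices around 2.2 TB where the working RAID1 variant stays near 1.1 TB. That puts the two layouts on opposite sides of the 2 TB block-device limit of 2.4-series kernels, which may or may not be related to what the installer chokes on:

```shell
# Back-of-the-envelope capacity check. Assumption: each md member is
# ~1.1 TB (one full 3ware array, per the description above).
member_gb=1100
raid0_gb=$((member_gb * 2))   # level 0 stripes: member capacities add
raid1_gb=$member_gb           # level 1 mirrors: capacity of one member
echo "RAID0 md device: ~${raid0_gb} GB; RAID1 md device: ~${raid1_gb} GB"
```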
Created attachment 91695 [details] anaconda.log
Created attachment 91696 [details] syslog
Created attachment 91697 [details] lvmout I thought this file might be useful as well
I found that if I comment out just this line:

volgroup nas pv.00

then the system installs fine. It fails to boot, but that's for another bug report.
What are you using as your clearpart line?
Each attempt I've made uses: clearpart --all
Unable to reproduce with our current codebase. I've made some changes, though, to how we remove pre-existing volumes that could be helping things.
Mass-closing lots of old bugs which are in MODIFIED (and thus presumed to be fixed). If any of these are still a problem, please reopen or file a new bug against the release which they're occurring in so they can be properly tracked.