Bug 88958 - kickstart installation using LVM on redhat 9 broken
Summary: kickstart installation using LVM on redhat 9 broken
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Version: 9
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jeremy Katz
QA Contact: Mike McLean
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2003-04-15 18:23 UTC by Peter J. Dohm
Modified: 2007-04-18 16:53 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-04-24 18:37:38 UTC
Embargoed:


Attachments
anaconda.log (6.68 KB, text/plain), 2003-05-15 18:54 UTC, Jason Tibbitts
syslog (24.95 KB, text/plain), 2003-05-15 18:55 UTC, Jason Tibbitts
lvmout (2.23 KB, text/plain), 2003-05-15 18:56 UTC, Jason Tibbitts

Description Peter J. Dohm 2003-04-15 18:23:59 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225

Description of problem:
When installing using kickstart, I've attempted the following:

(snipped out of my ks.cfg)
--------------
part /boot --fstype=ext3 --size=96 --asprimary
part pv.01 --size=1 --asprimary --grow

volgroup vg01 pv.01

logvol / --vgname=vg01 --size=256 --name=root --fstype=ext3
logvol swap --vgname=vg01 --size=1536 --name=swap --fstype=swap
logvol /usr --vgname=vg01 --size=2048 --name=usr --fstype=ext3
logvol /opt --vgname=vg01 --size=4096 --name=opt --fstype=ext3
logvol /var --vgname=vg01 --size=1024 --name=var --fstype=ext3
logvol /tmp --vgname=vg01 --size=512 --name=tmp --fstype=ext3
logvol /home --vgname=vg01 --size=512 --name=home --fstype=ext3
---------------

Precisely this configuration worked perfectly in Red Hat 8.0, so I know I'm not
doing anything that hasn't worked in the past.

So what happens in Red Hat 9?

The installer fails because all of the logical volumes are given the same minor
number in the device files created by anaconda to support the desired volume
group.

For example, the following is taken from the second virtual console after the
installation fails:

# ls -la /dev/vg01
dr-xr-xr-x     2 root     0           1024 Apr 15 17:53 .
drwxr-xr-x     6 root     0           1024 Apr 15 17:53 ..
crw-r-----     1 root     6       109,   0 Apr 15 17:53 group
brw-rw----     1 root     0        58,   0 Apr 15 17:53 home
brw-rw----     1 root     0        58,   0 Apr 15 17:53 opt
brw-rw----     1 root     0        58,   0 Apr 15 17:53 root
brw-rw----     1 root     0        58,   0 Apr 15 17:53 swap
brw-rw----     1 root     0        58,   0 Apr 15 17:53 tmp
brw-rw----     1 root     0        58,   0 Apr 15 17:53 usr
brw-rw----     1 root     0        58,   0 Apr 15 17:53 var

Notice that each logical volume has the proper major number but a duplicate
minor number.

I haven't dug into anaconda yet to see precisely where the error is.
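
For anyone poking at this from the second virtual console, a possible way to
make the kernel reassign device numbers and rebuild the nodes (lvscan, vgchange,
and vgmknodes are standard LVM tools; whether they all work from the install
image is an assumption):

# lvscan                # list the logical volumes and their state
# vgchange -a n vg01    # deactivate the volume group
# vgchange -a y vg01    # reactivate it; the kernel reassigns minor numbers
# vgmknodes vg01        # rebuild the /dev/vg01/* device nodes
# ls -la /dev/vg01      # verify each LV now has a distinct minor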


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. create a logical volume in a kickstart installation
2. cry ;)
    

Actual Results:  virtual console 5 shows: "/dev/vg01/root is mounted; will not
make a filesystem here!".

The reason is that swap was already activated out of this volume group, and
since all the logical volumes collide on the same device node, the installer
believes the device is already in use (which it is).
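
A quick way to confirm the swap collision from the second virtual console
(swapoff only frees the shared node as a diagnostic; it is not a fix):

# cat /proc/swaps           # see which device node is currently active as swap
# swapoff /dev/vg01/swap    # release it; the colliding node is no longer busy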

Expected Results:  it should have just worked ;)

Additional info:

The description contains sufficient info.

Comment 1 Michael Fulbright 2003-04-22 20:58:40 UTC
I did not have this problem when I copied the ks snippet above into a ks file
and booted Shrike.  It installed fine.

Is it possible you ran out of space, or some other error occurred? Could you put
/tmp/syslog and /tmp/anaconda.log onto a floppy (from VC2) and attach them?
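
For example, copying them off from VC2 might look like this (assuming a
DOS-formatted floppy in the first drive; the device name and mount point are
examples):

# mkdir /tmp/floppy
# mount -t vfat /dev/fd0 /tmp/floppy             # mount the floppy
# cp /tmp/syslog /tmp/anaconda.log /tmp/floppy/  # copy both logs
# umount /tmp/floppy                             # flush writes before ejecting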

Was there enough space for all the logical volumes in the volume group that was
created?  We should have caught this error, but I'm just grasping for answers at
this point since it works for me.


Comment 2 Jason Tibbitts 2003-05-15 18:47:36 UTC
I'm having pretty much the same problem: the installer gives each logical volume
the same minor number and everything breaks down.  I'll inline the relevant
portion of my ks.cfg:

clearpart --all

part /boot  --size=512  --ondisk=sda --asprimary
part /      --size=512 --ondisk=sdb --asprimary
 
part raid.00 --size=1000000 --ondisk=sda
part raid.01 --size=1000000 --ondisk=sdb
 
part raid.10 --size=1 --grow --ondisk=sda
part raid.11 --size=1 --grow --ondisk=sdb
 
raid pv.00 --level=0 --device=md0 raid.00 raid.01
raid pv.01 --level=0 --device=md1 raid.10 raid.11
#raid pv.00 --level=1 --device=md0 raid.00 raid.01
#raid pv.01 --level=1 --device=md1 raid.10 raid.11
 
volgroup nas pv.00
volgroup sys pv.01
 
logvol /usr   --vgname=sys --name=usr   --size=4096 --fstype=ext3
logvol /var   --vgname=sys --name=var   --size=4096 --fstype=ext3
logvol /tmp   --vgname=sys --name=tmp   --size=8192 --fstype=ext3
logvol /cache --vgname=sys --name=cache --size=4096 --fstype=ext3

Note that sda and sdb are each 1.1TB 3ware RAID arrays that I'm trying to stripe
across.  I have to make two software RAID volumes because otherwise things fail
as described in bug 90871.

Now, if I switch those commented lines so that the RAID devices are created as
RAID1 (and thus half the size), everything works fine.  If I create the system
partitions directly instead of within a volume group, everything installs OK
(but I've yet to test whether I can create other logical volumes normally after
a reboot).
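
In case it turns out to be a size problem after all, here is roughly how to
check from VC2 whether the volume groups actually have room (pvdisplay and
vgdisplay are standard LVM tools; the exact output on the install image may
differ):

# cat /proc/mdstat      # confirm md0 and md1 assembled at the expected sizes
# pvdisplay /dev/md1    # physical volume size and extent count
# vgdisplay sys         # total, allocated, and free extents in the group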

I'm happy to try any suggestions.

Comment 3 Jason Tibbitts 2003-05-15 18:54:51 UTC
Created attachment 91695 [details]
anaconda.log

Comment 4 Jason Tibbitts 2003-05-15 18:55:22 UTC
Created attachment 91696 [details]
syslog

Comment 5 Jason Tibbitts 2003-05-15 18:56:01 UTC
Created attachment 91697 [details]
lvmout

I thought this file might be useful as well.

Comment 6 Jason Tibbitts 2003-05-15 20:10:20 UTC
I found that if I comment out just the line

volgroup nas  pv.00

then the system installs fine.  It fails to boot, but that's for another bug report.


Comment 7 Jeremy Katz 2003-06-23 20:31:00 UTC
What are you using as your clearpart line?

Comment 8 Peter J. Dohm 2003-07-01 00:39:51 UTC
Each attempt I've done uses:

clearpart --all

Comment 9 Jeremy Katz 2003-09-25 21:21:58 UTC
Unable to reproduce with our current codebase... I've made some changes, though,
to how we remove pre-existing volumes that could be helping things here.
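
If pre-existing LVM metadata is what trips up the install, one destructive
workaround (assuming the disks hold nothing worth keeping; the device names
below are only examples) is to clear it by hand from VC2 before retrying:

# lvremove -f /dev/vg01/root    # remove each old logical volume in turn
# vgchange -a n vg01            # deactivate the old volume group
# vgremove vg01                 # remove the volume group definition
# pvremove /dev/sda2            # clear the physical volume label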

Comment 10 Jeremy Katz 2006-04-24 18:37:38 UTC
Mass-closing lots of old bugs which are in MODIFIED (and thus presumed to be
fixed).  If any of these are still a problem, please reopen or file a new bug
against the release in which they occur so they can be properly tracked.

