Bug 88958
| Field | Value |
|---|---|
| Summary | kickstart installation using LVM on redhat 9 broken |
| Product | [Retired] Red Hat Linux |
| Component | anaconda |
| Status | CLOSED RAWHIDE |
| Severity | medium |
| Priority | medium |
| Version | 9 |
| Hardware | All |
| OS | Linux |
| Reporter | Peter J. Dohm <dohmp> |
| Assignee | Jeremy Katz <katzj> |
| QA Contact | Mike McLean <mikem> |
| CC | j |
| Doc Type | Bug Fix |
| Last Closed | 2006-04-24 18:37:38 UTC |
| Attachments | anaconda.log, syslog, lvmout |
Description (Peter J. Dohm, 2003-04-15 18:23:59 UTC)
I did not have this problem when I copied the ks snippet above into a ks file and booted Shrike; it installed fine. Is it possible you ran out of space, or that some other error occurred? Could you put /tmp/syslog and /tmp/anaconda.log onto a floppy (from VC2) and attach them? Was there enough space for all the logical volumes in the volume group that was created? We should have caught this error, but I'm just grabbing for answers at this point since it works for me.

I'm having pretty much the same problem: the installer gives each logical volume the same minor number and everything breaks down. Here is the relevant portion of my ks.cfg:

```
clearpart --all
part /boot --size=512 --ondisk=sda --asprimary
part / --size=512 --ondisk=sdb --asprimary
part raid.00 --size=1000000 --ondisk=sda
part raid.01 --size=1000000 --ondisk=sdb
part raid.10 --size=1 --grow --ondisk=sda
part raid.11 --size=1 --grow --ondisk=sdb
raid pv.00 --level=0 --device=md0 raid.00 raid.01
raid pv.01 --level=0 --device=md1 raid.10 raid.11
#raid pv.00 --level=1 --device=md0 raid.00 raid.01
#raid pv.01 --level=1 --device=md1 raid.10 raid.11
volgroup nas pv.00
volgroup sys pv.01
logvol /usr --vgname=sys --name=usr --size=4096 --fstype=ext3
logvol /var --vgname=sys --name=var --size=4096 --fstype=ext3
logvol /tmp --vgname=sys --name=tmp --size=8192 --fstype=ext3
logvol /cache --vgname=sys --name=cache --size=4096 --fstype=ext3
```

Note that sda and sdb are each 1.1 TB 3ware RAID arrays that I'm trying to stripe across. I have to make two software RAID volumes because otherwise things fail as described in bug 90871. If I swap in the commented lines so that the RAID devices are created as RAID1 (and thus at half the size), everything works fine. If I create the system partitions directly instead of within a volume group, everything installs OK (though I've yet to test whether I can create other logical volumes normally after a reboot). I'm happy to try any suggestions.

Created attachment 91695 [details]: anaconda.log

Created attachment 91696 [details]: syslog

Created attachment 91697 [details]: lvmout (I thought this file might be useful as well)
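For anyone trying to confirm the duplicate-minor symptom described above, the LVM device nodes can be inspected from a shell on VC2 or from a rescue boot. This is a minimal sketch, assuming the `sys` volume group from the ks.cfg above; on a healthy install each logical volume gets its own minor number:

```sh
# Each logical volume gets a device node under /dev/<vgname>/.
# The major,minor column should show a distinct minor per volume
# (e.g. 58:0, 58:1, 58:2, ...); this bug leaves every LV with the same minor.
ls -l /dev/sys/

# lvdisplay reports the same numbers in its "Block device" field.
lvdisplay /dev/sys/usr /dev/sys/var /dev/sys/tmp /dev/sys/cache
```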
I found that if I comment out just the line `volgroup nas pv.00`, the system installs fine. It then fails to boot, but that's for another bug report.

What are you using as your clearpart line?

Each attempt I've done uses: `clearpart --all`

Unable to reproduce with our current codebase. I've made some changes, though, that could be helping things in how we remove pre-existing volumes.

Mass-closing lots of old bugs which are in MODIFIED (and thus presumed to be fixed). If any of these are still a problem, please reopen or file a new bug against the release in which they're occurring so they can be properly tracked.
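For reference, here is a minimal sketch of the workaround noted in the comments above: the LVM portion of the reporter's ks.cfg with only the `volgroup nas pv.00` line commented out, so that the installer creates just the `sys` volume group. This is reconstructed from the snippet in this report, not a tested configuration:

```
# Workaround sketch: define only the "sys" volume group and leave
# pv.00 (md0) unassigned, so the installer never has to assign
# minors for LVs in a second volume group.
raid pv.00 --level=0 --device=md0 raid.00 raid.01
raid pv.01 --level=0 --device=md1 raid.10 raid.11
#volgroup nas pv.00
volgroup sys pv.01
logvol /usr --vgname=sys --name=usr --size=4096 --fstype=ext3
logvol /var --vgname=sys --name=var --size=4096 --fstype=ext3
logvol /tmp --vgname=sys --name=tmp --size=8192 --fstype=ext3
logvol /cache --vgname=sys --name=cache --size=4096 --fstype=ext3
```

Presumably the `nas` volume group could then be created by hand after installation (e.g. `pvcreate /dev/md0` followed by `vgcreate nas /dev/md0`), though the reporter notes he had not yet verified that logical volumes can be created normally after a reboot.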