Bug 92245 - ks anaconda fails to create logvol mounted under /var
Status: CLOSED CURRENTRELEASE
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Version: 9
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Jeremy Katz
QA Contact: Mike McLean
Reported: 2003-06-03 23:15 EDT by Marc MERLIN
Modified: 2007-04-18 12:54 EDT

Doc Type: Bug Fix
Last Closed: 2004-10-04 23:14:53 EDT

Attachments: None

Description Marc MERLIN 2003-06-03 23:15:42 EDT
My kickstart file has:

# Clear all partitions from all disks
clearpart --drives=hda,hdc --initlabel 

# Raid 1 IDE config
part raid.11	--size 1000	--asprimary	--ondrive=hda
part raid.12	--size 1000	--asprimary	--ondrive=hda
part raid.13	--size 2000	--asprimary	--ondrive=hda
part raid.14	--size 8000			--ondrive=hda
part raid.15	--size 1 --grow			--ondrive=hda

part raid.21	--size 1000	--asprimary	--ondrive=hdc
part raid.22	--size 1000	--asprimary	--ondrive=hdc
part raid.23	--size 2000	--asprimary	--ondrive=hdc
part raid.24	--size 8000			--ondrive=hdc
part raid.25	--size 1 --grow			--ondrive=hdc

# You can add --spares=x
raid /		--fstype ext3 --device md0 --level=RAID1 raid.11 raid.21
raid /safe	--fstype ext3 --device md1 --level=RAID1 raid.12 raid.22
raid swap	--fstype swap --device md2 --level=RAID1 raid.13 raid.23
raid /usr	--fstype ext3 --device md3 --level=RAID1 raid.14 raid.24
raid pv.01	--fstype ext3 --device md4 --level=RAID1 raid.15 raid.25

# LVM configuration so that we can resize /var and /usr/local later
volgroup sysvg pv.01
logvol /usr/local	--vgname=sysvg	--size=1 --grow	--name=usrlocal
logvol /var/freespace	--vgname=sysvg	--size=8000	--name=freespacetouse
logvol /var		--vgname=sysvg	--size=8000	--name=var


With this config, the 3 LVs get created, and I can see them on console F2 while the
installer runs (lvscan shows var, and it's mounted on /var).
However, when the machine reboots, /dev/sysvg/var is nowhere to be found, while
the other two LVs show up fine.
lvscan only shows 2 LVs, and vgdisplay -a shows that sysvg has 8G free, as if
the space allocated for sysvg/var got freed when the system rebooted after the
initial install.

The kicker is that I didn't have the problem when I didn't create /var/freespace
(and the drives are 80G, so there is plenty of space for sysvg/usrlocal).
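For anyone hitting the same symptom, one way to tell whether the LV metadata is still on the PV after the reboot is to rescan from a rescue shell. The sketch below just prints the plan rather than executing it, since the real commands need the sysvg volume group and root privileges (names are the ones from the kickstart above):

```shell
# Side-effect-free sketch: the commands one would run from a rescue
# shell to check whether sysvg/var metadata survived the reboot.
lvm_check_plan() {
    printf '%s\n' \
        'vgscan' \
        'vgchange -ay sysvg' \
        'lvscan' \
        'vgdisplay -v sysvg'
}
lvm_check_plan
```

If lvscan still doesn't list var after an explicit vgscan/vgchange, the LV really was dropped from the on-disk metadata rather than just not activated at boot.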
Comment 1 Marc MERLIN 2003-06-03 23:37:18 EDT
Incidentally, it also works if I create my buffer LV under /, as in:

logvol /usr/local	--vgname=sysvg	--size=1 --grow	--name=usrlocal
logvol /var		--vgname=sysvg	--size=8000	--name=var
logvol /freespacetouse	--vgname=sysvg	--size=8000	--name=freespacetouse

But putting it in /var/freespace causes the problem (I've reproduced it 3
times in a row). No idea why; maybe an unmount problem before the reboot is
causing the /var LV not to get saved properly?
Comment 2 Marc MERLIN 2003-06-11 01:45:10 EDT
More info:
I'm now getting the problem with /usr/local (which is LVM) mounted under /usr,
even though /usr itself isn't LVM.
The usrlocal LV is there during the install, and disappears after the reboot.
Comment 3 Marc MERLIN 2003-06-12 20:10:11 EDT
Even more info.
With this config:
# One disk config (IDE)
part /		--fstype ext3		--size 1000 --asprimary	--ondrive=hda
part /safe	--fstype ext3		--size 1000 --asprimary --ondrive=hda
part swap	--fstype swap		--size 2000		--ondrive=hda 
part /usr	--fstype ext3		--size 8000		--ondrive=hda
part pv.01	--fstype ext3		--size 17000 --grow	--ondrive=hda

# LVM configuration so that we can resize /var and /usr/local later
volgroup sysvg pv.01
logvol /usr/local	--vgname=sysvg	--size=48000 --name=usrlocal
logvol /var		--vgname=sysvg	--size=8000	--name=var
# This is a bogus partition to create a space buffer
logvol /freespacetouse	--vgname=sysvg	--size=8000	--name=freespacetouse

This works fine.
If I bump /usr/local to 62000 (or -1, my disk is bigger than that), the LV
gets created by the installer and written to, but is nowhere to be found after
the reboot.
It sounds like there is some size limit which, if reached, causes the LV not to
persist across the first reboot.
Comment 4 Marc MERLIN 2003-06-13 14:27:06 EDT
I have settled on:
logvol /usr/local	--vgname=sysvg	--size -1 --grow	--maxsize=54000 --name=usrlocal

This works, but fails if I set maxsize to a somewhat bigger number (I didn't
care to find out exactly which value).

For now, I'm installing like this and growing the LV by hand after the install.
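The manual grow mentioned above would look roughly like the plan below. It is shown as a function that prints the commands rather than running them, since they need the real sysvg volume group and root; the +8G increment is illustrative, and ext3 of that era had to be resized offline (unmount, fsck, resize2fs, remount):

```shell
# Sketch of growing sysvg/usrlocal by hand after the install.
# Printed as a plan, not executed; the size increment is illustrative.
grow_usrlocal_plan() {
    printf '%s\n' \
        'lvextend -L +8G /dev/sysvg/usrlocal' \
        'umount /usr/local' \
        'e2fsck -f /dev/sysvg/usrlocal' \
        'resize2fs /dev/sysvg/usrlocal' \
        'mount /usr/local'
}
grow_usrlocal_plan
```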
Comment 5 Marc MERLIN 2003-06-17 20:49:41 EDT
On another system, where I was doing RAID 1 under LVM, I had to lower maxsize
to 50,000; it wouldn't work with 54,000 (it would install and then lose an LV
after reboot).
This is quite weird...
Comment 6 Marc MERLIN 2003-07-30 02:34:03 EDT
Hi,

any updates on this? This bug is quite crippling if you want to use LVM...
Comment 7 Jeremy Katz 2004-10-04 23:14:53 EDT
This looks like it works to me with current releases.
