Red Hat Bugzilla – Bug 132870
Can't reboot with lvm2 in fstab.
Last modified: 2007-11-30 17:10:49 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.2)
Description of problem:
Nothing was wrong during the installation (upgrade, in fact) process:
the lvm was mounted, but during the first boot (and others too,
actually) the initscripts failed to activate the lvm2 group and mount
failed a bit later.
I had to comment the fstab entry, reboot, activate the group with
"vgchange -a y 1er", and then mount it manually. The logical volumes
are fine after that.
Version-Release number of selected component: initscripts(0:7.80-1).i386
How reproducible: Always
Steps to Reproduce:
1. Configure lvm2 things (create a volume group, add physical volumes
and create a logical volume)
2. Add it to your fstab so that it's default-mounted.
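The fstab entry from step 2 might look like the following. The volume group name "1er" comes from the report above, but the logical volume name, mount point, and filesystem type are placeholders, not taken from the report:

```
/dev/1er/lvol0  /mnt/data  ext3  defaults  1 2
```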
Is it a duplicate of
Does it work with selinux=0?
I'm getting the same thing; selinux is disabled.
Note that this is a pre-existing LVM2 volume created before the
upgrade to FC3t2.
vgscan appears to always return failure to the shell, even when it succeeds.
Commenting out the
"if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null
2>&1 ; then" and the corresponding fi results in the system booting
fine, with the LVM volumes properly enabled.
The vgscan man page says "In LVM2, vgscans take place automatically;
but you might still need to run one explicitly after changing
hardware.", so this seems a valid thing to do.
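To illustrate why removing that guard helps: rc.sysinit only activates volume groups inside the if-body, so any non-zero exit from vgscan skips activation entirely, even when the scan itself worked. A minimal sketch, using a stub in place of /sbin/lvm.static vgscan (the stub and its echo messages are illustrative, not the actual rc.sysinit code):

```shell
# vgscan_stub stands in for "/sbin/lvm.static vgscan --mknodes
# --ignorelockingfailure", which is reported to exit non-zero here.
vgscan_stub() { return 1; }

if vgscan_stub > /dev/null 2>&1 ; then
    echo "activating volume groups"    # this is where vgchange would run
else
    echo "activation skipped"          # prints this: non-zero exit skips the body
fi
```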
I did a fresh install of FC3t2 and created a new LVM2. The install
was fine, but just as described above I could not boot because the LVM
partition was not mounted. I initially had SELinux active but
deactivated it when I attempted to troubleshoot the problem.
Uncommenting the lines in rc.sysinit solved the problem for me.
Bill : same issue with selinux=0.
Created attachment 104098 [details]
This is a log of the output of the vgscan command from rc.sysinit, modified for
maximum verbosity and debug output.
Assigning to the lvm tools.
Slightly different configuration here: LVM1 from older times. FC3T2
fresh install into non-LVM partition. Booting into it fails to
activate the volume group, because with the --mknodes option, lvm
returns 'false', which in turn causes the if-statement body not to execute:
# lvm vgscan --mknodes --ignorelockingfailure && echo "true"
Reading all physical volumes. This may take a while...
Found volume group "VolGrp01" using metadata type lvm1
When I drop the --mknodes option, it returns 'true' and booting succeeds.
Booting FC2 succeeds, btw (it does the mknodes thing differently).
vgmknodes is returning an error if it can't delete an LV symlink
because it doesn't exist - will fix
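A sketch of the fix described above: treat a missing symlink as success instead of propagating an error. remove_lv_symlink is a hypothetical stand-in for the cleanup step inside vgmknodes, not the actual LVM2 source:

```shell
# Tolerate a symlink that is already gone: rm -f exits 0 even when
# the path does not exist, so a missing /dev symlink is not an error.
remove_lv_symlink() {
    rm -f "$1"
}

remove_lv_symlink /tmp/no-such-lv-symlink && echo "ok"   # prints "ok"
```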
will be in 2.00.25
*** Bug 133378 has been marked as a duplicate of this bug. ***
*** Bug 134102 has been marked as a duplicate of this bug. ***
*** Bug 134770 has been marked as a duplicate of this bug. ***
*** Bug 134768 has been marked as a duplicate of this bug. ***
Are users still seeing this with current rawhide packages?
Works for me.
<metoo>The volume group has been correctly enabled at boot time.</>
*** Bug 135078 has been marked as a duplicate of this bug. ***
Works for me too.