Red Hat Bugzilla – Bug 65260
rc.sysinit fails to call vgscan in certain configurations
Last modified: 2014-03-16 22:27:30 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (WinNT; U)
Description of problem:
In a configuration with a single physical volume on a software RAID disk (/dev/md?) (and possibly in other configurations as well), rc.sysinit fails
to call vgscan, so the LVM system does not come up and LVM-based filesystems are not mounted at startup.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
NOTE: The procedure below matches what I did. I'm not sure whether it's a requirement to have only one physical volume, or to have that physical
volume on a RAID disk, but that's the configuration I used.
1. Install a standard Red Hat 7.3 system. Don't create any RAID disks during the install. If the install didn't result in the lvm package being
installed, add it manually with rpm.
2. After the install is complete, create a RAID partition (/dev/md0). Don't put a filesystem on it.
3. Use pvcreate and vgcreate to create a new volume group, with /dev/md0 as the physical volume. (NOTE: This should be the only volume group
and the only physical volume in the system.)
4. Create a logical volume in that volume group, create a filesystem on that logical volume, and add an entry for that filesystem to /etc/fstab.
5. Manually mount the filesystem to verify that it works.
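The steps above might look something like the following. This is an illustrative sketch only: the volume group and logical volume names (vg00, data), the size, and the mount point are examples, not taken from the original report, and /etc/raidtab describing /dev/md0 is assumed to already exist.

```shell
# Steps 2-5 as commands (run as root; names and sizes are examples)
mkraid /dev/md0                      # create the RAID device; no filesystem on it
pvcreate /dev/md0                    # make it the (only) physical volume
vgcreate vg00 /dev/md0               # the only volume group in the system
lvcreate -L 1G -n data vg00          # a logical volume in that group
mke2fs /dev/vg00/data                # put a filesystem on the logical volume
echo '/dev/vg00/data /data ext2 defaults 1 2' >> /etc/fstab
mkdir -p /data && mount /data        # verify the mount works by hand
```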
Actual Results: At startup, the system failed to mount the filesystem created in step 4. Research indicated that the problem was that vgscan and
vgchange had never been run.
Expected Results: rc.sysinit should have run vgscan and "vgchange -a y" and then mounted the filesystem in question.
Manually running vgscan and "vgchange -a y" followed by "mount (mountpoint)" will get the filesystem mounted. Removing the check for the existence
of "/proc/lvm" from the rc.sysinit scripts will also eliminate the problem and allow the filesystem to be mounted at startup.
NOTE: rc.sysinit attempts to run vgscan twice, once before RAID initialization and once after. Both attempts check for the
existence of /proc/lvm. I'm not sure whether my problem is unique to RAID -- perhaps if there is a native LVM partition, the LVM partition type causes the
lvm module to be loaded before vgscan is executed; I haven't tested that case. But in my case there is only one physical volume, and it's a
RAID disk (/dev/md0), so there are no partitions with the LVM type, and the lvm module (lvm-mod) doesn't get loaded until vgscan is
run. Since lvm-mod creates /proc/lvm, the startup script's requirement that /proc/lvm exist as a prerequisite for running "vgscan" creates a
circular dependency.
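The gating logic described in the note can be sketched as follows. This is a simplified stand-in, not the actual rc.sysinit code; the PROC_LVM and VGSCAN variables are introduced here (they are not in the real script) so the logic can be exercised without a real /proc/lvm:

```shell
#!/bin/sh
# Simplified sketch of the rc.sysinit gate: vgscan only runs if /proc/lvm
# already exists, but /proc/lvm only appears after lvm-mod is loaded --
# which is exactly what running vgscan would trigger. Hence the deadlock.
PROC_LVM=${PROC_LVM:-/proc/lvm}
VGSCAN=${VGSCAN:-/sbin/vgscan}

start_lvm() {
    if [ -e "$PROC_LVM" ] && [ -x "$VGSCAN" ]; then
        echo "running vgscan and vgchange -a y"
    else
        echo "skipping LVM setup: $PROC_LVM not present"
    fi
}

start_lvm
```

On a system where lvm-mod has never been loaded, the else branch is always taken, which is the failure the report describes.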
I am having a similar problem with a single ide disk, redhat 7.3 installation
(kernel 2.4.18-4 upgrade applied) using one PV. rc.sysinit does not call
vgscan. Further, running "vgchange -a y" gives a syntax error, while
running "vgchange -ay" produces the desired result. I am not sure if this is a bug
or a documentation error.
The basic problem is described in bug 57563: the default ramdisk images
are not built with lvm-mod loaded. This means that /proc/lvm does not
exist when the rc.sysinit script runs, so the LVM conditional tests fail.
The choices are: create a new ramdisk image using mkinitrd with the
--preload=lvm-mod option; modify rc.sysinit to run
"vgchange -ay" without a conditional test, which will load the lvm module; or
modify the conditional code in rc.sysinit to remove the references to /proc/lvm.
The last option seems to make the most sense, since /proc/lvm shouldn't exist until after
lvm-mod is loaded. It is pointless to test for a condition that can't
be met until the dependent action occurs.
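The first option above might look like this. The kernel version (2.4.18-4, mentioned in an earlier comment) and the initrd filename are examples; substitute the running kernel on your system:

```shell
# Rebuild the initrd with the LVM module preloaded, so /proc/lvm exists
# by the time rc.sysinit runs its conditional tests.
mkinitrd --preload=lvm-mod /boot/initrd-2.4.18-4.lvm.img 2.4.18-4
# Then point the bootloader (lilo.conf or grub.conf) at the new image
# and rerun lilo if using LILO.
```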
Refer to bug 57563 for further information.
Closing bugs on older, no longer supported, releases. Apologies for any lack of response.
If this persists on a current release, such as Fedora Core 4, please open a new bug.