Red Hat Bugzilla – Bug 183120
nash init script use of vgchange too restrictive
Last modified: 2008-04-10 17:07:52 EDT
Description of problem: The nash init script hard-codes the volume group name to
be activated, thus making it impossible to boot from a volume group with a
different name. Leaving the name out of the command will activate all volume
groups, an easy fix to implement.
Version-Release number of selected component (if applicable): n/a
How reproducible: Always
Steps to Reproduce:
1. Change the name of your boot logical volume.
2. Change grub.conf to use the new name.
3. Attempt to boot.
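For step 2, the relevant grub.conf stanza might look like this after the rename
(names and kernel paths here are illustrative, not taken from the report):

```
title Fedora Core
        root (hd0,0)
        kernel /vmlinuz ro root=/dev/NewVolGrp/LogVol00
        initrd /initrd.img
```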
Actual results: New volume group is found, but not activated. Kernel panic results.
Expected results: Normal boot
Additional info: Simple to fix by removing the volume group name from the
vgchange line in the nash init script.
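The fix suggested above is a one-line change in the initrd's init script. A
sketch (the hard-coded name shown is the Fedora default; the exact line varies
by mkinitrd version):

```shell
# Restrictive form emitted by mkinitrd: only the named VG is activated
# lvm vgchange -ay --ignorelockingfailure VolGroup00

# Suggested form: omit the name so every VG found by vgscan is activated
lvm vgchange -ay --ignorelockingfailure
```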
Comment: Why is "lvm vgchange" used instead of "vgchange"?
This won't actually work -- leaving it unrestricted means we'll try to bring up
clustered VGs and the like, which will fail. What's the problem you're actually
trying to solve?
As I tried to say, I want to be able to boot to a specific volume group (from
GRUB, but that's probably irrelevant), and to be able to change the volume group
names "on the fly" without breaking (or needing to recreate) initrd.
The specific circumstance was a HD failure rendering a partition unreadable, so
the drive wouldn't mount properly. Since other partitions on that drive
contained a FC4 root, we installed a new drive and restored the FC4 from the
backup. But the backups weren't current, so I "fixed" the bad drive. But now I
had two drives, both containing "VolGrp00," and lvm would only "see" one of
these. (By the way, vgscan didn't even report the "duplicate" VG name as an
error.) Renaming the newer (booting) VG made the older VG visible (and
bootable), but left the newer one unbootable. Changing the vgchange line in init
fixed that problem, and (since I thought the solution fairly easy to implement)
I reported the problem and solution as a bug. (Perhaps a "suggestion for
improvement" would have been a better label.)
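A hypothetical reconstruction of that recovery sequence (the VG names follow
the comment; the note about renaming by UUID is standard LVM2 practice, not
something the reporter states):

```shell
# Two drives each carry a VG named "VolGrp00"; LVM only "sees" one copy.
lvm vgscan

# Rename the visible (newer, booting) copy.  If both copies were visible,
# vgrename also accepts the VG UUID from "vgs -o vg_name,vg_uuid" to pick one.
lvm vgrename VolGrp00 VolGrp01
```

After the rename, an initrd that still runs "vgchange -ay VolGrp00" no longer
matches the renamed group, which is the boot failure this bug describes.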
Why would a failure to activate other VGs be of any significance? If they cannot
be activated during boot, they are not likely to be (and, in fact, cannot be)
needed during the boot process.
If you think that activating only the VG needed for the boot is necessary, then
can you change mkrootdev (for example) to do the activation for the specific
device used by /boot instead of putting it in init? (I make that suggestion
because, with my current setup, there are at least three "/" directories on my
hard drives, and the boot process seems to be able to figure out which of those
"/" directories needs to be used for a specific boot. I presume the information
comes from the "kernel" line in grub.conf, but I don't really know. But it
seems clear that the boot process must be "aware" of the actual "/" location to
use for a specific boot, and that location would [I'm guessing here] include the
This report targets the FC3 or FC4 products, which have now been EOL'd.
Could you please check that it still applies to a current Fedora release, and
either update the target product or close it ?
This still applies to at least RHEL4. The mkinitrd script adds this to the
initrd's init script.
I consistently add drives to systems to work on them via a usb-ide/sata drive
bay. I recently tried to do this on my own system, which, when I installed it,
used the default LVM naming scheme "VolGroup00". This causes a conflict with
multiple VGs of the same name on the system (which is reported in syslog, not on
stderr). To remedy this problem, I needed to rename my VG. I may submit another
bug that you can't do this from the linux rescue image, but that's off topic.
After renaming the VG, you must edit the initrd, or it will kernel panic when it
can't mount the root partition. This is because mkinitrd hard-codes the VG name
at lines 736-739 of /sbin/mkinitrd from mkinitrd-184.108.40.206-1:
echo "echo Scanning logical volumes" >> $RCFILE
echo "lvm vgscan --ignorelockingfailure" >> $RCFILE
echo "echo Activating logical volumes" >> $RCFILE
echo "lvm vgchange -ay --ignorelockingfailure $root_vg" >> $RCFILE
The init script runs a vgscan right before this; its output could be captured
and awk'd to make all VGs available.
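A minimal sketch of that idea (untested in a real initrd; it assumes the
"Found volume group" wording that LVM2's vgscan prints):

```shell
# Parse vgscan's report and activate each VG it found.  With -F'"' the
# quoted VG name is the second field of each "Found volume group" line.
lvm vgscan --ignorelockingfailure |
awk -F'"' '/Found volume group/ { print $2 }' |
while read vg; do
    lvm vgchange -ay --ignorelockingfailure "$vg"
done
```

Note that nash is a limited interpreter rather than a full shell, so the
pipeline form above may need rework there, but the parsing step is the same.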
This is causing a new install of F7 to fail to boot for me.
I have an existing LVM2 VG VolGroup01 which contains swap and data file systems.
During the F7 install, I created a new LVM2 VG VolGroup00 which contains the
root filesystem.
On boot, both VGs are found but only LVs on VolGroup01 are activated - and so
the root filesystem cannot then be found or mounted.
If I boot with an alternate root filesystem, everything works fine and I am able
to mount the new root filesystem.
I am currently investigating why the VG on which the root filesystem resides is
not activated in the initrd. I will add more detail then.
Confirmed in F7.
Tested with a kernel upgrade: mkinitrd builds an initrd that is missing the
volume group for the root filesystem from the vgchange command line in the nash
init file.
I will take a look at what is happening, but could someone bump the product on
this bug to F7?
Fedora apologizes that these issues have not been resolved yet. We're
sorry it's taken so long for your bug to be properly triaged and acted
on. We appreciate the time you took to report this issue and want to
make sure no important bugs slip through the cracks.
If you're currently running a version of Fedora Core between 1 and 6,
please note that Fedora no longer maintains these releases. We strongly
encourage you to upgrade to a current Fedora release. In order to
refocus our efforts as a project we are flagging all of the open bugs
for releases which are no longer maintained and closing them.
If this bug is still open against Fedora Core 1 through 6 thirty days from now,
it will be closed 'WONTFIX'. If you can reproduce this bug in the latest Fedora
version, please change the bug to the respective version. If you are unable to
do this, please add a comment to this bug requesting that this be done.
Thanks for your help, and we apologize again that we haven't handled
these issues to this point.
We will be following the process here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.
And if you'd like to join the bug triage team to help make things
better, check out http://fedoraproject.org/wiki/BugZappers
I worked around the issue I was seeing in F7 with the root VG not being added to
initrd for activation by mkinitrd by changing /etc/fstab to mount / based on LVM
device path (/dev/VolGroup00/Root00) instead of label (LABEL=/1). After this
change, mkinitrd "found" the root VG and added it to initrd for activation.
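The workaround amounts to one changed line in /etc/fstab (the mount options
shown are illustrative; only the device fields come from the comment):

```
# Before: root mounted by label; mkinitrd missed the root VG
LABEL=/1                /    ext3    defaults    1 1
# After: root mounted by LVM device path; mkinitrd adds VolGroup00
/dev/VolGroup00/Root00  /    ext3    defaults    1 1
```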
I will file a separate bug if I ever track down the problem.
Thanks for your update.