Red Hat Bugzilla – Bug 191879
Boot fails if any logical volume is exported
Last modified: 2011-06-06 22:13:29 EDT
Description of problem:
Several essential parts of the system, e.g. /usr, live on logical
volumes (e.g. /dev/mapper/Volume00-usr) on a local disk.
We also have some VGs on SAN and need to leave those exported from time to time.
When we try to boot the system with some VGs exported, it fails to
activate any logical volumes, and therefore won't start. However,
manually using lvm.static to activate the Volume00 VG allows the system to come up.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Install RHEL using LVM for /usr, /var, etc.
2. Create further PVs, LVs and VGs on another physical disk (e.g. OtherVg)
3. vgexport OtherVg
Actual results:
The system fails to boot because it can't find /dev/mapper/Volume00-usr etc.
Expected results:
The system should start without activating the exported VG.
The /etc/rc.sysinit script contains a number of stanzas like this:
if [ -x /sbin/lvm.static ]; then
    if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1 ; then
        action $"Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure
    fi
fi
These avoid attempting to activate any VGs if vgscan fails. However, if any
VGs are exported, vgscan exits with status 5, even if at least one VG is not
exported. This is interpreted as failure.
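The failure mode can be demonstrated without any LVM setup. Below is a small sketch using a hypothetical stub (vgscan_stub, not a real command) that behaves like vgscan does with an exported VG: it does useful work but still exits with status 5, the most severe error it saw, which the rc.sysinit guard then treats as total failure.

```shell
# Hypothetical stub standing in for vgscan when one VG is exported:
# it still finds the local VG, but reports the export as an error.
vgscan_stub() {
    echo "Found volume group Volume00"
    echo 'Volume group "OtherVg" is exported'
    return 5
}

vgscan_stub > /dev/null 2>&1
rc=$?

# rc.sysinit's guard skips vgchange on ANY non-zero status, so the
# local Volume00 VG never gets activated; proceeding regardless of
# $rc is what lets the system boot.
if [ "$rc" -ne 0 ]; then
    echo "vgscan exited $rc; running vgchange anyway"
fi
```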
Editing /etc/rc.d/rc.sysinit to make the vgchange unconditional has enabled our
system to boot when non-essential VGs are exported at boot time.
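For reference, a minimal sketch of that edit is below: the vgscan guard is dropped and vgchange runs unconditionally. This is illustrative, not the shipped initscript; 'action' is the RHEL initscripts helper from /etc/init.d/functions, and a trivial fallback is defined here only so the fragment is self-contained.

```shell
# Fallback definition of 'action' for standalone use; on a real system
# the initscripts version from /etc/init.d/functions is already loaded.
if ! type action >/dev/null 2>&1; then
    action() { echo -n "$1 "; shift; "$@" && echo ok; }
fi

if [ -x /sbin/lvm.static ]; then
    # No vgscan exit-status check: activate whatever VGs are importable,
    # even if some other VG is exported (which would make vgscan exit 5).
    action $"Setting up Logical Volume Management:" \
        /sbin/lvm.static vgchange -a y --ignorelockingfailure
fi
```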
Are there other cases where it would return non-zero when it actually did not fail?
lvm2 reports the most severe error it found for anything it was asked to do.
For vgscan during startup, it's probably best to ignore errors and proceed to
the vgchange regardless: A failure in vgscan does not necessarily mean that
vgchange won't succeed.
I suppose the reason that test is there is that if you don't catch any
errors, you get a 'Setting up logical volume management: No volume groups
found' message on every boot.
Is anything going to happen with this? This problem prevents systems booting,
and as you point out, removing the error test would still produce a useful
message at boot-time if something was really wrong.
No, what I'm saying is that if you remove the test, you get an error/warning
message on every boot even when it *is* working fine, which isn't really what you want.
When I boot with a patched rc.sysinit (which I'll attach), I get exactly what I want:
Setting up logical volume management: Ok
I think I'd only get the message 'No volume groups found' if I actually had no
volume groups, or if they were all exported.
Created attachment 134447 [details]
Patched rc.sysinit that avoids vgscan
This is simply the stock rc.sysinit with references to vgscan removed.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
Added in CVS, will be in 7.93.26.EL-1 or later.
The patch leaves an ugly FAILED in the boot output:
Setting up Logical Volume Management: Volume group "vg2" is exported
1 logical volume(s) in volume group "vg1" now active
Still, this is probably better than ignoring real LVM setup failures when no VGs
are exported. The exit code of vgchange is undocumented, so initscripts can't
distinguish between an exported volume and other errors.
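Since the exit code can't be trusted, one workaround (a hypothetical helper, not part of the initscripts) is to verify the outcome directly: after running vgchange, check whether the device node of an essential LV actually appeared.

```shell
# Hypothetical helper: returns 0 (true) if the device node for a given
# logical volume exists, i.e. the LV was actually activated.
lv_active() {
    [ -e "$1" ]
}

# Check the essential local LV rather than vgchange's exit status,
# so an "is exported" warning about another VG can't mask a real failure.
if lv_active /dev/mapper/Volume00-usr; then
    echo "essential LVs active"
else
    echo "essential LVs missing; real activation failure" >&2
fi
```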
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.
I seem to have hit a similar issue during the _installation_ of version 6. I have a system with a few VGs (LVM2). To preserve the contents of one of the VGs I followed the LVM FAQ (http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html):
1. umount the filesystems;
2. vgchange -an;
3. vgexport;
4. updated /etc/fstab to comment out the VG.
After reboot the VG reports non-active status and the FS is not mounted.
When I start the network install and it comes to checking the available drives, anaconda throws an exception ("can't activate exported VG") and the installer terminates.
I understand that I can physically unplug the drives, but that's not a good enough solution.