Bug 191879 - Boot fails if any logical volume is exported
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: initscripts
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Miloslav Trmač
QA Contact: Brock Organ
Depends On:
Reported: 2006-05-16 05:56 EDT by Bob Gautier
Modified: 2011-06-06 22:13 EDT (History)
CC: 6 users

See Also:
Fixed In Version: RHBA-2007-0303
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2007-05-01 13:29:05 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
Patched rc.sysinit that avoids vgscan (26.94 KB, application/octet-stream)
2006-08-18 11:08 EDT, Bob Gautier

Description Bob Gautier 2006-05-16 05:56:12 EDT
Description of problem:

Several essential parts of the system, e.g. /usr, are on logical volumes such as /dev/mapper/Volume00-usr. These are on a local disk.

We also have some VGs on SAN and need to leave those exported from time to time.

When we try to boot the system with some VGs exported, it fails to activate any logical volumes, and therefore the system won't start. However, using lvm.static to activate the Volume00 VG manually allows the system to come up.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install RHEL using LVM for /usr, /var, etc.
2. Create further PVs, VGs and LVs on another physical disk (e.g. OtherVg).
3. vgexport OtherVg
4. Reboot
Actual results:

The system fails to boot because it can't find /dev/mapper/Volume00-usr etc.

Expected results:

The system should start without activating the exported VG

Additional info:

The /etc/rc.sysinit script contains a number of stanzas like this:

    if [ -x /sbin/lvm.static ]; then
        if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1 ; then
            action $"Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure

These stanzas avoid attempting to activate any VGs if vgscan fails. However, if any VG is exported, vgscan exits with status 5 even when at least one VG is not exported, and that exit status is interpreted as failure.

Editing /etc/rc.d/rc.sysinit to make the vgchange unconditional has enabled our 
system to boot when non-essential VGs are exported at boot time.
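The workaround amounts to dropping the exit-status check on vgscan so that vgchange always runs. Below is a minimal sketch of the edited stanza; the `action` helper normally comes from /etc/init.d/functions, so a simplified stand-in is defined here to keep the fragment self-contained:

```shell
#!/bin/bash
# Simplified stand-in for the action() helper from /etc/init.d/functions.
action() { local msg="$1"; shift; echo -n "$msg "; if "$@"; then echo "[  OK  ]"; else echo "[FAILED]"; fi; }

if [ -x /sbin/lvm.static ]; then
    # Run vgscan for its side effects (creating device nodes) but ignore
    # its exit status: an exported VG makes it return 5 even when other
    # VGs scanned fine.
    /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1
    action $"Setting up Logical Volume Management:" \
        /sbin/lvm.static vgchange -a y --ignorelockingfailure
fi
```

On a host without /sbin/lvm.static the whole stanza is a harmless no-op, which is why the outer `-x` guard stays in place.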
Comment 1 Bill Nottingham 2006-05-16 16:30:12 EDT
Are there other cases it would return non-zero when it actually did not fail?
Comment 2 Alasdair Kergon 2006-05-16 17:08:33 EDT
lvm2 reports the most severe error it found for anything it was asked to do.
For vgscan during startup, it's probably best to ignore errors and proceed to
the vgchange regardless:  A failure in vgscan does not necessarily mean that
vgchange won't succeed.
Comment 3 Bill Nottingham 2006-05-16 18:38:40 EDT
I suppose the reason that's there is that if you don't catch any errors, you get
a nice:

'Setting up logical volume management: No volume groups found'

message on every boot.
Comment 4 Bob Gautier 2006-08-18 05:44:37 EDT
Is anything going to happen with this?  This problem prevents systems from booting, and as you point out, removing the error test would still produce a useful message at boot time if something were really wrong.
Comment 5 Bill Nottingham 2006-08-18 10:57:41 EDT
No, what I'm saying is that if you remove the test, you get an error/warning message on every boot when it *is* working fine, which isn't really what you want.
Comment 6 Bob Gautier 2006-08-18 11:04:38 EDT
When I boot with a patched rc.sysinit (which I'll attach), I get exactly what I expect:

Setting up logical volume management: Ok

I think I'd only get the message 'No volume groups found' if I actually had no 
volume groups, or if they were all exported.
Comment 7 Bob Gautier 2006-08-18 11:08:14 EDT
Created attachment 134447 [details]
Patched rc.sysinit that avoids vgscan

This is simply the stock rc.sysinit with references to vgscan removed.
Comment 8 RHEL Product and Program Management 2006-08-18 15:42:38 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update release.
Comment 10 Bill Nottingham 2006-11-20 16:35:54 EST
Added in CVS, will be in 7.93.26.EL-1 or later.
Comment 11 Miloslav Trmač 2007-01-01 19:22:19 EST
The patch leaves an ugly FAILED in the boot output:

Setting up Logical Volume Management:   Volume group "vg2" is exported
  1 logical volume(s) in volume group "vg1" now active

Still, this is probably better than ignoring real LVM setup failures when no VGs
are exported.  The exit code of vgchange is undocumented, so initscripts can't
distinguish between an exported volume and other errors.
Comment 15 Red Hat Bugzilla 2007-05-01 13:29:05 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

Comment 16 andrew 2011-06-06 22:12:14 EDT
I seemed to have a similar issue during the _installation_ of version 6. i have a system with a few VG (LVM2). to preserve the content of one of the VG i followed the  LVM FAQ (http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html):
1. umount ;
2. vgchange -ay 
3. vgexport
4. updated /etc/fstab to comment out the VG.
Affter reboot the VG reports to be in non-active status and FS is not mounted.

after i start the network install and it comes to checking the available drives, anakonda throws an exception - "can't activate exported VG". And the installer terminates.
I understand that i can physically unplug the drives, but that's not a good enough solution.
Please advise.
