Bug 913644 - vgimport: Unable to import volume groups that have failed PVs
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Duplicates: 908381
Depends On:
Blocks: 960054 969166
Reported: 2013-02-21 12:44 EST by Jonathan Earl Brassow
Modified: 2013-11-21 18:20 EST
CC List: 14 users

See Also:
Fixed In Version: lvm2-2.02.100-1.el6
Doc Type: Bug Fix
Doc Text:
Previously, if a device had failed after a vgexport was issued, it would be impossible to import the volume group. Additionally, a failure to import also meant it was impossible to repair the volume group. It is now possible to use the '--force' option with vgimport to import volume groups even if there are devices missing.
Story Points: ---
Clone Of:
Clones: 969166
Environment:
Last Closed: 2013-11-21 18:20:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jonathan Earl Brassow 2013-02-21 12:44:19 EST
Catch-22:

It is not possible to import a volume group that has a failed PV.  The user is told to run 'vgreduce --removemissing' to make the VG consistent before the import can proceed.  However, 'vgreduce --removemissing' is not allowed because the VG has not been imported.  :)

The solution is to allow the user to 'force' the import so they can proceed on to the 'vgreduce --removemissing'.
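The intended recovery sequence can be sketched as follows. The VG name is illustrative, and the commands require root privileges on a host with real block devices (so this is a sketch of the workflow, not a portable script):

```shell
# With a PV missing, a plain 'vgimport' refuses to proceed, so force it:
vgimport --force vg                    # import the VG despite the missing PV
vgreduce --removemissing --force vg    # then drop the failed PV from the VG
vgchange -ay vg                        # reactivate the remaining logical volumes
```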
Comment 1 Jonathan Earl Brassow 2013-02-21 12:51:22 EST
Unit test showing the problem:

[root@bp-01 lvm2]# vgcreate vg /dev/sd[abcdefg]1
  Volume group "vg" successfully created
[root@bp-01 lvm2]# lvcreate --type raid1 -L 500M -n lv vg
  Logical volume "lv" created
[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices                      
  lv              100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sda1(1)                 
  [lv_rimage_1]          /dev/sdb1(1)                 
  [lv_rmeta_0]           /dev/sda1(0)                 
  [lv_rmeta_1]           /dev/sdb1(0)                 
[root@bp-01 lvm2]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active
[root@bp-01 lvm2]# vgexport vg
  Volume group "vg" successfully exported
[root@bp-01 lvm2]# off.sh sdb
Turning off sdb
[root@bp-01 lvm2]# vgimport vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Cannot change VG vg while PVs are missing.
  Consider vgreduce --removemissing.
[root@bp-01 lvm2]# vgreduce --removemissing --force vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Volume group "vg" is exported
  Failed to suspend vg/lv before committing changes
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdf1' has been left open.


... and the solution when using --force

[root@bp-01 lvm2]# vgimport --force vg
  '--force' supplied.  Volume groups with missing PVs will be imported.
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Volume group "vg" successfully imported
[root@bp-01 lvm2]# vgreduce --removemissing --force vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Wrote out consistent volume group vg
Comment 2 Jonathan Earl Brassow 2013-02-21 12:52:17 EST
Fix checked in upstream:

commit 3ab46449f4dbcc45fe878149838a8439f5ac8b34
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Wed Feb 20 16:28:26 2013 -0600

    vgimport:  Allow '--force' to import VGs with missing PVs.
    
    When there are missing PVs in a volume group, most operations that alter
    the LVM metadata are disallowed.  It turns out that 'vgimport' is one of
    those disallowed operations.  This is bad because it creates a circular
    dependency.  'vgimport' will complain that the VG is inconsistent and that
    'vgreduce --removemissing' must be run.  However, 'vgreduce' cannot be run
    because it has not been imported.  Therefore, 'vgimport' must be one of
    the operations allowed to change the metadata when PVs are missing.  The
    '--force' option is the way to make 'vgimport' happen in spite of the
    missing PVs.
Comment 6 Jonathan Earl Brassow 2013-05-13 15:26:42 EDT
*** Bug 908381 has been marked as a duplicate of this bug. ***
Comment 9 Nenad Peric 2013-10-22 08:21:44 EDT
[root@virt-008 ~]# vgimport raid
  Couldn't find device with uuid iNNmlx-yi1u-JGRg-J0if-V9C2-md0f-1IOTsp.
  Cannot change VG raid while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group raid
[root@virt-008 ~]# vgimport --force raid
  WARNING: Volume groups with missing PVs will be imported with --force.
  Couldn't find device with uuid iNNmlx-yi1u-JGRg-J0if-V9C2-md0f-1IOTsp.
  Volume group "raid" successfully imported


After this, removing the missing PV and/or repairing the volume group was possible.

Marking VERIFIED with: lvm2-2.02.100-6.el6.x86_64
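The follow-up repair mentioned above can be sketched like this. The VG/LV names and the replacement device are illustrative, and the 'lvconvert --repair' step assumes a spare PV is available in the VG to rebuild the degraded raid1 image onto:

```shell
# Illustrative follow-up once the VG has been force-imported
# (names are hypothetical; requires root and real block devices):
vgreduce --removemissing --force raid   # drop the failed PV from the VG
vgextend raid /dev/sdh1                 # add a replacement PV (example device)
lvconvert --repair raid/lv              # rebuild the degraded raid1 LV
```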
Comment 10 errata-xmlrpc 2013-11-21 18:20:41 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html
