Bug 913644 - vgimport: Unable to import volume groups that have failed PVs
Summary: vgimport: Unable to import volume groups that have failed PVs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Duplicates: 908381
Depends On:
Blocks: 960054 969166
 
Reported: 2013-02-21 17:44 UTC by Jonathan Earl Brassow
Modified: 2018-12-03 18:20 UTC (History)
CC List: 14 users

Fixed In Version: lvm2-2.02.100-1.el6
Doc Type: Bug Fix
Doc Text:
Previously, if a device failed after a volume group had been exported with vgexport, it was impossible to import the volume group again. Because the volume group could not be imported, it also could not be repaired. It is now possible to use the '--force' option with vgimport to import volume groups even when devices are missing.
Clone Of:
Clones: 969166
Environment:
Last Closed: 2013-11-21 23:20:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1704 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2013-11-20 21:52:01 UTC

Description Jonathan Earl Brassow 2013-02-21 17:44:19 UTC
Catch-22:

It is not possible to import a volume group that has a failed PV.  The user is told to use 'vgreduce --removemissing' to make the VG consistent before they are able to import.  However, 'vgreduce --removemissing' is not allowed because the VG is not imported.  :)

The solution is to allow the user to 'force' the import so they can proceed on to the 'vgreduce --removemissing'.
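
In other words, the intended recovery sequence looks like this (a minimal sketch assuming a volume group named 'vg'; the full transcript is in comment 1 below):

  vgimport --force vg                     # import despite the missing PV
  vgreduce --removemissing --force vg     # then drop the missing PV to make the VG consistent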

Comment 1 Jonathan Earl Brassow 2013-02-21 17:51:22 UTC
Unit test showing the problem:

[root@bp-01 lvm2]# vgcreate vg /dev/sd[abcdefg]1
  Volume group "vg" successfully created
[root@bp-01 lvm2]# lvcreate --type raid1 -L 500M -n lv vg
  Logical volume "lv" created
[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices                      
  lv              100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sda1(1)                 
  [lv_rimage_1]          /dev/sdb1(1)                 
  [lv_rmeta_0]           /dev/sda1(0)                 
  [lv_rmeta_1]           /dev/sdb1(0)                 
[root@bp-01 lvm2]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active
[root@bp-01 lvm2]# vgexport vg
  Volume group "vg" successfully exported
[root@bp-01 lvm2]# off.sh sdb
Turning off sdb
[root@bp-01 lvm2]# vgimport vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Cannot change VG vg while PVs are missing.
  Consider vgreduce --removemissing.
[root@bp-01 lvm2]# vgreduce --removemissing --force vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Volume group "vg" is exported
  Failed to suspend vg/lv before committing changes
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sdd1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdf1' has been left open.
  Device '/dev/sdg1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sdc1' has been left open.
  Device '/dev/sde1' has been left open.
  Device '/dev/sda1' has been left open.
  Device '/dev/sdf1' has been left open.


... and the solution when using --force

[root@bp-01 lvm2]# vgimport --force vg
  '--force' supplied.  Volume groups with missing PVs will be imported.
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Volume group "vg" successfully imported
[root@bp-01 lvm2]# vgreduce --removemissing --force vg
  Couldn't find device with uuid xMKsmR-birg-rT8b-4tAI-mjdC-Y6nQ-c8Y0KU.
  Wrote out consistent volume group vg

Comment 2 Jonathan Earl Brassow 2013-02-21 17:52:17 UTC
Fix checked in upstream:

commit 3ab46449f4dbcc45fe878149838a8439f5ac8b34
Author: Jonathan Brassow <jbrassow>
Date:   Wed Feb 20 16:28:26 2013 -0600

    vgimport:  Allow '--force' to import VGs with missing PVs.
    
    When there are missing PVs in a volume group, most operations that alter
    the LVM metadata are disallowed.  It turns out that 'vgimport' is one of
    those disallowed operations.  This is bad because it creates a circular
    dependency.  'vgimport' will complain that the VG is inconsistent and that
    'vgreduce --removemissing' must be run.  However, 'vgreduce' cannot be run
    because it has not been imported.  Therefore, 'vgimport' must be one of
    the operations allowed to change the metadata when PVs are missing.  The
    '--force' option is the way to make 'vgimport' happen in spite of the
    missing PVs.
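
A quick way to tell whether an installed build already carries this change is to compare the package version against the "Fixed In Version" field above (a one-line sketch):

  rpm -q lvm2    # the fix is included in lvm2-2.02.100-1.el6 and later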

Comment 6 Jonathan Earl Brassow 2013-05-13 19:26:42 UTC
*** Bug 908381 has been marked as a duplicate of this bug. ***

Comment 9 Nenad Peric 2013-10-22 12:21:44 UTC
[root@virt-008 ~]# vgimport raid
  Couldn't find device with uuid iNNmlx-yi1u-JGRg-J0if-V9C2-md0f-1IOTsp.
  Cannot change VG raid while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group raid
[root@virt-008 ~]# vgimport --force raid
  WARNING: Volume groups with missing PVs will be imported with --force.
  Couldn't find device with uuid iNNmlx-yi1u-JGRg-J0if-V9C2-md0f-1IOTsp.
  Volume group "raid" successfully imported


After this, removal of the missing PV and/or repair of the volume group was possible.

Marking VERIFIED with: lvm2-2.02.100-6.el6.x86_64
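
For reference, the follow-up mentioned above would look roughly like this (a sketch; 'raid' is the VG from the run above, the LV name is hypothetical, and a replacement PV would have to be added with vgextend before a repair can allocate a new image):

  vgreduce --removemissing --force raid   # drop the missing PV from the now-imported VG
  lvconvert --repair raid/raid_lv         # optionally rebuild the degraded RAID LV onto the replacement PV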

Comment 10 errata-xmlrpc 2013-11-21 23:20:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html
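
The updated packages ship via that advisory; on a registered RHEL 6 system they can be picked up with, for example (sketch):

  yum update lvm2                         # pulls in lvm2-2.02.100-1.el6 or later
  # or, if yum-plugin-security is installed:
  yum update --advisory RHBA-2013:1704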

