Bug 908381 - Cannot vgimport repair exported broken LVM mirror
Summary: Cannot vgimport repair exported broken LVM mirror
Keywords:
Status: CLOSED DUPLICATE of bug 913644
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 908097
Blocks: 835616 960054
 
Reported: 2013-02-06 14:56 UTC by Chris Williams
Modified: 2018-12-03 18:16 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 908097
Environment:
Last Closed: 2013-05-13 19:26:42 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)

Description Chris Williams 2013-02-06 14:56:18 UTC
+++ This bug was initially created as a clone of Bug #908097 +++

Description of problem:

If you have an exported mirrored LVM volume group (VG) that is missing a mirror leg, it is impossible to import the VG. Tested on RHEL 5.8 and 6.3; see the following reproducer:

[root@ll-chsystest03 logs]# dd if=/dev/zero of=loop0 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.583542 s, 1.8 GB/s

[root@ll-chsystest03 logs]# dd if=/dev/zero of=loop1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.593067 s, 1.8 GB/s

[root@ll-chsystest03 logs]# losetup /dev/loop0 /logs/loop0

[root@ll-chsystest03 logs]# losetup /dev/loop1 /logs/loop1

[root@ll-chsystest03 logs]# pvcreate /dev/loop0

[root@ll-chsystest03 logs]# pvcreate /dev/loop1

[root@ll-chsystest03 logs]# vgcreate TRASHME /dev/loop0 /dev/loop1

[root@ll-chsystest03 logs]# lvcreate --contiguous n --alloc normal -m 1 -n trashme -L500m TRASHME /dev/loop0 /dev/loop1
  Logical volume "trashme" created

[root@ll-chsystest03 logs]# lvs
  LV      VG      Attr     LSize   Pool Origin Data%  Move Log          Copy%  Convert
  trashme TRASHME mwn-aom- 500.00m                         trashme_mlog 100.00
  logs    vg0     -wi-ao-- 520.69g
  root    vg0     -wi-ao--  14.00g
  tmp     vg0     -wi-ao--   2.00g
  var     vg0     -wi-ao--   6.00g


[root@ll-chsystest03 logs]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  TRASHME   2   1   0 wz--n-   1.95g 988.00m
  vg0       1   4   0 wz--n- 542.69g      0


[root@ll-chsystest03 logs]# vgchange -an TRASHME
  0 logical volume(s) in volume group "TRASHME" now active

[root@ll-chsystest03 logs]# vgexport TRASHME
  Volume group "TRASHME" successfully exported

[root@ll-chsystest03 logs]# losetup -d /dev/loop1

[root@ll-chsystest03 logs]# vgs
  Couldn't find device with uuid utPEIg-JC7U-4xP0-BXsu-oRl9-IjHS-4oyEGQ.
  VG      #PV #LV #SN Attr   VSize   VFree
  TRASHME   2   1   0 wzxpn-   1.95g 988.00m
  vg0       1   4   0 wz--n- 542.69g      0


[root@ll-chsystest03 logs]# vgimport TRASHME
  Couldn't find device with uuid utPEIg-JC7U-4xP0-BXsu-oRl9-IjHS-4oyEGQ.
  Cannot change VG TRASHME while PVs are missing.
  Consider vgreduce --removemissing.

[root@ll-chsystest03 logs]# vgreduce  TRASHME --removemissing --force
  Couldn't find device with uuid utPEIg-JC7U-4xP0-BXsu-oRl9-IjHS-4oyEGQ.
  Unable to determine mirror sync status of TRASHME/trashme.
  Volume group "TRASHME" is exported
  Failed to lock trashme

The "Failed to lock" problem is where the issue comes in.  "vgreduce" cannot fix the problem because it can't get a lock on an exported diskgroup, and vgimport can't import the diskgroup since its "broken".  Catch-22.



Version-Release number of selected component (if applicable):

RHEL5.8 and RHEL6.3

How reproducible:


See above


Additional info:

Workaround #1

# this will show the UUID of the missing PV
vgdisplay --partial --verbose

cd /etc/lvm/backup

There should be a file there called TRASHME

Get a PV of the same size as the one that was there before; here we are simulating a replacement device by re-attaching the loop device:

losetup /dev/loop1 /logs/loop1

Now restore the PV with the same UUID and metadata from our /etc/lvm/backup/TRASHME file:
pvcreate --restorefile TRASHME --uuid mBR3gH-V90u-yOOk-V4BL-Llg0-UFH1-MfKbib /dev/loop1 -ff

[root@localhost backup]# vgimport TRASHME
  Volume group "TRASHME" successfully imported

Workaround #2

You could hand-edit the backup file of the LVM metadata (generally found in /etc/lvm/backup/<VG NAME>). You would simply need to remove the 'EXPORTED' flag wherever it is found and then run vgcfgrestore.
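
A rough sketch of that hand-edit, assuming the VG name TRASHME and the paths from the reproducer above (adjust for your environment); the sed expressions simply strip the "EXPORTED" token from the flag lists in the backup file:

# keep an untouched copy of the metadata backup before editing it
cp /etc/lvm/backup/TRASHME /etc/lvm/backup/TRASHME.orig

# remove the "EXPORTED" flag wherever it appears in the status lists
sed -i -e 's/"EXPORTED", //g' -e 's/, "EXPORTED"//g' /etc/lvm/backup/TRASHME

# write the edited metadata back to the volume group
vgcfgrestore -f /etc/lvm/backup/TRASHME TRASHME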

Comment 2 Jonathan Earl Brassow 2013-05-13 19:26:42 UTC

*** This bug has been marked as a duplicate of bug 913644 ***

