Bug 393271 - vgcfgrestore can restore wrong vg info
Summary: vgcfgrestore can restore wrong vg info
Keywords:
Status: CLOSED DUPLICATE of bug 406021
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: lvm2-cluster
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-11-20 22:07 UTC by Nate Straz
Modified: 2013-03-01 04:06 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-06-02 15:00:24 UTC
Embargoed:



Description Nate Straz 2007-11-20 22:07:53 UTC
Description of problem:

When a VG configuration is restored with vgcfgrestore from a cluster node other
than the one that created the most recent LV, the restore may bring back
previously existing (since-removed) LVs instead of the recently created one.
This was found in the "recover_corrupt_metadata" scenario from mirror_sanity.

Version-Release number of selected component (if applicable):
lvm2-2.02.27-4.el4
lvm2-cluster-2.02.27-4.el4
cmirror-1.0.1-1
cmirror-kernel-2.6.9-38.5


How reproducible:
100%

Steps to Reproduce:
1. Create and remove some LVs in a VG
2. Select a random node
3. lvcreate -m 1 -n corrupt_meta_mirror -L 1G --nosync mirror_sanity
4. Switch to a different node
5. Corrupt an underlying PV used by the LV corrupt_meta_mirror
   dd if=/dev/zero of=/dev/sda2 count=1000
6. Verify that the VG is now corrupt
   lvs -a -o +devices
7. Deactivate the VG in partial mode
   vgchange -an --partial
8. Recreate the PV using its old UUID
   pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
9. Restore the VG back to its original state
   vgcfgrestore mirror_sanity
10. Check that the LVs were restored correctly
    lvs
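
A consolidated sketch of the sequence run on the second node, using the VG name,
devices, and UUID from this report (these values are specific to this test bed):

   dd if=/dev/zero of=/dev/sda2 count=1000   # wipe the PV label/metadata area
   lvs -a -o +devices                        # VG should now report the missing PV
   vgchange -an --partial                    # deactivate what is left of the VG
   pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
   vgcfgrestore mirror_sanity
   lvs                                       # check which LVs came back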

Actual results:

[root@tank-03 ~]# lvcreate -m 1 -n corrupt_meta_mirror -L 1G --nosync mirror_sanity
  WARNING: New mirror won't be synchronised. Don't read what you didn't write!
  Logical volume "corrupt_meta_mirror" created

## Changing nodes ##

[root@tank-02 ~]# lvs -a -o +devices
  LV                             VG            Attr   LSize  Origin Snap%  Move Log                      Copy%  Devices
  LogVol00                       VolGroup00    -wi-ao 36.22G                                                    /dev/hda2(0)
  LogVol01                       VolGroup00    -wi-ao  1.94G                                                    /dev/hda2(1159)
  corrupt_meta_mirror            mirror_sanity Mwi-a-  1.00G                    corrupt_meta_mirror_mlog 100.00 corrupt_meta_mirror_mimage_0(0),corrupt_meta_mirror_mimage_1(0)
  [corrupt_meta_mirror_mimage_0] mirror_sanity iwi-ao  1.00G                                                    /dev/sda2(0)
  [corrupt_meta_mirror_mimage_1] mirror_sanity iwi-ao  1.00G                                                    /dev/sda1(0)
  [corrupt_meta_mirror_mlog]     mirror_sanity lwi-ao  4.00M                                                    /dev/sdb1(0)
[root@tank-02 ~]# dd if=/dev/zero of=/dev/sda2 count=1000
1000+0 records in
1000+0 records out
[root@tank-02 ~]# lvs -a -o +devices
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Volume group "mirror_sanity" not found
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Devices        
  LogVol00 VolGroup00 -wi-ao 36.22G                               /dev/hda2(0)   
  LogVol01 VolGroup00 -wi-ao  1.94G                               /dev/hda2(1159)
[root@tank-02 ~]# vgchange -an --partial
  Partial mode. Incomplete volume groups will be activated read-only.
  Can't deactivate volume group "VolGroup00" with 2 open logical volume(s)
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  0 logical volume(s) in volume group "mirror_sanity" now active
[root@tank-02 ~]# pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
  Physical volume "/dev/sda2" successfully created
[root@tank-02 ~]# vgcfgrestore mirror_sanity 
  Restored volume group mirror_sanity
[root@tank-02 ~]# lvs -a -o +devices
  LV                           VG            Attr   LSize  Origin Snap%  Move Log                    Copy%  Devices
  LogVol00                     VolGroup00    -wi-ao 36.22G                                                  /dev/hda2(0)
  LogVol01                     VolGroup00    -wi-ao  1.94G                                                  /dev/hda2(1159)
  mirror_up_convert            mirror_sanity Mwi---  1.00G                    mirror_up_convert_mlog        mirror_up_convert_mimage_0(0),mirror_up_convert_mimage_1(0)
  [mirror_up_convert_mimage_0] mirror_sanity iwi---  1.00G                                                  /dev/sda2(0)
  [mirror_up_convert_mimage_1] mirror_sanity iwi---  1.00G                                                  /dev/sda1(0)
  [mirror_up_convert_mlog]     mirror_sanity lwi---  4.00M                                                  /dev/sdb1(0)


Expected results:
vgcfgrestore should restore the current VG metadata (including the newly created
corrupt_meta_mirror LV) regardless of which node it is run from.

Additional info:

Comment 1 Alasdair Kergon 2007-11-20 22:44:05 UTC
All we could do is add more warnings to the command: people should not use it
blindly without first checking *what* they are restoring, e.g. with -ll.
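
A minimal sketch of that pre-restore check, assuming -l/--list gives the backup
summary and the -ll form mentioned above the more detailed view:

  # Inspect the archived metadata for the VG before restoring anything
  vgcfgrestore -l mirror_sanity

  # Restore only after confirming the listed metadata describes the expected LVs
  vgcfgrestore mirror_sanity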

Comment 2 Alasdair Kergon 2007-11-20 22:46:21 UTC
(for future reference, you should have done a vgcfgbackup *before* the restore
to the replacement disk)
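
A minimal sketch of that ordering, with an illustrative file name (the point is
to snapshot the *current* metadata before a restore can overwrite it):

  # Take an explicit backup of the current metadata first
  vgcfgbackup -f /tmp/mirror_sanity.cfg mirror_sanity

  # A later restore can then point at that known-good file explicitly
  vgcfgrestore -f /tmp/mirror_sanity.cfg mirror_sanity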

Comment 3 Nate Straz 2007-11-20 23:02:43 UTC
(In reply to comment #2)
> (for future reference, you should have done a vgcfgbackup *before* the restore
> to the replacement disk)

Will this cause the backup to happen on every node in the cluster?
When does a backup occur as a side effect of an lvm command?


Comment 4 Alasdair Kergon 2007-11-20 23:46:11 UTC
Missing the point: take a backup of the *current* metadata from one of the
remaining PVs, then restore it onto the replacement one.  But didn't we already
add commands to handle this situation for the dmeventd plugin to use, without
needing to fiddle with vgcfgbackup/restore?

Backups happen automatically when any command is run, but only on the node where
that command is run.  Rebooting the cluster involves running such commands, so
should lead to every node taking an up-to-date local backup - would someone like
to check if that's indeed the case?
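
A quick way to check that, assuming the default backup_dir and archive_dir from
lvm.conf (/etc/lvm/backup and /etc/lvm/archive):

  # Run on each cluster node: compare timestamps of the local automatic
  # backup and archive files for the VG
  ls -l /etc/lvm/backup/mirror_sanity /etc/lvm/archive/mirror_sanity_*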

Comment 5 Milan Broz 2008-06-02 15:00:24 UTC
clvmd now runs backups on remote nodes too.

*** This bug has been marked as a duplicate of 406021 ***

