Bug 393271 - vgcfgrestore can restore wrong vg info
Status: CLOSED DUPLICATE of bug 406021
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: lvm2-cluster
Version: 4
Hardware/OS: All Linux
Priority: medium  Severity: medium
Assigned To: Milan Broz
QA Contact: Cluster QE
Reported: 2007-11-20 17:07 EST by Nate Straz
Modified: 2013-02-28 23:06 EST
CC: 7 users

Doc Type: Bug Fix
Last Closed: 2008-06-02 11:00:24 EDT
Description Nate Straz 2007-11-20 17:07:53 EST
Description of problem:

When a VG configuration is restored with vgcfgrestore from a node in a cluster
other than the one that created the most recent LV, previously existing LVs may
be restored instead of the recently created one.  This was found in the
"recover_corrupt_metadata" scenario from mirror_sanity.

Version-Release number of selected component (if applicable):
lvm2-2.02.27-4.el4
lvm2-cluster-2.02.27-4.el4
cmirror-1.0.1-1
cmirror-kernel-2.6.9-38.5


How reproducible:
100%

Steps to Reproduce:
1. Create and remove some LVs in a VG
2. Select a random node
3. lvcreate -m 1 -n corrupt_meta_mirror -L 1G --nosync mirror_sanity
4. Switch to a different node
5. Corrupt an underlying PV used by the LV corrupt_meta_mirror
   dd if=/dev/zero of=/dev/sda2 count=1000
6. Verify that the VG is now seen as corrupt
   lvs -a -o +devices
7. Deactivate the VG in partial mode
   vgchange -an --partial
8. Recreate the PV using its old UUID
   pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
9. Restore the VG to its original state
   vgcfgrestore mirror_sanity
10. Check that the LVs were restored correctly
    lvs
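For reference, the steps above can be sketched as one command sequence (the VG name mirror_sanity, the device names, and the PV UUID are the ones from this report; adjust for your own cluster, and note that the dd step is deliberately destructive):

```shell
# On node A: create a fresh --nosync mirror LV in the shared VG.
lvcreate -m 1 -n corrupt_meta_mirror -L 1G --nosync mirror_sanity

# On a different node B: wipe the start of one mirror leg's PV,
# destroying its LVM label and metadata area.
dd if=/dev/zero of=/dev/sda2 count=1000

# Confirm the VG can no longer be assembled (missing-UUID errors).
lvs -a -o +devices

# Deactivate the incomplete VG in partial mode.
vgchange -an --partial

# Recreate the PV with its old UUID, then restore the VG metadata
# from node B's local backup -- which is where the bug appears if
# node B's backup predates the lvcreate done on node A.
pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
vgcfgrestore mirror_sanity

# Check which LVs actually came back.
lvs
```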

Actual results:

[root@tank-03 ~]# lvcreate -m 1 -n corrupt_meta_mirror -L 1G --nosync mirror_sanity
  WARNING: New mirror won't be synchronised. Don't read what you didn't write!
  Logical volume "corrupt_meta_mirror" created

## Changing nodes ##

[root@tank-02 ~]# lvs -a -o +devices
  LV                             VG            Attr   LSize  Origin Snap%  Move Log                      Copy%  Devices
  LogVol00                       VolGroup00    -wi-ao 36.22G                                                    /dev/hda2(0)
  LogVol01                       VolGroup00    -wi-ao  1.94G                                                    /dev/hda2(1159)
  corrupt_meta_mirror            mirror_sanity Mwi-a-  1.00G                    corrupt_meta_mirror_mlog 100.00 corrupt_meta_mirror_mimage_0(0),corrupt_meta_mirror_mimage_1(0)
  [corrupt_meta_mirror_mimage_0] mirror_sanity iwi-ao  1.00G                                                    /dev/sda2(0)
  [corrupt_meta_mirror_mimage_1] mirror_sanity iwi-ao  1.00G                                                    /dev/sda1(0)
  [corrupt_meta_mirror_mlog]     mirror_sanity lwi-ao  4.00M                                                    /dev/sdb1(0)
[root@tank-02 ~]# dd if=/dev/zero of=/dev/sda2 count=1000
1000+0 records in
1000+0 records out
[root@tank-02 ~]# lvs -a -o +devices
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find all physical volumes for volume group mirror_sanity.
  Volume group "mirror_sanity" not found
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Devices        
  LogVol00 VolGroup00 -wi-ao 36.22G                               /dev/hda2(0)   
  LogVol01 VolGroup00 -wi-ao  1.94G                               /dev/hda2(1159)
[root@tank-02 ~]# vgchange -an --partial
  Partial mode. Incomplete volume groups will be activated read-only.
  Can't deactivate volume group "VolGroup00" with 2 open logical volume(s)
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  Couldn't find device with uuid 'A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu'.
  0 logical volume(s) in volume group "mirror_sanity" now active
[root@tank-02 ~]# pvcreate --uuid A1HDfB-nS3s-2cdO-Yj6I-YaCe-KZls-PxdTiu /dev/sda2
  Physical volume "/dev/sda2" successfully created
[root@tank-02 ~]# vgcfgrestore mirror_sanity 
  Restored volume group mirror_sanity
[root@tank-02 ~]# lvs -a -o +devices
  LV                           VG            Attr   LSize  Origin Snap%  Move Log                    Copy%  Devices
  LogVol00                     VolGroup00    -wi-ao 36.22G                                                  /dev/hda2(0)
  LogVol01                     VolGroup00    -wi-ao  1.94G                                                  /dev/hda2(1159)
  mirror_up_convert            mirror_sanity Mwi---  1.00G                    mirror_up_convert_mlog        mirror_up_convert_mimage_0(0),mirror_up_convert_mimage_1(0)
  [mirror_up_convert_mimage_0] mirror_sanity iwi---  1.00G                                                  /dev/sda2(0)
  [mirror_up_convert_mimage_1] mirror_sanity iwi---  1.00G                                                  /dev/sda1(0)
  [mirror_up_convert_mlog]     mirror_sanity lwi---  4.00M                                                  /dev/sdb1(0)


Expected results:
vgcfgrestore should work correctly from any node.

Additional info:
Comment 1 Alasdair Kergon 2007-11-20 17:44:05 EST
All we could do is add more warnings to the command: people should not use it
blindly without first checking *what* they are restoring, e.g. with vgcfgrestore -ll.
Comment 2 Alasdair Kergon 2007-11-20 17:46:21 EST
(for future reference, you should have done a vgcfgbackup *before* the restore
to the replacement disk)
Comment 3 Nate Straz 2007-11-20 18:02:43 EST
(In reply to comment #2)
> (for future reference, you should have done a vgcfgbackup *before* the restore
> to the replacement disk)

Will this cause the backup to happen on every node in the cluster?
When does a backup occur as a side effect of an lvm command?
Comment 4 Alasdair Kergon 2007-11-20 18:46:11 EST
That misses the point: take a backup of the *current* metadata from one of the
remaining PVs, then restore it onto the replacement one.  But didn't we already
add commands to handle this situation for the dmeventd plugin to use, without
needing to fiddle with vgcfgbackup/vgcfgrestore?

Backups happen automatically when any command is run, but only on the node where
that command is run.  Rebooting the cluster involves running such commands, so
it should lead to every node taking an up-to-date local backup - would someone
like to check whether that's indeed the case?
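The workflow comment 4 recommends can be sketched as follows (a minimal sketch: the /tmp path is illustrative, and while -ll, -f for vgcfgbackup, and -f for vgcfgrestore are documented LVM2 options, verify them against your version's man pages before relying on this):

```shell
# 1. List the available backups first, so you know *what* you would
#    be restoring (comment 1's advice).
vgcfgrestore -ll mirror_sanity

# 2. Take a fresh backup of the *current* metadata, read from the
#    surviving PVs, before touching the replacement disk.
vgcfgbackup -f /tmp/mirror_sanity.current mirror_sanity

# 3. Restore explicitly from that known-good file, rather than from
#    whatever this node's last automatic backup happens to contain.
vgcfgrestore -f /tmp/mirror_sanity.current mirror_sanity
```

Restoring from an explicit -f file sidesteps the bug reported here, since the stale per-node automatic backup is never consulted.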
Comment 5 Milan Broz 2008-06-02 11:00:24 EDT
clvmd now runs backups on remote nodes too.

*** This bug has been marked as a duplicate of 406021 ***
