Bug 186395 - lvm2 won't read lvm1 metadata
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: lvm2
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Alasdair Kergon
Reported: 2006-03-23 06:26 EST by Paul Howarth
Modified: 2007-11-30 17:11 EST

Fixed In Version: 2.02.24
Doc Type: Bug Fix
Last Closed: 2007-03-19 18:08:15 EDT

Attachments: None
Description Paul Howarth 2006-03-23 06:26:51 EST
Description of problem:
Upgraded box FC4->FC5. The box has been through several upgrade cycles in the
past, and has an LVM1 VG called "fsdata", consisting of two PVs, on hda13 and hdb11:

[root@laurel ~]# pvdisplay -v /dev/hda13
    Using physical volume(s) on command line
  --- Physical volume ---
  PV Name               /dev/hda13
  VG Name               fsdata
  PV Size               37.46 GB / not usable 6.29 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              9588
  Free PE               9588
  Allocated PE          0
  PV UUID               lLiDnm-QGLI-QD2p-mBo6-4yr3-Rg7b-58zMwm

[root@laurel ~]# pvdisplay -v /dev/hdb11
    Using physical volume(s) on command line
  --- Physical volume ---
  PV Name               /dev/hdb11
  VG Name               fsdata
  PV Size               37.46 GB / not usable 6.29 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              9588
  Free PE               9588
  Allocated PE          0
  PV UUID               Sm43Gy-Rv27-RNg7-jqRD-RPkb-ipqF-uYHwzy

Now I should have suspected something was wrong when the installer didn't detect
the volume group, but I was keen to upgrade so I commented the entry out of
fstab, reasoning that I would sort out the problem post-upgrade.

Post-upgrade, vgscan complains as follows:

# vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
    Finding volume group "fsdata"
  LV fsdata: inconsistent LE count 5120 != 10240
  Internal error: LV segments corrupted in fsdata.
  Volume group "fsdata" not found

I get similar errors from most LVM commands other than pvdisplay.
For instance:

# pvscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  LV fsdata: inconsistent LE count 5120 != 10240
  Internal error: LV segments corrupted in fsdata.
  No matching physical volumes found

I then tried copying /sbin/lvm.static from an up-to-date FC4 installation, and
its vgscan had no problems:

# ./lvm.static vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
    Finding volume group "fsdata"
  Found volume group "fsdata" using metadata type lvm1

I expect I'll be able to convert the volume to lvm2 metadata using the FC4
lvm.static, but I'll hold off for a while in case there's some diagnostic
information I can provide with the metadata as it is.
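
In case it's useful to note, the conversion I have in mind is roughly the
following, run with the old lvm.static (untested as yet, and I'd deactivate
the LV first since I believe vgconvert wants the VG inactive):

# ./lvm.static vgchange -an fsdata
# ./lvm.static vgcfgbackup fsdata
# ./lvm.static vgconvert -M2 fsdata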

Version-Release number of selected component (if applicable):
lvm2-2.02.01-1.2.1
Comment 1 Alasdair Kergon 2006-03-24 12:00:13 EST
Puzzled about the versioning here:

Are you sure 2.02.01-1.2.1 is the one that's not working?

Add --version to check, e.g. 'pvscan --version'.

There was a bug like this but it should have been fixed a while ago.
Comment 2 Paul Howarth 2006-03-24 12:10:36 EST
[root@laurel ~]# pvscan --version
  LVM version:     2.02.01 (2005-11-23)
  Library version: 1.02.02 (2005-12-02)
  Driver version:  4.5.0
[root@laurel ~]# pvscan
  LV fsdata: inconsistent LE count 5120 != 10240
  Internal error: LV segments corrupted in fsdata.
  No matching physical volumes found
Comment 3 Alasdair Kergon 2006-03-24 12:31:10 EST
Please email your metadata to me (the first 300k from each of the two PVs might
be enough, compressed) and I'll take a look.  (agk@redhat.com)
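
Something along these lines should be enough to capture it (device names as
on your box; the output filenames are just examples):

# dd if=/dev/hda13 bs=1k count=300 | gzip > hda13-meta.gz
# dd if=/dev/hdb11 bs=1k count=300 | gzip > hdb11-meta.gz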
Comment 4 Paul Howarth 2006-03-24 12:43:38 EST
(In reply to comment #3)
> Please email your metadata to me (the first 300k from each of the two PVs might
> be enough, compressed) and I'll take a look.  (agk@redhat.com)

Data's in the post :-)

Comment 5 Alasdair Kergon 2006-03-24 15:18:32 EST
Well the size of that LV is 10240 extents in the on-disk metadata.
But adding up the extents, the new code thinks there should only be 5120 extents
in it.  Extents look to be 4MB, so is the LV indeed 20GB not 40?
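
(Spelling out the arithmetic: the on-disk size field gives 10240 LEs x 4 MB
= 40 GB, but the segments only sum to 5120 LEs x 4 MB = 20 GB, hence the
"inconsistent LE count 5120 != 10240" message.)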

Assuming that's the case, take a metadata backup using the old software
(lvm.static vgcfgbackup - or you might already have one up-to-date in
/etc/lvm/backup) and use the new version of vgcfgrestore to restore metadata
from that backup. (vgcfgrestore -M1 -f <file> fsdata).
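
That is, roughly (untested here, so treat it as a sketch; vgcfgbackup writes
to /etc/lvm/backup/fsdata by default):

# ./lvm.static vgcfgbackup fsdata
# vgcfgrestore -M1 -f /etc/lvm/backup/fsdata fsdata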
Comment 6 Paul Howarth 2006-03-24 18:30:51 EST
(In reply to comment #5)
> Well the size of that LV is 10240 extents in the on-disk metadata.
> But adding up the extents, the new code thinks there should only be 5120 extents
> in it.  Extents look to be 4MB, so is the LV indeed 20GB not 40?

No, it's 40GB:

[root@laurel ~]# ./lvm.static vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "fsdata" using metadata type lvm1
[root@laurel ~]# ./lvm.static vgchange -ay
  1 logical volume(s) in volume group "fsdata" now active
[root@laurel ~]# ./lvm.static vgdisplay -v fsdata
    Using volume group(s) on command line
    Finding volume group "fsdata"
  --- Volume group ---
  VG Name               fsdata
  System ID             laurel.intra.city-fan.org1066890344
  Format                lvm1
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               0
  Max PV                256
  Cur PV                2
  Act PV                2
  VG Size               74.91 GB
  PE Size               4.00 MB
  Total PE              19176
  Alloc PE / Size       10240 / 40.00 GB
  Free  PE / Size       8936 / 34.91 GB
  VG UUID               PjLb3o-zRJ8-69RM-bfUf-rVzc-4QLl-Y61dEK

  --- Logical volume ---
  LV Name                /dev/fsdata/fsdata
  VG Name                fsdata
  LV UUID                000000-0000-0000-0000-0000-0000-000000
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                40.00 GB
  Current LE             10240
  Segments               1
  Allocation             normal
  Read ahead sectors     10000
  Block device           253:0

  --- Physical volumes ---
  PV Name               /dev/hda13
  PV UUID               lLiDnm-QGLI-QD2p-mBo6-4yr3-Rg7b-58zMwm
  PV Status             allocatable
  Total PE / Free PE    9588 / 4468

  PV Name               /dev/hdb11
  PV UUID               Sm43Gy-Rv27-RNg7-jqRD-RPkb-ipqF-uYHwzy
  PV Status             allocatable
  Total PE / Free PE    9588 / 4468

[root@laurel ~]# mount /dev/fsdata/fsdata /usr/share/fsdata
[root@laurel ~]# df /usr/share/fsdata
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/fsdata-fsdata
                      41284928  34721256   4466520  89% /usr/share/fsdata

> Assuming that's the case, take a metadata backup using the old software
> (lvm.static vgcfgbackup - or you might already have one up-to-date in
> /etc/lvm/backup) and use the new version of vgcfgrestore to restore metadata
> from that backup. (vgcfgrestore -M1 -f <file> fsdata).

Couldn't I use the old version to do a vgconvert to lvm2 metadata?
Or would it probably still be broken?

I can let you have a copy of the backup if it's any use.
Comment 7 Paul Howarth 2006-07-11 14:12:07 EDT
(In reply to comment #5)
> Well the size of that LV is 10240 extents in the on-disk metadata.
> But adding up the extents, the new code thinks there should only be 5120 extents
> in it.  Extents look to be 4MB, so is the LV indeed 20GB not 40?
> 
> Assuming that's the case, take a metadata backup using the old software
> (lvm.static vgcfgbackup - or you might already have one up-to-date in
> /etc/lvm/backup) and use the new version of vgcfgrestore to restore metadata
> from that backup. (vgcfgrestore -M1 -f <file> fsdata).

Doesn't seem to help:

# ./lvm.static vgdisplay
  --- Volume group ---
  VG Name               fsdata
  System ID             laurel.intra.city-fan.org1066890344
  Format                lvm1
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               0
  Max PV                256
  Cur PV                2
  Act PV                2
  VG Size               74.91 GB
  PE Size               4.00 MB
  Total PE              19176
  Alloc PE / Size       12800 / 50.00 GB
  Free  PE / Size       6376 / 24.91 GB
  VG UUID               PjLb3o-zRJ8-69RM-bfUf-rVzc-4QLl-Y61dEK

# ./lvm.static vgcfgbackup
  Volume group "fsdata" successfully backed up.
# ls -l /etc/lvm/backup/
total 8
-rw------- 1 root root 1379 Jul 11 19:17 fsdata
# vgcfgrestore -M1 -f /etc/lvm/backup/fsdata fsdata
  Restored volume group fsdata
# vgdisplay
  LV fsdata: inconsistent LE count 6400 != 12800
  Internal error: LV segments corrupted in fsdata.
  Volume group "fsdata" doesn't exist
Comment 8 Alasdair Kergon 2007-03-19 18:08:15 EDT
A related fix was applied in 2.02.24, which I hope fixes this.
Built for FC7.  (Will release update for fc5 & fc6 in due course.)
