Bug 863401 - [lvmetad] lvm1 type metadata: pvs cycling after vgscan --cache, vgs not showing the LVM1 type VG
Summary: [lvmetad] lvm1 type metadata: pvs cycling after vgscan --cache, vgs not showing the LVM1 type VG
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Importance: medium / unspecified
Target Milestone: rc
Assignee: Petr Rockai
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-05 11:33 UTC by Marian Csontos
Modified: 2013-02-21 08:14 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
When LVM1 (legacy) metadata was used together with lvmetad, LVM commands could enter an infinite loop (hang) when invoked. The cause, now fixed, was that "pvscan --cache" failed to read part of the LVM1 metadata.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:14:20 UTC
Target Upstream Version:




Links
Red Hat Product Errata RHBA-2013:0501 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update. Last updated 2013-02-20 21:30:45 UTC

Description Marian Csontos 2012-10-05 11:33:38 UTC
Description of problem:
When creating an lvm1 type PV and VG, pvs enters an infinite loop after vgscan --cache.

Version-Release number of selected component (if applicable):
upstream lvm2

How reproducible:
100%

Steps to Reproduce:

    PV1=/dev/PV1
    VG=vg

    pvcreate --metadatatype 1 $PV1
    vgscan --cache
    pvs # OK
    vgcreate --metadatatype 1 $VG $PV1
    vgs # LVM1 type VG is not listed
    pvs # ERROR
  
Actual results:
vgs does not show $VG
pvs enters an infinite loop

Expected results:
vgs should list $VG
pvs should list PVs

Additional info:

Comment 1 Marian Csontos 2012-10-05 11:37:20 UTC
*Updated reproducer*: after vgcreate run `vgscan --cache`
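
Restated as a full sequence (the same steps as in the description, plus the extra `vgscan --cache` after vgcreate):

    pvcreate --metadatatype 1 $PV1
    vgscan --cache
    pvs                        # OK
    vgcreate --metadatatype 1 $VG $PV1
    vgscan --cache             # the extra step needed to trip the bug
    vgs                        # LVM1 type VG is not listed
    pvs                        # enters the infinite loop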

The repeating part of the `pvs -vvvv` output is the following:

#format1/disk-rep.c:404       Found /dev/vdh1 in VG snapper
#device/dev-io.c:577         Closed /dev/vdh1
#device/dev-cache.c:600       unknown device: stat failed: No such file or directory
#metadata/metadata.c:3626         <backtrace>
#metadata/metadata.c:2771         <backtrace>
#format1/format1.c:317       Reading physical volume data /dev/vdh1 from disk
#device/dev-io.c:524         Opened /dev/vdh1 RO O_DIRECT
#device/dev-io.c:137         /dev/vdh1: block size is 512 bytes  

Looks like this is caused by a missing VG record.

Comment 3 Marian Csontos 2012-10-05 12:01:12 UTC
To be sure: this happens only *with lvmetad running*.
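
For reference, a minimal sketch of how lvmetad gets enabled here (the lvm.conf option and the lvm2-lvmetad init script name are the stock RHEL 6 ones; treat them as assumptions for other environments):

    # /etc/lvm/lvm.conf, global section:
    #     use_lvmetad = 1
    service lvm2-lvmetad start
    pgrep lvmetad    # confirm the daemon is running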

Also: running `lvremove` on any LV in an lvm1 type VG displays errors (which again may be related), and `vgscan --cache` is not the way to sort it out.
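
A quick way to exercise the `lvremove` path mentioned above (a sketch only; the LV name and size are arbitrary):

    lvcreate -n lv1 -L 100m $VG
    lvremove -f $VG/lv1    # prints errors while lvmetad is running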

I am happy with a "release note only" solution: lvmetad + lvm1 is not supported.

But should this be fixed, please verify other operations as well.

Comment 4 Petr Rockai 2012-10-09 08:26:35 UTC
Can't reproduce in tests. I have pushed the following test:

. lib/test

test -e LOCAL_LVMETAD || skip
aux prepare_devs 2
pvcreate --metadatatype 1 $dev1
vgscan --cache
pvs | grep $dev1
vgcreate --metadatatype 1 $vg1 $dev1
vgs | grep $vg1
pvs | grep $dev1

and it passes both on my machine and in hydra. Any further details you could share to help track this down?

Comment 5 Marian Csontos 2012-10-10 12:22:49 UTC
(In reply to comment #4)
> Can't reproduce in tests.

You must run vgscan --cache after vgcreate:

(In reply to comment #1)
> *Updated reproducer*: after vgcreate run `vgscan --cache`

Comment 6 Petr Rockai 2012-10-10 20:00:21 UTC
Oh, I see. Fixed in deea86c7f49ea825608826e29b56a005e2c9e747.
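
To check whether a given lvm2 tree already carries the fix, something like this should do (a sketch; assumes a git checkout of the upstream lvm2 repository):

    # list branches that contain the fix commit
    git branch --contains deea86c7f49ea825608826e29b56a005e2c9e747
    # or inspect the commit itself
    git show deea86c7f49ea825608826e29b56a005e2c9e747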

Comment 7 Petr Rockai 2012-10-12 09:12:05 UTC
The current upstream test case is this (it reliably trips the bug):

. lib/test

test -e LOCAL_LVMETAD || skip
aux prepare_devs 2
pvcreate --metadatatype 1 $dev1
vgscan --cache
pvs | grep $dev1
vgcreate --metadatatype 1 $vg1 $dev1
vgscan --cache
vgs | grep $vg1
pvs | grep $dev1
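
To run just this case against a build tree, the upstream test suite is the usual route; the test file name and the T= filter below are assumptions based on the test suite layout of that time:

    # assuming the case is saved as test/shell/lvmetad-lvm1.sh in the lvm2 tree
    make check T=lvmetad-lvm1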

Comment 10 Corey Marthaler 2013-01-24 23:25:33 UTC
Fix verified in the latest rpms.

2.6.32-354.el6.x86_64
lvm2-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013

[root@qalvm-01 ~]# ps -ef | grep lvmetad
root      3771     1  0 17:09 ?        00:00:00 lvmetad
root      6572  1926  0 17:22 pts/0    00:00:00 grep lvmetad

[root@qalvm-01 ~]# pvcreate --metadatatype 1 /dev/vdb1
  Physical volume "/dev/vdb1" successfully created
[root@qalvm-01 ~]# vgscan --cache
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_qalvm01" using metadata type lvm2
[root@qalvm-01 ~]# pvs | grep /dev/vdb1
  /dev/vdb1             lvm1 a--  10.00g 10.00g
[root@qalvm-01 ~]# vgcreate --metadatatype 1 VG /dev/vdb1
  Volume group "VG" successfully created
[root@qalvm-01 ~]# vgscan --cache
  Reading all physical volumes.  This may take a while...
  Found volume group "VG" using metadata type lvm1
  Found volume group "vg_qalvm01" using metadata type lvm2
[root@qalvm-01 ~]# vgs | grep VG
  VG         #PV #LV #SN Attr   VSize  VFree
  VG           1   0   0 wz--n-  9.99g 9.99g
[root@qalvm-01 ~]# pvs | grep /dev/vdb1
  /dev/vdb1  VG         lvm1 a--   9.99g 9.99g

Comment 11 errata-xmlrpc 2013-02-21 08:14:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

